Reverse Engineering the Lytro .LFP File Format

Lytro Microlens Array

After getting my Lytro camera yesterday, I set about answering the questions I had about the light field capture format from the last time around.  Lytro may be focusing (pun absolutely intended) on the Facebook-using crowd with their camera and software, but their file format suggests they don’t mind nerds like us poking around.  The file structure is the same as what they use for their compressed web display .lfp files, complete with a plain text table of contents, so I was able to re-use the lfpsplitter tool I wrote earlier with some minor modifications.  The README with the tool describes in detail the format of the file and how to parse it.

The table of contents in the raw .lfp files gives away most of the camera’s secrets.  It contains a bunch of useful metadata and calibration data like the focal length, sensor temperature, exposure length, and zoom length.  It also gives away the fact that the camera contains a 3-axis accelerometer, storing the orientation of the camera with respect to gravity in each image.  The physical sensor is 3280 by 3280 pixels, and the raw file just contains a BGGR Bayer array of it at 12 bits per pixel.  Saving the array and converting it to tif using the raw2tiff command below shows that each microlens is about 10 pixels in diameter, with some vignetting on the edges.

raw2tiff -w 3280 -l 3280 -d short IMG_0004_imageRef0.raw output.tif
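
For reference, here is a rough sketch in C of the kind of 12-bit to 16-bit unpacking needed before raw2tiff will accept the data.  Whether lfpsplitter already does this step for you, and the exact byte layout (I am guessing two pixels per three bytes, with the two low nibbles sharing the third byte), are assumptions to check against the README.

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical 12-bit to 16-bit unpacker.  Assumes two 12-bit pixels are
 * packed into every 3 bytes: byte 0 holds the high 8 bits of pixel 0,
 * byte 1 the high 8 bits of pixel 1, and byte 2 the two low nibbles.
 * That layout is a guess, not something confirmed from the camera.
 */
int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s packed.raw unpacked.raw\n", argv[0]);
        return 1;
    }

    FILE *in  = fopen(argv[1], "rb");
    FILE *out = fopen(argv[2], "wb");
    if (!in || !out) {
        perror("fopen");
        return 1;
    }

    uint8_t buf[3];
    while (fread(buf, 1, 3, in) == 3) {
        uint16_t p0 = (buf[0] << 4) | (buf[2] >> 4);    /* first pixel  */
        uint16_t p1 = (buf[1] << 4) | (buf[2] & 0x0f);  /* second pixel */
        fwrite(&p0, sizeof p0, 1, out);
        fwrite(&p1, sizeof p1, 1, out);
    }

    fclose(in);
    fclose(out);
    return 0;
}

The unpacked output can then be converted with the raw2tiff command above.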

Syncing the camera to Lytro’s desktop software backs it up the first time.  Amazingly, the backup file uses the same structure as both .lfp file types.  The file contains a huge amount of factory calibration data, like an array of hot or stuck pixels and color calibration under different lighting conditions.  Incredibly, it also lets slip that there is functioning Wi-Fi on board the camera, with files named “C:\\CALIB\\WIFI_PING_RESULT.TXT” and “C:\\CALIB\\WIFI_MAC_ADDR.TXT”, which matches what the FCC teardowns show.  There is no mention of Bluetooth support though, despite the chipset supporting it.  In any case, it seems there is a lot of cool stuff coming via firmware updates.

Hopefully one of those updates enables a USB Mass Storage mode, as there does not appear to be any way to get files off of the camera in Linux. I had to borrow my roommate’s MacBook Air for this escapade. The camera shows up as a SCSI CD drive, but mounting /dev/sr0 only shows a placeholder message intended for Windows users.

Thank you for purchasing your Lytro camera.  Unfortunately, we do not have a
Windows version of our desktop application at this time.  Please check out
http://support.lytro.com for the latest info on Windows support.

It was pretty trivial to write lfpsplitter to get the raw data shown above, but doing anything useful with it will take more effort.  Normally simple stuff like demosaicing the Bayer array will likely be complicated by the need to avoid the gaps between microlenses and not distort the ray direction information.  Getting high quality results will probably also require applying the calibration information from the camera backups.  A first party light field editing library would be wonderful, but Lytro probably has other priorities.

You can grab my lfpsplitter tool from GitHub at git://github.com/nrpatel/lfptools.git and I uploaded an example .lfp you can use with it if you want to play with light field captures without the $400 hardware commitment.

79 Responses to “Reverse Engineering the Lytro .LFP File Format”


  • This is awesome, man! I was wondering when someone would do this. May I port this to C# and give you credit? I want to write some sort of GUI for managing it because I really don’t want to break out my friend’s Mac Mini to manage my pictures.

  • Good job!

    Looking at the tiff, the source format seems to capture groups of about 10×10 pixels for each “pixel”. I also assume that each of the groups captures the light from different directions as it came through the lens, so the leftmost pixels in the group are the light that entered from the left (or right, if they are mirrored).

    Shouldn’t be too hard to filter out and create 5 raw images with light coming from top, left, middle, right and bottom to further investigate.

    Fun stuff!

    • Indeed. Ren Ng’s thesis is full of descriptions of the necessary algorithms to do all sorts of cool stuff. The difficulty is largely in alignment. While the 10px circle is a reasonable approximation locally, I don’t think it extends all the way across the sensor. That is, the 200th light field pixel from the left probably doesn’t start at the 2000th sensor pixel. It probably starts at 1999.5 or 2002.1 or so, which is why there is information like this included within every .lfp file:

      		"mla" : {
      			"tiling" : "hexUniformRowMajor",
      			"lensPitch" : 1.3898614883422850808e-05,
      			"rotation" : -0.0026990664191544055939,
      			"defectArray" : [],
      			"scaleFactor" : {
      				"x" : 1,
      				"y" : 1.0004690885543823242
      			},
      			"sensorOffset" : {
      				"x" : -2.077848196029663261e-06,
      				"y" : -1.1220961570739747699e-05,
      				"z" : 2.5000000000000001198e-05
      			}
      		}

      I haven’t had time yet to determine exactly how it applies.

      • Got a brief chance to look into this data. Rotation is of the whole array, in radians. For my camera, it means that after rotating, I have about a 4 pixel border on all sides of data that has to be cropped or filled in with fake data.

        I’m not sure what sensorOffset or lensPitch are.

      • Lens pitch is a common way of defining the diameter of a micro-lens in a lenslet array. In this case it seems to be roughly 13.9 um? The lens pitch of a light field camera is the main limiting factor in spatial resolution, i.e. a lenslet is analogous to a pixel in a standard camera. The pixels below each lenslet record the angular information but do not really give any extra resolution. If we know the sensor size then we could work out the actual resolution of each image in terms of spatial resolution and not in ray space.

        The focal length would also be useful to know for processing the raw file; did you see any info on that?

      • Makes sense! Presumably the sensorOffset values are in um as well then, and represent the offset to get the top left microlens to some expected position.

        There is indeed focal length information:

        		"lens" : {
        			"infinityLambda" : 13.48490142822265625,
        			"focalLength" : 0.0064499998092651363371,
        			"zoomStep" : 981,
        			"focusStep" : 630,
        			"fNumber" : 1.9099999666213989258,
        			"temperature" : 26.646636962890625,
        			"temperatureAdc" : 2826,
        			"zoomStepperOffset" : 3,
        			"focusStepperOffset" : 1,
        			"exitPupilOffset" : {
        				"z" : 0.49113632202148432837
        			}
        		},
      • I somehow missed it hiding in plain sight, but the json metadata in the file also states the pixel pitch: “pixelPitch” : 1.3999999761581419596e-06

        This puts the microlens width at 9.92758223 pixels, and it makes the useful sensor area 330.39263 microlenses wide.
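
        For anyone double-checking, the arithmetic is just lensPitch / pixelPitch, and the same constants give a first stab at mapping microlens grid indices to pixel coordinates.  The hex-grid conventions in the sketch below (odd rows shifted by half a pitch, rows spaced lensPitch * sqrt(3)/2 apart, scaleFactor and sensorOffset.z ignored) are guesses on my part; only the constants come from the metadata.

        #include <math.h>
        #include <stdio.h>

        #define PIXEL_PITCH 1.3999999761581419596e-06  /* m, "pixelPitch"     */
        #define LENS_PITCH  1.3898614883422850808e-05  /* m, "mla.lensPitch"  */
        #define ROTATION   -0.0026990664191544055939   /* rad, "mla.rotation" */
        #define OFFSET_X   -2.077848196029663261e-06   /* m, sensorOffset.x   */
        #define OFFSET_Y   -1.1220961570739747699e-05  /* m, sensorOffset.y   */
        #define SENSOR_PX   3280

        /* Guessed lens-centre mapping: "hexUniformRowMajor" is read here as odd
           rows shifted by half a pitch and rows spaced LENS_PITCH*sqrt(3)/2
           apart, with the whole grid rotated and then shifted by sensorOffset. */
        static void lens_center_px(int row, int col, double *px, double *py)
        {
            double x = (col + 0.5 * (row & 1)) * LENS_PITCH;
            double y = row * LENS_PITCH * sqrt(3.0) / 2.0;

            *px = (x * cos(ROTATION) - y * sin(ROTATION) + OFFSET_X) / PIXEL_PITCH;
            *py = (x * sin(ROTATION) + y * cos(ROTATION) + OFFSET_Y) / PIXEL_PITCH;
        }

        int main(void)
        {
            double diam = LENS_PITCH / PIXEL_PITCH;
            printf("microlens diameter: %f px\n", diam);               /* ~9.93  */
            printf("sensor width: %f microlenses\n", SENSOR_PX / diam); /* ~330.4 */

            double px, py;
            lens_center_px(10, 10, &px, &py);
            printf("lens (10,10) centre: %.2f, %.2f px from the grid origin\n", px, py);
            return 0;
        }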

  • I started with your lfpsplitter Friday and hacked up my own raw parser; glad to see you got in on the first shipment, too! Did you notice any glaring defects in your calibration images? Mine has a couple of rather nasty-seeming smudges, and one place where two microlenses seem to be dislocated by 3 or 4 pixels each.

  • Once again, excellent work!

    I used your EXCELLENT reference as a basis for porting this to C#.

    The project can be found here: https://github.com/mscappini/Lytro.Net

  • Nice work! Just curious if you can do focal stacking with the Lytro? I would like to segment out information only at one depth.

    • Sure. The easy way is to split the web display .lfp into its component .jpg files and use focus stacking software like the Enfuse tool that comes with Hugin.

      Ren Ng’s thesis also describes two ways of doing it. The first is to sample a “sub-aperture” image from the raw sensor array, which is an image made up of one pixel per microlens at the same offset. The problem there is that you end up throwing away close to 99% of the photons, so the result is noisy. The second way is what Enfuse does, which is to use graph cuts to grab the maximum-contrast regions across all of the images and blend them together into a final image.
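
      For the curious, a very rough sub-aperture sketch looks something like the following.  It ignores the hexagonal tiling, the rotation, and the fractional ~9.93 px pitch discussed above, and it pretends the Bayer data has already been demosaiced, so treat it as an illustration of the idea rather than something tuned for this camera.

      #include <stdint.h>

      #define SENSOR  3280   /* sensor width/height in pixels            */
      #define LENS_PX 10     /* approximate microlens diameter in pixels */

      /* Build one sub-aperture view by taking the pixel at offset (u, v)
         under every microlens.  raw holds SENSOR*SENSOR samples, out holds
         (SENSOR/LENS_PX)^2 samples; u and v must be in [0, LENS_PX). */
      void subaperture(const uint16_t *raw, uint16_t *out, int u, int v)
      {
          int lenses = SENSOR / LENS_PX;

          for (int ly = 0; ly < lenses; ly++)
              for (int lx = 0; lx < lenses; lx++)
                  out[ly * lenses + lx] =
                      raw[(ly * LENS_PX + v) * SENSOR + lx * LENS_PX + u];
      }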

      • Thanks! What I really want to do is to collect only photons originating from a specific depth plane (a certain distance away from the camera) and ignore all other photons not originating from this depth plane. Method 1 seems suitable for accomplishing this, whereas method 2 searches for in-focus material based on contrast. What do you think…?

      • That would require a level of magic that isn’t available from a regular CMOS sensor behind a microlens array. The best you can do is generate an image of a specific focal depth and then graph cut out the sharpest regions in it. You could check against images at other focal depths too to ensure that the regions you are grabbing are at their maximum sharpness at the depth you have chosen.

        This is basically what Enfuse does, but without the final step of blending the sharpest regions together into a full image.

  • If you replace the image sensor with an LED projector, will you see a 3-D image? In other words, could this same technology be used in reverse? It seems that this camera captures light rays from many different directions. It seems that using this in reverse could re-create the light rays and the original 3-D object (although likely a mirror image).

  • Where can I get more camera raws? I’m trying to figure out how to do the alignment… Has anybody tried it?

    • After converting to tif I can’t view it with a tiff viewer. Am I doing something wrong?

      tiffinfo:
      TIFF Directory at offset 0x14adbfe
      Image Width: 3280 Image Length: 3280
      Bits/Sample: 16
      Sample Format: unsigned integer
      Compression Scheme: PackBits
      Photometric Interpretation: min-is-black
      FillOrder: lsb-to-msb
      Orientation: row 0 top, col 0 lhs
      Samples/Pixel: 1
      Rows/Strip: 1
      Planar Configuration: single image plane

  • You can just open the raw file in MATLAB or C. Assume that it has 16 bits per channel and a “bggr” Bayer pattern.

  • I have had a running discussion with Lytro on the Mass Storage issue under Windows… here is a quote from them.

    ….”nor can we waive the ban on reverse engineering of any kind within our EULA and Terms of Use. Therefore the light field data cannot currently be removed from the Lytro camera except by the Lytro Desktop software.”

    Seems that it may be illegal to look at your own data if it is processed through Lytro’s hardware and software… Pretty funny, huh? I am surprised that they didn’t encrypt the files just to make it harder.

  • I have just started using my Lytro, and though it works well, I was disappointed with the poor “export to jpg” function. I hope this is the start of something better. Thanks.

  • WOW…I just used the jpg extractor and it works!
    Better than the software supplied by Lytro. I can now at least get a small but printable image.

    Maybe a job at Lytro is waiting for the clever genius here!

  • OK, not that I care, but I did read the EULA, then wrote to Lytro for clarification. It does *not* violate the EULA in any way to extract JPGs from the LFP files.

    When I extract JPGs from the “stk” files there are four JPGs of decent quality. Is there an “easy” way to extract more JPGs at different focal points? I assume there is a lot more info in the non-stk files, which I assume to be RAW.

  • Thank You!

    I successfully compiled lfpsplitter (using the GCC-10.7-v2.pkg) on my MacBook Pro, and it generated the three .json files and the .raw file, but no others (component jpgs).

    Can someone please advise?

    Thank You.

    • If it was the ~16MB .lfp file, that is the expected result. That file contains one raw image and some metadata. The smaller ~1-2MB .lfp files that the desktop app creates contain .jpg files.

      The eventual goal is to make the tool generate .jpg files from the raw image, but that requires much more work.

      • I see. Managed to get the .jpg files from an .lfp; thanks.

        If I understand correctly (also see http://nirmalpatel.com/hacks/lytro.html), by closely matching the focal value of each jpg (from the .json file) to a value in depth.txt, you can pinpoint the area to a 54x54px square where the corresponding .jpg is at best focus. Since depth.txt is a flat 2d array of the 20×20 values, the matching value’s line number (in depth.txt) divides out into a row and column, so you can calculate the pixel coordinates of the focus area as it relates to the 1080×1080 image (see the sketch at the end of this comment). This would need to be the clickable region to call that jpg, etc. This should make it possible to build a Lytro-server-independent html5 viewer.

        This one kinda works (with a rudimentary 4×4 clickable grid, instead), just as an html5 proof of concept, at least in safari and chrome. http://panoramablog.com/lytro/keyboard

        Question:
        So the camera decides how many jpg files to generate for the .lfp file from the RAW data? The most .jpg files I have seen in any of the .lfp files is 12.

        Thanks again.
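
        For what it’s worth, a small sketch of that index-to-square mapping, assuming the flat 20×20 array maps row-major onto 54×54 px tiles of the 1080×1080 image as described above:

        /* Map an index into the flat 20x20 depth.txt array to the 54x54 px
           square of the 1080x1080 image it covers (20 * 54 = 1080). */
        void depth_index_to_square(int index, int *x0, int *y0, int *x1, int *y1)
        {
            int row = index / 20;
            int col = index % 20;

            *x0 = col * 54;
            *y0 = row * 54;
            *x1 = *x0 + 54;   /* exclusive */
            *y1 = *y0 + 54;
        }

        Clicking anywhere inside that square would then call up the jpg whose focus value best matches the depth entry.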

      • Correct. The number of jpgs the camera or Lytro desktop software generates depends on some algorithm that probably involves doing graph cuts to find as many useful unique focal planes as there are in the image and generating a jpg at each one.

      • Thank you for all the great work you’ve put into this! I haven’t been able to get a camera yet, but I’ve been using your lfpsplitter tool to examine the IMG0004.lfp image that you posted. Would it be possible for you to post the IMG0004-stk.lfp version as well? As far as I understand, it is not trivial to create that from the IMG0004.lfp without my having the proprietary software.

      • As Joe has already pointed out, I was wondering as well if the web-optimized .lfp file could be uploaded (AKA: IMG_0004-stk.lfp)? I was curious as to how the resulting .jpg images turned out with your program. I’m planning on acquiring the camera soon as well, but I would like to get a head start on how to handle these types of files without any delay. Let me know if this is possible or not when you have a free moment. Either way, I’m very impressed with the lfpsplitter program thus far. Thanks.

  • Thanks for all the info and utilities.

    I’m trying to get the specs of the micro lens array.
    Does anyone know? It does not appear to be in the
    meta files, at least directly.

    cheers,
    hurf

  • Thanks for the splitter. Saved me some time doing it myself.

    With it, I wrote a bunch of notes and demos. The first one can be found here:
    http://www.facebook.com/notes/hanlin-goh/light-field-photography-part-1-technical-introduction/10150933770112188

    Cheers,
    Hanlin

  • Any idea where to find the *.lfp files on a Windows machine? When connecting the camera, Lytro Desktop automatically transfers the files into it without my ever seeing any .lfp files!

  • Thanks for the tool and the introduction.

    Could anybody provide me some *stk.lfp files that I can play around with? That would be great. Thanks!

  • Hi, Just wanted to say thank you for the file, it works great. I was wondering something. What is the depth file referring to?

  • Thanks for your tools!
    I have some questions.
    The resolution of the sensor is 3280*3280 while the output image is 1080*1080, but the pixels underneath each lenslet are about 10*10. What is the relationship between these numbers?
    The number of lenslets is 3280/10 = 328. Is the final resolution 328*328?
    That is different from 1080*1080; can anyone tell me why?

    • I’m not sure how they decided on their end resolution, but anything that goes from a hexagonal grid to a square grid is going to require some interpolation.

  • How do you obtain the first image that appears at the top of the page? What method did you use to get it? Thanks!

  • Are there any plans to update this to deal with the new LFP stacks that come out of the updated Lytro Desktop? The new perspective shift stuff is cool, but it no longer works with lfpsplitter. Apparently both the perspective shift images and the refocused stack are now encoded in H.264…

  • I really wanted to buy a Lytro camera, but consider the effort that has to be invested in the software side just to get anything a tad useful on Linux!
    So I’ll save my 400 bucks, get myself a DSLR, and run this trick: http://dof.chaoscollective.org/

  • How can I extract the perspective shift images from this file?
    Does anybody know?

  • In the old file format, depth.txt had a 20×20 array, where each value revealed the optimum focus for a 54×54 pixel square of the 1080×1080 image. But with the new file format, depth.txt is a 330×330 array. Does anyone know how this corresponds to the 1080×1080 image? I can’t come up with any sized square (of integer pixels) that would match the depth array with the image.

  • Anyone interested in working with Lytro files on linux? I wrote a little program that takes the .raw data file that lfpsplitter produces and lets you create images from it focused at a variety of depths.

    All done with an old camera and old lytro software. Don’t know how perspective shift changes this. It doesn’t use the -stk file, just the .raw file extracted from the original .lfp file by lfpsplitter.

    Anyways, it is here: http://www.binslett.org/Lytro/refocus.tar.gz

    • I am extremely interested in this. I have compiled it in the Visual Studio 2010 environment, but it doesn’t look as good compared to what you uploaded to the Lytro website. So may I contact you somehow?

      • Can you tell me how to compile it in the VS2010 environment? I tried and linked libnetpbm, but got some “unresolved external symbol” linker errors. Can you tell me why, or where to get the lib you used?

  • http://www.binslett.org/Lytro/refocus.tar.gz
    I have some difficulty compiling refocus.tar.gz on Ubuntu Raring.
    Can anyone help me? The error is:
    focusimage.c:94:49: error: ‘tuple_type’ undeclared (first use in this function)

  • I’ve been looking at the depth map, which gives depth information in “lambda” units. I would find it very useful to transform these units into real world distances. I have been trying to interpret the lambda values, but I can’t figure out what they relate to. Does anyone know if there is a transformation that I can apply to translate these into meters from the camera lens?

  • This is a great job! I wonder how a Lytro photo selects the “focused image” when you click one area. Is there a z-depth map? If so, it’d be nice to play around with it.

  • Hi, I tried the raw2tiff tool, but the command doesn’t work; it says the input file is too small. I tried other sizes, but I can’t get a reasonable result. I used the sample lfp file in your package.

  • Hi, there is a problem while converting the raw .lfp to .tiff following the “bggr” rule: the .tiff is darker than the image exported by the Lytro software. How can we get the real color of the image?

  • How can you convert the .raw file to a .tif file on Windows?
