Reverse Engineering the Lytro .LFP File Format

Lytro Microlens Array

After getting my Lytro camera yesterday, I set about answering the questions about the light field capture format I had from the last time around.  Lytro may be focusing (pun absolutely intended) on the Facebook-using crowd with their camera and software, but their file format suggests they don’t mind nerds like us poking around.  The file structure is the same as what they use for their compressed web display .lfp files, complete with a plain text table of contents, so I was able to re-use the lfpsplitter tool I wrote earlier with some minor modifications.  The README with the tool describes in detail the format of the file and how to parse it.

The table of contents in the raw .lfp files gives away most of the camera’s secrets.  It contains a bunch of useful metadata and calibration data like the focal length, sensor temperature, exposure length, and zoom length.  It also gives away the fact that the camera contains a 3-axis accelerometer, storing the orientation of the camera with respect to gravity in each image.  The physical sensor is 3280 by 3280 pixels, and the raw file just contains a BGGR Bayer array of it at 12 bits per pixel.  Saving the array and converting it to tif using the raw2tiff command below shows that each microlens is about 10 pixels in diameter with some vignetting on the edges.

raw2tiff -w 3280 -l 3280 -d short IMG_0004_imageRef0.raw output.tif
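
If your extracted .raw turns out to hold the 12 bit packed data rather than 16 bit samples, it needs to be expanded before raw2tiff will accept it.  Below is a minimal sketch of that unpacking, assuming two pixels are packed into three bytes with the high nibbles first; the file names and the packing order are assumptions, so check the lfpsplitter README against your own data.

    # Sketch: unpack 12 bit packed Lytro sensor data into 16 bit for raw2tiff.
    # Assumption: two pixels per three bytes, high nibbles first.
    import numpy as np

    W = H = 3280
    raw = np.fromfile("IMG_0004_imageRef0.raw", dtype=np.uint8)

    if raw.size == W * H * 2:
        pixels = raw.view("<u2")                    # already 16 bit, nothing to do
    else:
        b = raw[: W * H * 3 // 2].reshape(-1, 3).astype(np.uint16)
        p0 = (b[:, 0] << 4) | (b[:, 1] >> 4)        # first pixel of each pair
        p1 = ((b[:, 1] & 0x0F) << 8) | b[:, 2]      # second pixel of each pair
        pixels = np.column_stack((p0, p1)).reshape(-1)

    pixels.astype("<u2").tofile("IMG_0004_16bit.raw")
    # then: raw2tiff -w 3280 -l 3280 -d short IMG_0004_16bit.raw output.tif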

Syncing the camera to Lytro’s desktop software backs it up the first time.  Amazingly, the backup file uses the same structure as both .lfp file types.  The file contains a huge amount of factory calibration data like an array of hot or stuck pixels and color calibration under different lighting conditions.  Incredibly, it also lets loose that there is functioning Wi-Fi on board the camera with files named “C:\\CALIB\\WIFI_PING_RESULT.TXT” and “C:\\CALIB\\WIFI_MAC_ADDR.TXT”, which matches what the FCC teardowns show.  There is no mention of Bluetooth support though, despite support by the chipset.  In any case, it seems there is a lot of cool stuff coming via firmware updates.

Hopefully one of those updates enables a USB Mass Storage mode, as there does not appear to be any way to get files off of the camera in Linux. I had to borrow my roommate’s MacBook Air for this escapade. The camera shows up as a SCSI CD drive, but mounting /dev/sr0 only shows a placeholder message intended for Windows users.

Thank you for purchasing your Lytro camera.  Unfortunately, we do not have a
Windows version of our desktop application at this time.  Please check out
http://support.lytro.com for the latest info on Windows support.

It was pretty trivial to write the lfpsplitter to get the raw data shown above, but doing anything useful with it will take more effort.  Normally simple stuff like demosaicing the Bayer array will likely be complicated by the need to avoid the gaps between microlenses and not distort the ray direction information.  Getting high quality results will probably also require applying the calibration information from the camera backups.  A first party light field editing library would be wonderful, but Lytro probably has other priorities.

You can grab my lfpsplitter tool from GitHub at git://github.com/nrpatel/lfptools.git and I uploaded an example .lfp you can use with it if you want to play with light field captures without the $400 hardware commitment.

110 Responses to “Reverse Engineering the Lytro .LFP File Format”


  • This is awesome, man! I was wondering when someone would do this. May I port this to C# and give you credit? I want to write some sort of GUI for managing it because I really don’t want to break out my friend’s Mac Mini to manage my pictures.

  • Good job!

    Looking at the tiff, there seem to be capture groups of about 10×10 pixels in the source format for each “pixel”. I also assume that each of the groups captures the light from different directions as it came through the lens, so the leftmost pixels in the group are the light that entered from the left (or right if they are mirrored).

    Shouldn’t be too hard to filter out and create 5 raw images with light coming from top, left, middle, right and bottom to further investigate.

    Fun stuff!

    • Indeed. Ren Ng’s thesis is full of descriptions of the necessary algorithms to do all sorts of cool stuff. The difficulty is largely in alignment. While the 10px circle is a reasonable approximation locally, I don’t think it extends all the way across the sensor. That is, the 200th light field pixel from the left probably doesn’t start at the 2000th sensor pixel. It probably starts at 1999.5 or 2002.1 or so, which is why there is information like this included within every .lfp file:

      		"mla" : {
      			"tiling" : "hexUniformRowMajor",
      			"lensPitch" : 1.3898614883422850808e-05,
      			"rotation" : -0.0026990664191544055939,
      			"defectArray" : [],
      			"scaleFactor" : {
      				"x" : 1,
      				"y" : 1.0004690885543823242
      			},
      			"sensorOffset" : {
      				"x" : -2.077848196029663261e-06,
      				"y" : -1.1220961570739747699e-05,
      				"z" : 2.5000000000000001198e-05
      			}
      		}

      I haven’t had time yet to determine exactly how it applies.

      • Got a brief chance to look into this data. Rotation is of the whole array, in radians. For my camera, it means that after rotating, I have about a 4 pixel border on all sides of data that has to be cropped or filled in with fake data.

        I’m not sure what sensorOffset or lensPitch are.
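
        For what it’s worth, here is a minimal sketch of just that rotation step, assuming the packed data has already been expanded into a 3280×3280 16 bit array and that scipy is acceptable; the file name and the sign convention are guesses and may need adjusting.

            # Counter-rotate the sensor image by the mla "rotation" value (radians)
            # so the microlens grid lines up with the pixel grid.
            import math
            import numpy as np
            from scipy.ndimage import rotate

            mla_rotation = -0.0026990664191544055939  # "rotation" from the mla block above

            image = np.fromfile("IMG_0004_16bit.raw", dtype="<u2").reshape(3280, 3280)
            aligned = rotate(image, math.degrees(-mla_rotation), reshape=False, order=1)
            # As noted above, a ~4 px border of invalid data remains after rotating;
            # crop it or fill it before going further.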

      • Lens pitch is a common way of defining the size of a micro-lens diameter in a lenslet array. In this case it seems to be roughly 13um? The lens pitch of a light field camera is the main limiting factor in spatial resolution i.e. a lenslet is analogous to a pixel in a standard camera. The pixels below each lenslet record the angular information but do not really give any extra resolution. If we know the sensor size then we could work out the actual resolution of each image in terms of spatial resolution and not in ray space.

        The focal length would also be useful to know for processing the raw file, did you see any info on that?

      • Makes sense! Presumably the sensorOffset values are in um as well then, and represent the offset to get the top left microlens to some expected position.

        There is indeed focal length information:

        		"lens" : {
        			"infinityLambda" : 13.48490142822265625,
        			"focalLength" : 0.0064499998092651363371,
        			"zoomStep" : 981,
        			"focusStep" : 630,
        			"fNumber" : 1.9099999666213989258,
        			"temperature" : 26.646636962890625,
        			"temperatureAdc" : 2826,
        			"zoomStepperOffset" : 3,
        			"focusStepperOffset" : 1,
        			"exitPupilOffset" : {
        				"z" : 0.49113632202148432837
        			}
        		},
      • I somehow missed it hiding in plain sight, but the json metadata in the file also states the pixel pitch: “pixelPitch” : 1.3999999761581419596e-06

        This puts the microlens width at 9.92758223 pixels, and it makes the useful sensor area 330.39263 microlenses wide.
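
        In code, with the values copied straight from the metadata, the arithmetic is just:

            lens_pitch = 1.3898614883422850808e-05   # m, "lensPitch" from the mla block
            pixel_pitch = 1.3999999761581419596e-06  # m, "pixelPitch"
            sensor_px = 3280

            lens_px = lens_pitch / pixel_pitch       # ~9.9276 pixels per microlens
            lenses_across = sensor_px / lens_px      # ~330.39 microlenses across the sensor
            print(lens_px, lenses_across)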

  • I started with your lfpsplitter Friday and hacked up my own raw parser; glad to see you got in on the first shipment, too! Did you notice any glaring defects in your calibration images? Mine has a couple of rather nasty looking smudges, and one place where two microlenses seem to be dislocated by 3 or 4 pixels each.

  • Once again, excellent work!

    I used your EXCELLENT reference as a basis for porting this to C#.

    The project can be found here: https://github.com/mscappini/Lytro.Net

  • Nice work! Just curious if you can do focal stacking with the Lytro? I would like to segment out information only at one depth.

    • Sure. The easy way is to split the web display .lfp into its component .jpg files and use focus stacking software like the Enfuse tool that comes with Hugin.

      Ren Ng’s thesis also describes two ways of doing it. The first is to sample a “sub-aperture” image from the raw sensor array, which is an image made up of one pixel per microlens at the same offset. The problem there is that you end up throwing away close to 99% of the photons, so the result is noisy. The second way is what Enfuse does, which is graph cuts to grab the maximum contrast regions across all of the images and blend them together into a final image.
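
      A rough sketch of that first, sub-aperture way, assuming a perfectly aligned square 10 pixel lens grid (which the real, slightly rotated hexagonal grid is not, so treat this as an approximation; the file name is a placeholder):

          # Pull one sub-aperture image out of the raw: the same (u, v) offset
          # under every microlens. Assumes an aligned square 10 px lens grid.
          import numpy as np

          LENS_PX = 10  # approximate microlens diameter in pixels

          raw = np.fromfile("IMG_0004_16bit.raw", dtype="<u2").reshape(3280, 3280)

          def subaperture(image, u, v, lens_px=LENS_PX):
              # image made of pixel (u, v) from under each microlens, ~328x328
              return image[v::lens_px, u::lens_px]

          center = subaperture(raw, 5, 5)  # noisy: roughly 1% of the captured photons
          left = subaperture(raw, 2, 5)    # light that entered from one side of the lens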

      • Thanks! What I really want to do is to collect only photons originating from a specific depth plane (certain distance away from the camera) and ignore all other photons not originating from this depth plane. Method 1 seems suitable for accomplishing this whereas method 2 searches for infocus material based on contrast. What do you think…?

      • That would require a level of magic that isn’t available from a regular CMOS sensor behind a microlens array. The best you can do is generate an image of a specific focal depth and then graph cut out the sharpest regions in it. You could check against images at other focal depths too to ensure that the regions you are grabbing are at their maximum sharpness at the depth you have chosen.

        This is basically what Enfuse does, but without the final step of blending the sharpest regions together into a full image.

  • If you replace the image sensor with an LED projector, will you see a 3-D image? In other words, could this same technology be used in reverse? It seems that this camera captures light rays from many different directions. It seems that using this in reverse could re-create the light rays and the original 3-D object (although likely a mirror image).

  • Where can I get more camera raws? I’m trying to figure out how to do the alignment… Has anybody tried it?

    • After converting to tif I can’t view it with a tiff viewer. Am I doing something wrong?

      tiffinfo:
      TIFF Directory at offset 0x14adbfe
      Image Width: 3280 Image Length: 3280
      Bits/Sample: 16
      Sample Format: unsigned integer
      Compression Scheme: PackBits
      Photometric Interpretation: min-is-black
      FillOrder: lsb-to-msb
      Orientation: row 0 top, col 0 lhs
      Samples/Pixel: 1
      Rows/Strip: 1
      Planar Configuration: single image plane

  • You can just open the raw file in Matlab or C. Assume that it has 16 bits per channel and a “bggr” Bayer pattern.

  • I have had a running discussion with Lytro on the Mass Storage issue under Windows… here is a quote from them.

    ….”nor can we waive the ban on reverse engineering of any kind within our EULA and Terms of Use. Therefore the light field data cannot currently be removed from the Lytro camera except by the Lytro Desktop software.”

    Seems that it may be illegal to look at your own data if it is processed through Lytro’s hardware and software… Pretty funny, huh? I am surprised that they didn’t encrypt the files just to make it harder.

  • I have just started using my Lytro and though it works well… I was disappointed with the poor “export to jpg”
    function. I hope this is the start of something better.
    Thanks

  • WOW…I just used the jpg extractor and it works!
    Better than the software supplied by Lytro. I can now at least get a small but printable image.

    Maybe a job at Lytro is waiting for the clever genius here!

  • OK, not that I care, but I did read the EULA, then wrote to Lytro for a clarification. It does *not* violate the EULA in any way to extract JPGs from the LFP files.

    When I extract JPGs from the “stk” files there are four JPGs
    of decent quality. Is there an “easy” way to extract more JPGs at different focal points? I assume there is a lot more info in the non-stk files, which I assume to be RAW.

  • Thank You!

    I successfully compiled lfpsplitter (using the GCC-10.7-v2.pkg) on my MacBook Pro and it generated the three .json files and the .raw file, but no others (component jpgs).

    Can someone please advise?

    Thank You.

    • If it was the ~16MB .lfp file, that is the expected result. That file contains one raw image and some metadata. The smaller ~1-2MB .lfp files that the desktop app creates contain .jpg files.

      The eventual goal is to make the tool generate .jpg files from the raw image, but that requires much more work.

      • I see. Managed to get the .jpg files from an .lfp; thanks.

        If I understand correctly (also see http://nirmalpatel.com/hacks/lytro.html), by closely matching the focal value of each jpg (from the .json file) to a value in depth.txt, you can pinpoint the 54x54px square of the 1080×1080 image where that .jpg is at best focus. Since depth.txt is the 20×20 array flattened out, dividing the matching value’s line number (in depth.txt) by 20 gives you the row and column, so you can calculate the pixel coordinates of the focus area as it relates to the 1080×1080 image (roughly sketched in code below). That would be the clickable region that calls up that jpg, etc. This should make it possible to build a Lytro-server-independent html5 viewer.

        This one kinda works (with a rudimentary 4×4 clickable grid, instead), just as an html5 proof of concept, at least in safari and chrome. http://panoramablog.com/lytro/keyboard
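
        Here is a rough sketch of that index-to-pixel mapping, assuming depth.txt is the 20×20 values flattened in row-major order (the file name and the example focal value are made up):

            # Map a depth.txt index to the 54x54 px square of the 1080x1080 image
            # that is at best focus for the matching jpg.
            GRID = 20
            SQUARE = 1080 // GRID  # 54 px

            with open("IMG_0004_depth.txt") as f:
                depths = [float(x) for x in f.read().split()]

            def focus_square(index):
                # top-left (x, y) of the 54x54 square for a depth.txt index
                row, col = divmod(index, GRID)
                return col * SQUARE, row * SQUARE

            # pick the square whose depth best matches a jpg's focal value from the .json
            focal_value = 1.25  # example value, read yours from the .json
            best = min(range(len(depths)), key=lambda i: abs(depths[i] - focal_value))
            print(focus_square(best))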

        Question:
        So the camera decides how many jpg files to generate for the .lfp file from the RAW data? The most .jpg files I have seen in any of the .lfp files is 12.

        Thanks again.

      • Correct. The number of jpgs the camera or Lytro desktop software generates depends on some algorithm that probably involves doing graph cuts to find as many useful unique focal planes as there are in the image and generating a jpg at each one.

      • Thank you for all the great work you’ve put into this! I haven’t been able to get a camera yet, but I’ve been using your lfpsplitter tool to examine the IMG0004.lfp image that you posted. Would it be possible for you to post the IMG0004-stk.lfp version as well? As far as I understand, it is not trivial to create that from the IMG0004.lfp without my having the proprietary software.

      • As Joe has already pointed out, I was wondering as well if the web-optimized .lfp file could be uploaded (AKA: IMG_0004-stk.lfp)? I was curious as to how the resulting .jpg images turned out with your program. I’m planning on acquiring the camera soon as well, but I would like to get a head start on how to handle these types of files without any delay. Let me know if this is possible or not when you have a free moment. Either way, I’m very impressed with the lfpsplitter program thus far. Thanks.

  • Thanks for all the info and utilities.

    I’m trying to get the specs of the micro lens array.
    Does anyone know? It does not appear to be in the
    meta files, at least directly.

    cheers,
    hurf

  • Thanks for the splitter. Saved me some time doing it myself.

    With it, I wrote a bunch of notes and demos. The first one can be found here:
    http://www.facebook.com/notes/hanlin-goh/light-field-photography-part-1-technical-introduction/10150933770112188

    Cheers,
    Hanlin

  • Any idea where to find the *.lfp files on a Windows machine? When connecting the camera, Lytro Desktop automatically transfers the files into it without my ever seeing any lfp files!

  • Thanks for the tool and the introduction.

    Could anybody provide me some *stk.lfp files that I can play around with then? That would be great. Thanks!

  • Hi, just wanted to say thank you for the file, it works great. I was wondering something: what is the depth file referring to?

  • Thanks for your tools!
    I have some questions.
    The resolution of the sensor is 3280*3280, while the output image is 1080*1080, but the pixels underneath each lenslet are about 10*10. What is the relationship between these numbers?
    The number of lenslets is 3280/10 = 328. Is the final resolution then 328*328?
    It is different from 1080*1080; can anyone tell me why?

    • I’m not sure how they decided on their end resolution, but anything that goes from a hexagonal grid to a square grid is going to require some interpolation.

  • How did you obtain the first image that appears at the top of the page? What method did you use to get it? Thanks!!

  • Are there any plans to update this to deal with the new LFP stacks that come out of the updated Lytro Desktop? The new perspective shift stuff is cool, but it no longer works with lfpsplitter. Apparently both the perspective shift images and the refocused stack are now encoded in H.264…

  • I really wanted to buy a Lytro camera, but consider the effort that has to be invested on the software side to get anything a tad useful on Linux!
    So I’ll save my 400 bucks to get myself a DSLR and run this trick: http://dof.chaoscollective.org/

  • How can I extract the perspective shift images from this file?
    Does anybody know how?

  • In the old file format, depth.txt had a 20×20 array, where each value revealed the optimum focus for a 54×54 pixel square of the 1080×1080 image. But with the new file format, depth.txt is a 330×330 array. Does anyone know how this corresponds to the 1080×1080 image? I can’t come up with any sized square (of integer pixels) that would match the depth array with the image.

  • Anyone interested in working with Lytro files on linux? I wrote a little program that takes the .raw data file that lfpsplitter produces and lets you create images from it focused at a variety of depths.

    All done with an old camera and old lytro software. Don’t know how perspective shift changes this. It doesn’t use the -stk file, just the .raw file extracted from the original .lfp file by lfpsplitter.

    Anyways, it is here: http://www.binslett.org/Lytro/refocus.tar.gz

    • I am extremely interested in this. I have compiled it in the Visual Studio 2010 environment, but it doesn’t look as good compared to what you uploaded to the Lytro website. So may I contact you somehow?

      • Can you tell me how to compile it in the VS2010 environment? I tried and linked libnetpbm, but some “unresolved external symbol” linker errors came back. Can you tell me why, or where to get the lib you used?

  • http://www.binslett.org/Lytro/refocus.tar.gz
    I am having some difficulty compiling refocus.tar.gz on Ubuntu raring.
    Can anyone help me?
    The error is:
    focusimage.c:94:49: error: ‘tuple_type’ undeclared (first use in this function)

  • I’ve been looking at the depth map, which gives depth information in “lambda” units. I would find it very useful to transform these units into real world distances. I have been trying to interpret the lambda values, but I can’t figure out what they relate to. Does anyone know if there is a transformation that I can apply to translate these into meters from the camera lens?

  • This is a great job! I wonder how the Lytro photo selects the “focused image” when you click on an area. Is there a z-depth map? If so, it’d be nice to play around with it.

  • Hi, I tried the raw2tiff tool, but the command doesn’t work; it says the input file is too small. I tried other sizes, but I can’t get a reasonable result. I used the sample lfp file in your package.

    • With the raw2tiff command above it is only possible to convert 16-bit images, but as he mentioned, the file contains an image with 12 bits per pixel.

  • Hi, there is a problem when converting the raw .lfp to .tiff following the ‘bggr’ rule: the .tiff is darker than the image exported by the Lytro software, so how can we get the real color of the image?

  • How can you convert the .raw file to a .tif file on Windows?

