For a while now I’ve been trying to figure out how to get my flash off-camera on my X-Series cameras while still retaining TTL function. I’ve asked around everywhere, but either nobody knows or nobody seems to agree on how to get it done. Fuji don’t produce a TTL flash cable themselves, and there are no 3rd party solutions for the Fujifilm X-Series cameras either. I have a radio flash sync system, which is fine when I’m shooting in a studio-type situation where TTL metering doesn’t matter, but I wanted something to get the flash off the camera when I was out and about – an easy TTL solution that meant I didn’t have to try too hard for quick snaps. Sometimes by the time you’ve got the flash power right, the moment has gone. I’ve been using the EF-20 and EFX-20 flashes with the X100 off-camera by activating the on-camera flash and firing the EFX-20 as a slave (the EF-20 doesn’t have a slave mode). This works well in many situations but has a few disadvantages. Firstly, you might not want the on-camera flash to fire – you may only want light from your main flash. Secondly, it’s not always 100% reliable. And finally, it doesn’t work on the X-Pro1, as it doesn’t have an on-board flash!
I’ve been doing a lot of work improving my flash techniques recently (a long post will be coming up about that soon) and really wanted this sorted out so I decided to take matters into my own hands! I tried out a supposedly universal cable in-store that said it worked with Nikon, Canon and Fujifilm, but it didn’t work at all. I wondered if one of the cables from another main manufacturer would work on the X-Series cameras. The two candidates being Nikon and Canon of course. Given the historic connection Fuji had with Nikon producing the S2 and S5 DSLRs I thought that a Nikon lead would be the obvious choice, but having had a look at the two, the connection pin placements on the Canon cables seemed to match better with the pins on the Fuji hot-shoe. With the genuine Canon cables around £50 I just couldn’t justify buying one on the off-chance that it worked, but after a search around I found a 3rd party Canon compatible cable by Pixel on Amazon at £16.99 – at that price it was worth a shot! This is the cable I bought – Pixel FC311/s Compact TTL Sync Cord for Canon….
See on www.photomadd.com
Most people think the digital camera’s histogram only shows you your exposure. Few realize that it can also tell you a lot about image quality. Some of you may have heard the saying “exposing to the right” – but what does it really mean? If I told you that the histogram was a graphical representation of your digital camera’s sensor and the distribution of tones in an image, you would probably reply, “Well, so what!” But if I told you that, by understanding how it works, you could possibly increase the image quality of your picture by another 10-20%, then that might surprise you! Digital cameras can’t see as well as the human eye. Where you and I can see up to 16 stops of light and shade, black & white film can see around 11 stops (see diag. below), but the digital camera’s sensor (like slide film) can only see around 5 stops. Even if this number increases as technology advances, the theory behind how to expose an image based on the histogram is still sound. What you must first understand is that the histogram is divided up into five bands of equal width, so you could say that each band represents one stop of dynamic range. If these bands all recorded the same amount of information, then life might be a lot simpler, but they don’t! Instead, 50% of the tonal values are recorded in the brightest stop of the histogram (Zone VII on our diagram), half as many in the second stop, and so on. With most digital cameras recording RAW images in 12 bit, this gives us 4096 potential tonal values in an image. If 50% are in the brightest stop, that = 2048 tonal values. In Zone VI there are 1024, in Zone V 512, in Zone IV 256, and in Zone III only 128……
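The halving pattern described above is easy to verify with a few lines of arithmetic (a quick sketch, assuming the 12-bit file and five one-stop bands from the excerpt):

```python
# Each stop down from the brightest records half as many tonal values.
# 12-bit RAW: 4096 levels in total, spread over five one-stop bands.
zones = ["Zone VII", "Zone VI", "Zone V", "Zone IV", "Zone III"]
levels = [4096 >> (i + 1) for i in range(5)]  # halve once per stop

for zone, n in zip(zones, levels):
    print(f"{zone}: {n} tonal values")
```

This is the whole case for exposing to the right: the shadows (Zone III) get only 128 of the 4096 available values, so underexposing and brightening later means stretching the sensor’s coarsest data.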
See on www.my-photo-school.com
There are no rules for good photographs, there are only good photographs – Ansel Adams
In a previous article, I discussed several so-called ‘rules of composition’. Compositional rules, however, can be polarizing and divisive. Is this because as artists, we prize independence and don’t like to, ‘color in between the lines’? Or is it because we’ve all experienced disappointment when slavish application of the Golden Ratio still produces drab and lifeless images? Certainly, great works of art have been produced throughout history that paid no heed to pre-determined compositional rules. You may ask then, if compelling art is not created by simply following rules, what’s the point of learning the rules in the first place? That’s a great question.
Now this is not going to be an article suggesting that all compositional rules are ‘bad’ or ‘wrong’. Instead, what follows is a look at the rationale behind some established compositional rules. I’d argue that by understanding the intent behind a rule, we can subvert or break the rule to create drama or focus the viewer’s attention in creative and novel ways. Let’s begin with an example from another visual medium: drawing.
A story about eyes
Many years ago, my great-uncle – an accomplished painter and sculptor – was teaching me how to draw portraits. He suggested placing the eyes at the vertical midway point of the head. This ‘rule’ won’t be surprising for anyone with a drawing background, but for many people, the idea that the eyes are halfway down the face is unintuitive – it seems too low! I recently had a conversation with a friend who received the same advice from his father, despite the fact that he and I grew up in different countries. The fact that two artists from opposite sides of the planet were taught the same ‘rule of eyes’ points to one source of artistic rules: observations about the natural world. In reality, are everybody’s eyes exactly halfway down their face? No, but it’s a good starting point that is visually pleasing and conforms to our expectations of illustrated portraits. In fact, a distinguishing feature of children’s drawings of people is that the eyes are placed ‘too high up’ on the face. This is a simple rule that helps us to draw a more realistic portrait. Just as importantly, however, understanding this rule allows us to make deliberate choices. We can draw a face with the eyes in the middle of the face for a natural look. We can instead place the eyes above the midway mark to give the drawing a more child-like quality. Or we can place the eyes below the midway mark to make the drawing look furtive or comical.
See on www.dpreview.com
White balance (WB) is the process of removing unrealistic color casts, so that objects which appear white in person are rendered white in your photo. Proper camera white balance has to take into account the “color temperature” of a light source, which refers to the relative warmth or coolness of white light. Our eyes are very good at judging what is white under different light sources, but digital cameras often have great difficulty with auto white balance (AWB) — and can create unsightly blue, orange, or even green color casts. Understanding digital white balance can help you avoid these color casts, thereby improving your photos under a wider range of lighting conditions. Color temperature describes the spectrum of light which is radiated from a “blackbody” with that surface temperature. A blackbody is an object which absorbs all incident light — neither reflecting it nor allowing it to pass through. A rough analogue of blackbody radiation in our day to day experience might be in heating a metal or stone: these are said to become “red hot” when they attain one temperature, and then “white hot” for even higher temperatures. Similarly, blackbodies at different temperatures also have varying color temperatures of “white light.” Despite its name, light which may appear white does not necessarily contain an even distribution of colors across the visible spectrum….
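The “red hot” to “white hot” idea can be made concrete with Wien’s displacement law (not mentioned in the excerpt, but the standard blackbody relation): the hotter the blackbody, the shorter – and bluer – the wavelength at which its radiation peaks.

```python
# Wien's displacement law: peak wavelength (nm) = 2.898e6 / T (kelvin).
WIEN_CONSTANT_NM_K = 2.898e6

def peak_wavelength_nm(temp_k):
    """Wavelength at which a blackbody of the given temperature radiates most."""
    return WIEN_CONSTANT_NM_K / temp_k

# A tungsten bulb (~3200 K) peaks out in the infrared, hence its warm orange
# cast; daylight (~5500 K) peaks near the middle of the visible spectrum.
for temp in (3200, 5500, 10000):
    print(f"{temp} K -> peak at {peak_wavelength_nm(temp):.0f} nm")
```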
See on www.cambridgeincolour.com
See on Scoop.it – Fuji X-Pro1
Controlling the light and mastering exposure is the mantra for success as a photographer, and understanding the camera’s metering is key to mastering both. Getting to know how the camera reads the light and translates this information into shutter speed, aperture value and ISO is what determines the exposure – optimal exposure, to be precise. In the case of digital cameras, it is an in-built light meter that works as the brain behind the camera’s calculations for determining the shutter speed, ISO and aperture values for a particular scene. The in-built light meter takes the guesswork out of the field and enables you to get optimum exposure in almost all cases when the camera is operating in auto and semi-auto modes. I have personally experienced this neat camera trick of determining the exposure and would encourage you to test it out for yourself if you haven’t yet played around with it. Turn your camera to one of the semi-auto modes: aperture priority or shutter priority. Let’s take shutter priority, for instance. Set the shutter speed to an arbitrary value, say 1/30 sec., and set the ISO to 100. Mount the camera on a tripod.
Now rotate the camera and make a note of the aperture values as the camera pans. The camera meters and automatically sets the appropriate aperture value depending upon the amount of light entering the lens. This explains the basic functionality of a light meter — the ability to determine the proper exposure (in terms of aperture f-number, shutter speed and ISO) for a photograph.
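The panning experiment above can be modelled with the standard reflected-light meter equation, N²/t = L·S/K (a sketch only: K ≈ 12.5 is a typical calibration constant, and the luminance figures are made up for illustration):

```python
import math

K = 12.5  # typical reflected-light meter calibration constant

def metered_aperture(luminance, shutter_s, iso):
    """Solve the exposure equation N^2 / t = L * S / K for the f-number N."""
    return math.sqrt(luminance * iso * shutter_s / K)

# Shutter fixed at 1/30 s and ISO 100, as in the experiment above:
# as you pan towards brighter parts of the scene, the chosen f-number rises.
for luminance in (100, 400, 1600):  # cd/m^2, illustrative values only
    print(f"L = {luminance:4d} cd/m^2 -> f/{metered_aperture(luminance, 1/30, 100):.1f}")
```

Note that each 4× increase in luminance (two stops) doubles the f-number – exactly the behaviour you see on the aperture readout as you pan.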
While the sophisticated digital beasts nowadays have in-built light meters, professional studio photographers still keep the external light meters handy. These hand-held light meters enable them to proficiently work in the manual mode and yet get the exposures that they want (and the final result as visualized by the client). The hand-held meters help them get the accurate exposures under various lighting conditions. But before looking at the how and why of the light meters, let’s start with what a light meter basically is….
See on www.apnphotographyschool.com
You can find many articles online discussing the benefits of shooting in RAW, and probably an equal number full of counter-arguments stating that it is possible to obtain equally good results shooting in JPEG. Whilst that is definitely true, I want to discuss the reasons that pushed me to use RAW exclusively, in the hope that it can persuade others to do the same. I liken RAW processing to taking the camera off ‘auto’ and shooting in ‘manual’ mode. When people are starting out in digital photography, it can seem like another area full of technical jargon that forms a barrier preventing its uptake. However, once you have a basic understanding of the processes involved and how different settings can impact your results, you will find that letting your camera do the processing can be the limiting factor in achieving your photographic vision.
What is RAW?
A RAW file is an uncompressed image file that records the data from the sensor ‘as is’, with minimal processing. Depending on your camera, this file will most likely contain either 12-bit or 14-bit data. When shooting in JPEG, the camera takes the RAW data, processes it with a number of generic actions (typically contrast/saturation adjustments, white balance correction and sharpening) and then compresses the image down to an 8-bit JPEG file. That difference in ‘bit depth’ is the key here. A 12-bit image contains 2^12 = 4096 tones per channel. Given that there are three channels per pixel (red, green and blue), that equates to 4096 × 4096 × 4096 ≈ 69 billion possible tones per pixel….
Now those numbers are almost too large to comprehend, but it is quite simple to put them in context. When you take a JPEG file from your camera into Photoshop to process, there are only 256 possible tones for each red, green or blue channel, which means that when you start applying changes to contrast or brightness, there is a very limited number of possible tones for each pixel – and this can result in obvious image degradation if pushed too far. With a RAW image, the number of possible tones is so much greater that far more significant changes can be made without any impact on the final image quality. This doesn’t come without a cost, though. Due to the increased bit depth of RAW files, they are anywhere from 2-6 times larger than the corresponding JPEG when recorded in camera, which will make your vast memory card seem very limited. Additionally, whereas a JPEG is typically printer-ready straight out of the camera, a RAW file will need to be manually processed in your digital darkroom. So, to answer the obvious question of ‘is it worth it?’, let’s consider the benefits…
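The degradation described above can be demonstrated numerically (a sketch, assuming a simple 2× brightness push as the edit; numpy is used for convenience):

```python
import numpy as np

def surviving_levels(bit_depth):
    """Apply a strong brightness push to a full tonal ramp of the given
    bit depth, output as 8-bit, and count the distinct levels that survive."""
    levels = 2 ** bit_depth
    ramp = np.arange(levels) / (levels - 1)   # full tonal range, 0..1
    pushed = np.clip(ramp * 2.0, 0.0, 1.0)    # aggressive 2x brightness push
    out = np.round(pushed * 255).astype(int)  # final 8-bit output
    return len(np.unique(out))

print("8-bit source: ", surviving_levels(8), "distinct output levels")
print("12-bit source:", surviving_levels(12), "distinct output levels")
```

The 8-bit source ends up skipping every other output level (visible as posterization or banding in smooth gradients), while the 12-bit source still fills all 256 output values after the same edit.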
Continuing in this mini-series on street photography, there are a number of techniques that I use while shooting. Although it’s possible to describe most of them in some detail, full understanding requires both demonstration and practice – this is where joining one of my workshops is ideal. Together with the basic principles of balance, perspective, composition and what makes a good image, these techniques may be used singly or in combination to generate strong street images. In fact, they also apply to documentary and reportage work; the only difference between good street photography and photojournalism is that the latter has a consistent theme and subject. It’s important to note that not every technique is suitable for every situation, and vice versa; as always, a good portion of making a strong image is knowing what to leave out. In a photographic situation where you have effectively zero influence over any of the elements in your frame except the composition and exposure, timing is the one key bastion of control that remains in the hands of the photographer. By making a conscious choice of when to press the shutter, you decide when each and every one of the moving elements in the frame is in the position you want it to be in. However, it is too late to react only at the exact instant you see the composition you want. It is therefore important for photographers to be able to see a scene, visualize the potential contained there, and imagine what the finished frame will look like once all of the desired elements are in place. It is then a matter of simply waiting for those elements to come together, and being ready with the camera when they do.
No matter how fast your reflexes, or your camera, the fact is that if you only react once you see something, it’s too late; training yourself to anticipate action is something that can give you the critical second or half-second which can make all the difference between getting the shot and missing it completely…
See on blog.mingthein.com
So after playing with this for weeks, I believe this is probably the maximum that we can get out of the Fuji RAF files until the other developers come up with a better understanding of the unique X-Trans CMOS sensor. Now, this is still not the most ideal workflow for most people. Pixel peeping aside, the Fuji X files are fantastic, even in Adobe Lightroom. My goal in this was to get a better understanding of what is going on. I wish I knew how to program, because I’d love to create a simpler way to do this. If there’s anyone out there who is interested in taking what I’ve done and turning it into a nice little drag-and-drop application, I think you’d get a lot of fans.
- Using command line DCRAW: dcraw -a -H 0 -o 4 -q 2 -f -m 15 -g 2.4 12.9 -6 -T
- Convert TIFF file to LAB file in Photoshop
- Resize image 200% with Bicubic Smoother
- Select the Lightness channel in the Channels panel.
- Apply the Median filter (Filter > Noise > Median) with a radius of 1 pixel.
- Resize image 50% with Bicubic Sharper (Nearest Neighbour is actually a more subtle effect which I kind of prefer)
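Steps 2–5 above can be roughly approximated in code (a sketch only: nearest-neighbour resampling stands in for Bicubic Smoother/Sharper, a plain 3×3 median for Photoshop’s Median filter, and the function names are mine, not part of the author’s workflow):

```python
import numpy as np

def median3x3(channel):
    """3x3 median filter; edge pixels are left unchanged."""
    out = channel.copy()
    h, w = channel.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(channel[y - 1:y + 2, x - 1:x + 2])
    return out

def smooth_lightness(lightness):
    """Upsample 200% (nearest neighbour), median-filter the lightness
    channel, then downsample back to 50% - mirroring steps 3-5 above."""
    up = np.repeat(np.repeat(lightness, 2, axis=0), 2, axis=1)
    return median3x3(up)[::2, ::2]

# A lone bright 'zipper' pixel in an otherwise flat patch is removed:
patch = np.full((8, 8), 100)
patch[4, 4] = 255
print(smooth_lightness(patch)[4, 4])  # back to the surrounding value, 100
```

Working on the lightness channel only, as the original steps do, means the smoothing attacks the zipper artifacts without blurring colour edges.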
SilkyPix and RPP both produce very similar files, and although I know for certain that RPP uses DCRAW, SilkyPix, I believe, uses a proprietary RAW engine. My speculation is that the chroma smearing is a result of interpolation errors. Much of it can be suppressed with chroma noise reduction without loss of image quality. However, one of the big nagging issues was the ‘zipper’ aliasing that was happening. After analyzing the files, it seems that the red sub-pixels specifically are causing much of this zipper effect, as well as part of the interpolation issues. I was able to get rid of a good portion of the chroma smearing by doing 3×3 multi-pass median filtering through DCRAW…
Full article on the following website:
See on www.flickr.com
Matrix metering is also known as Evaluative metering. The camera meters across the frame, then averages the overall brightness and chooses the exposure accordingly.
This metering mode will be fine in the majority of situations where there aren’t too many bright highlights or dark shadows, or backlighting, and the scene you are photographing is evenly lit.
In centre-weighted metering the camera still meters the entire frame, but assigns the greatest weight to an area in the centre of the frame. This is the mode to choose when your subject is in the centre of the frame and you don’t want the exposure to be affected by very dark or bright areas around the edges.
Centre-weighted is often referred to as the classic metering mode for portraits, as it exposes for the subject rather than the background (assuming your subject is in the centre of the frame!).
However, this wouldn’t be the mode to choose if you’ve composed your picture with your main subject off-centre.
In spot metering the camera meters only a small part of the frame. On my Nikon D700 the area is a circle 4mm in diameter – the exact size will vary from one camera to another. This means that you can take a very precise reading, getting the exposure correct for your subject and ignoring brighter or darker areas in the rest of the frame.
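The three modes differ only in how they weight the frame’s luminance, which is easy to sketch (illustrative code: the weighting regions and the 3× centre weight are my simplifications, not any camera’s actual algorithm):

```python
import numpy as np

def matrix_meter(lum):
    """Average the whole frame (a crude stand-in for matrix/evaluative)."""
    return lum.mean()

def centre_weighted_meter(lum, centre_weight=3.0):
    """Weight the central half of the frame more heavily than the edges."""
    h, w = lum.shape
    weights = np.ones((h, w))
    weights[h // 4:3 * h // 4, w // 4:3 * w // 4] = centre_weight
    return (lum * weights).sum() / weights.sum()

def spot_meter(lum, size=2):
    """Average only a small central patch, ignoring the rest of the frame."""
    h, w = lum.shape
    return lum[h // 2 - size:h // 2 + size, w // 2 - size:w // 2 + size].mean()

# A dark subject in the centre against bright surroundings:
frame = np.full((8, 8), 200.0)
frame[2:6, 2:6] = 50.0
print(matrix_meter(frame), centre_weighted_meter(frame), spot_meter(frame))
```

For this frame the matrix reading is dominated by the bright surround, centre-weighted sits in between, and the spot reading sees only the dark subject – which is exactly why spot is the mode for a precise reading of an off-balance scene.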
See on www.my-photo-school.com
You’ve spent a small, or even a large, fortune on your camera; it’s state of the art, has bells, whistles, and even built-in metering.
You head out to take photos, secure in the knowledge that some boffin engineers have programmed your camera’s metering system to give you perfect exposure every time. You set up your shot – choose your aperture – and click: the camera has chosen a shutter speed and your shot is in the bag. Here’s what my X-Pro1 came up with: 1/160, f/8, ISO 200. But what if this exposure wasn’t ‘correct’, or I should say optimal…
How else could we judge the correct exposure for this scene?
You can use a hand-held light meter to set your exposure – an incident meter measures the light falling on it and gives you an exposure value. It has a little white dome which you point at your light source – in this case the sun – and you set the ISO and, in this case, f/8 for the aperture; the meter then provides an optimal value for the shutter speed. My meter in full sun gave me a reading of 1/500 at f/8, ISO 200. I set my X-Pro1 to those settings and got this shot. As you’d expect, the faster shutter speed has produced a darker image – the colors are more saturated, the highlights are muted, and the shadows are deep black. If you compare detail from the camera exposure and the incident exposure, you can really see the difference…
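The gap between the camera’s 1/160 and the incident meter’s 1/500 (both at f/8, ISO 200) works out to about 1.6 stops, which you can check directly:

```python
import math

def stops_between(shutter_a, shutter_b):
    """Exposure difference in stops between two shutter speeds,
    with aperture and ISO held constant."""
    return math.log2(shutter_a / shutter_b)

# Camera's evaluative reading vs the incident-meter reading:
print(f"{stops_between(1/160, 1/500):.2f} stops")
```

That is over a stop and a half less light hitting the sensor – comfortably enough to account for the darker, more saturated incident-metered shot.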
See on www.fujix-forum.com