My name is Tom Palmaers, and I’m a press photographer based in Belgium. I started as a wedding photographer when I was still a student around 2004, documenting weddings in a photojournalistic style. I quickly found myself with a lot of work because that style wasn’t yet common in Belgium.
When Canon came out with the 5D Mark II, I immediately purchased it and shot short video snippets during a wedding without my clients’ knowledge — they loved the result. The experience showed me how video could bring my photography to life, and from then on I concentrated more on filming with a photo camera.
In January 2011, I started working as a full-time freelance press photographer for a local newspaper (Het Belang van Limburg). Video on their website was still non-existent, so I created my own YouTube channel, and the newspaper placed a link on their website.
In August 2011, I decided on my own initiative to travel to the Dadaab Refugee Camp in Kenya to report for the newspaper. I came in contact with a Belgian producer from the European Broadcasting Union (EBU) who found it fantastic that a freelance photographer from a local newspaper came on his own to Dadaab. I was surrounded by professionals from the US, Portugal, France, from photographers to camera crews to editors.
While there, I received tips from cameramen and was allowed to watch how the news was edited. Because news crews from around the world kept coming to Dadaab, the EBU crew had to stay longer than expected. One of their cameramen was getting married soon and had to leave, and it would take a few days to fly in a replacement, so they asked me if I could take his place after a day of extra training on filming with an electronic news-gathering (ENG) camera and preparing live feeds. That worked well, and I ended up working there for a full month when I had only planned to stay a week.
Upon returning to Belgium, I bought my own ENG camera and got assignments for EBU in Mali, European sporting events, and (my highlight) the London Olympic Games in 2012.
In the meantime, I continued to work as a press photographer for Het Belang van Limburg. I also signed a contract with the local TV station, TV Limburg. After a restructuring, it became difficult to get foreign assignments via the EBU, so my attention turned to hard news and breaking news, some politics, and local sports in my area.
I later used the Canon 5D Mark III before switching to my first Canon cinema camera, the C100 Mark I — I sold the ENG camera at this time. Then it was on to the C100 Mark II, and my 5D Mark III and 1D Mark II were swapped for a Fuji camera so that I could work lighter and more discreetly. The C100 Mark II was fantastic for local and (sometimes) national TV work. Still, overall quality suffered because I had to juggle two devices to keep both photo and video at a good level. Later I swapped my Fuji for a Leica M9 and Leica M10.
In 2019, my colleagues also started filming with their DSLR cameras or with their iPhones. That was a signal for me to go one step further. I believed that the future for photojournalism would be to capture footage with high-resolution cameras in RAW format. I just had to wait until the C500 Mark II came out, and I was one of the first in Belgium to purchase one in February 2020. For me, it was the start of a new journey and evolution, just like when I got the 5D Mark II.
In April 2020, I made a mini-documentary in the COVID intensive care unit in a local hospital, my first big assignment with this new workflow. My story got 2 full pages in the paper, and the front page photos were all stills extracted from the C500 Mark II footage.
My challenge was now figuring out how I could process everything with decent hardware.
I had taken out a loan to purchase the C500 Mark II, a new MacBook Pro, super-fast external SSD drives, and an external GPU to be able to process the 5.9K Cinema RAW Light.
I first experimented with Final Cut Pro X, but I soon switched to DaVinci Resolve Studio, which continues to be my preference. I film in full-frame with my existing photography lenses, mostly autofocus lenses such as the EF 24-105mm f/4L II, EF 24mm f/1.4L II, EF 70-200mm f/2.8L III, EF 100-400mm f/4.5-5.6L II, EF-S 17-55mm f/2.8 IS (in crop mode for shots in the dark), Laowa 15mm macro, and Laowa 12mm T2.9 Zero-D.
The downside of the C500 Mark II is the proxy codec — it’s all in MXF, which means that I can no longer load the 2K proxy files recorded on an SD card directly into my iPhone or iPad to quickly edit in LumaFusion and send off for breaking news. The C100 Mark II and the C300 Mark I and II series have MP4 proxy files. I sent feedback to Canon requesting MP4 as a file-type option via firmware, but so far I haven’t heard anything back.
So for now, I either have to bring my laptop everywhere or load the proxies onto a GNARBOX 2.0 external drive, convert the files to MP4 via its app, and edit them over Wi-Fi in LumaFusion. The problem with the second option is that the quality is 3-4 Mbit/s, which is not very good. I have also suggested adding a higher-quality compression option, but again I have yet to receive a reply.
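For readers facing the same MXF-to-MP4 bottleneck on a laptop, the conversion can be scripted. Below is a hypothetical sketch (not part of my workflow) that builds ffmpeg commands to rewrap each proxy clip into an MP4 container. It assumes the proxies carry H.264 video that players accept when stream-copied; if a clip is rejected, a re-encode (e.g. `-c:v libx264`) would be needed instead.

```python
# Hypothetical sketch: rewrap MXF proxy files to MP4 with ffmpeg so they can
# be imported into a tablet editing app. Assumes H.264 essence, so the
# streams are copied without re-encoding (no quality loss, near-instant).
from pathlib import Path

def mxf_to_mp4_cmd(src: Path) -> list[str]:
    """Build the ffmpeg command that rewraps one proxy clip to MP4."""
    dst = src.with_suffix(".mp4")
    return [
        "ffmpeg",
        "-i", str(src),             # input proxy, e.g. A003C012.mxf
        "-c", "copy",               # copy audio/video streams as-is
        "-movflags", "+faststart",  # index up front for quicker loading
        str(dst),
    ]

# Example: print the commands for every proxy on a mounted card
# ("/Volumes/SD_CARD" is a placeholder path).
for clip in sorted(Path("/Volumes/SD_CARD").glob("**/*.MXF")):
    print(" ".join(mxf_to_mp4_cmd(clip)))
```

Because the streams are only rewrapped, the output keeps whatever bitrate the camera wrote, unlike the 3-4 Mbit/s files mentioned above.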
Having 6K XF-AVC as an option would also help to speed up the workflow by handling most internal processing such as noise reduction and compression in-camera. But no response on this either.
My Current Workflow
I film in Clog2 on two 1TB CFExpress cards and monitor using the default Rec.709 Wide Dynamic Range display preview.
An important detail is to set the shutter speed a little higher than normal. At 25 fps, a shutter speed of 1/50s gives the most natural motion in video, but that is usually too slow to extract a sharp photo. That is why I try to maintain at least 1/100s to 1/150s.
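The trade-off above can be put in numbers with the shutter-angle formula (angle = 360° × frame rate × exposure time), where 180 degrees is the classic "natural motion" look:

```python
# Shutter angle = 360 * frame_rate * exposure_time. 180 degrees
# (1/50s at 25 fps) gives classic motion blur; shorter exposures
# freeze motion better for still extraction but look choppier.
def shutter_angle(frame_rate: float, shutter_denom: float) -> float:
    """Shutter angle in degrees for a 1/denom-second exposure."""
    return 360.0 * frame_rate / shutter_denom

for denom in (50, 100, 150):
    print(f"1/{denom}s at 25 fps -> {shutter_angle(25, denom):.0f} degrees")
# 1/50s  -> 180 degrees (cinematic motion blur)
# 1/100s ->  90 degrees (sharper frames, slightly choppier video)
# 1/150s ->  60 degrees (sharper still)
```

So the 1/100s-1/150s range described above corresponds to a 90- to 60-degree shutter: a compromise that keeps video acceptable while making individual frames usable as photos.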
I even experimented with filming a soccer game to extract still photos. It works, but the autofocus system of Canon cinema cameras has not improved much, in my opinion.
At home, I use a MacBook Pro 15-inch 2019 (2.3GHz 8-core Intel Core i9, 32GB RAM) with a Razer Core X Chroma eGPU enclosure housing an AMD Radeon VII 16GB GPU. When I’m in a rush, I mount a Blackjet TX-1CXQ CFExpress card reader over Thunderbolt 3. The reader can read at up to 1700MB/s, which is sufficient.
With larger projects, or if I have more time to edit, I transfer the footage to my two OWC 4TB ThunderBlades in RAID 0, which read at up to 3150MB/s and write at up to 3500MB/s (depending on which Thunderbolt bus you connect them to). In DaVinci Resolve, using the Loupedeck CT scroll wheel, I quickly select which video clips are interesting to extract still photos from.
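To give a sense of what those throughput figures mean in practice, here is a back-of-envelope sketch (illustrative only, using the quoted sustained speeds and ignoring filesystem overhead) of how long offloading a full card takes:

```python
# Rough offload-time estimate from the quoted sustained throughputs:
# card reader ~1700 MB/s, ThunderBlade RAID 0 reads ~3150 MB/s.
def offload_minutes(card_gb: float, mb_per_s: float) -> float:
    """Minutes to copy a card, assuming sustained throughput."""
    return card_gb * 1000 / mb_per_s / 60

print(f"1 TB card via reader alone: {offload_minutes(1000, 1700):.1f} min")
print(f"1 TB card onto RAID 0:      {offload_minutes(1000, 3150):.1f} min")
```

At these speeds a full 1TB CFExpress card offloads in roughly five to ten minutes, which is why the direct card-reader route works for deadline situations.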
What’s very important is to set the RAW Decode Quality to the Canon or DaVinci Resolve option rather than the standard timeline setting, to avoid artifact issues in your stills. I first make sure everything is on an HD timeline so that I can scroll and cut smoothly.
On the color page, I add a few nodes in the node tree for the LUT, primaries, and noise reduction. I use the standard Canon LUTs. Through the RAW controls, it is easy to adjust the white balance and highlights, midtone details, etc. You also have to set the sharpness settings to zero — it’s “10” by default. Sharpening and noise reduction only come in once the stills are loaded into Lightroom.
After the color grading, I return the timeline to 6K. I created a shortcut button on the Loupedeck to toggle Bypass Color Grades and Effects so that I can scroll smoothly to the correct frame; a second shortcut button grabs the still. I then export the stills as TIFFs and import them into Lightroom to adjust noise, sharpness, and contrast.
A Gallery of Still Photos from 6K Video
Here are more of my news photos extracted from footage:
Thoughts on the Future
I was one of the earlier backers of the AI-driven Alice Camera. I believe that, in the long run, all cameras will be equipped with AI technology. With the Alice Camera, this technology will be largely open-source, so that everyone can train their camera for their own field or subject matter.
I’m currently diving deeper into learning about how these deep learning and machine learning technologies work and how they will evolve.