3D scanning specialists Clear Angle Studios worked with the producers of F1: The Movie to seamlessly integrate the fictional drivers (played by Brad Pitt and Damson Idris) into real F1 tracks.
During the 2023 and 2024 seasons, Clear Angle’s team traveled to six racetracks to capture the environments that would host both the real and CG cars: Hungaroring in Hungary, Silverstone in the UK, Monza in Italy, Las Vegas in the US, Yas Marina in Abu Dhabi, and even NASCAR’s famous Daytona Speedway.

Marco Lee, technical operations manager at Clear Angle Studios, talks TVBEurope through the process.
How many shots did you work on for the production?
We don’t usually receive a definitive count of how many individual shots our work appears in. In most projects, the environments that we capture are broken into components, which can be inserted wherever needed by the visual effects vendors. This means that our contributions are often spread across a large portion of the final production, but the exact number of shots can vary depending on editorial decisions made later in post-production.
How long did the process take?
The time required for each track varied significantly depending on size, complexity, and location-specific challenges. On average, each track took the team between two and six days to capture fully. Yas Marina Circuit ended up being the most time-intensive track, but Silverstone involved the largest volume of raw data. When capturing Silverstone, we collected nearly two terabytes of data from over 1,800 individual LiDAR scans and more than 13,000 high-resolution drone photos.
How big was the team that worked on the project?
We had a specialised team assembled for this project, with each member playing a key role. On the environment capture side, we deployed nine technicians, five based in the UK to handle tracks across the Eastern Hemisphere, and four based in the US who covered the Vegas and Daytona circuits.
Additionally, we had two full-body capture technicians on-site at Silverstone. These technicians set up one of our several mobile multi-camera rigs, affectionately named Jean Claude Van Scan, and our “P-series” prop rig to capture high-detail models of actors and props to be integrated into various scenes.
On the back end, we had a dedicated group of nine processing artists who handled everything from meshing the photogrammetry data and aligning the LiDAR scans to cleaning the point clouds and producing the final 3D assets used in production.

Can you explain the process involved in Drone and LiDAR scanning the tracks?
To gain the most complete capture of each location, our process involved two separate data-collection methods: terrestrial LiDAR scanning from the ground and aerial photogrammetry using a drone.
At the start of the day, the team laid out physical markers around the track. These markers were essential for aligning the datasets later on; more importantly, they allowed us to merge the ground-based LiDAR scans with the drone photogrammetry.
The LiDAR scanner on the ground was used to capture high-resolution 3D point cloud data of the track, which is especially valuable for capturing fine surface details and features close to ground level. Meanwhile, the drone captured overlapping images of the entire track from above, which we later processed using photogrammetry to generate a 3D mesh and texture.
By referencing the shared markers in both the LiDAR and drone datasets, we were able to blend them into a single, cohesive 3D model. This gave us the detailed accuracy of LiDAR at ground level, combined with the broader context and visual detail from the aerial imagery.
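For readers curious how shared markers let the two datasets be merged, here is a minimal sketch of the underlying maths, assuming the marker centres have already been measured in both coordinate frames. It uses Python and numpy; Clear Angle’s actual software is not named in the interview, and with an RTK-equipped drone the photogrammetry is already metrically scaled, so a rigid rotation-plus-translation fit like the Kabsch solution below suffices.

```python
import numpy as np

def rigid_align(markers_src, markers_dst):
    """Estimate the rotation R and translation t that map marker
    coordinates in the photogrammetry frame (markers_src) onto the
    same markers in the LiDAR frame (markers_dst), via the Kabsch
    algorithm. Both inputs are (N, 3) arrays in matching order."""
    src_centroid = markers_src.mean(axis=0)
    dst_centroid = markers_dst.mean(axis=0)
    src_centered = markers_src - src_centroid
    dst_centered = markers_dst - dst_centroid

    # The SVD of the cross-covariance matrix gives the optimal rotation.
    H = src_centered.T @ dst_centered
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det == -1), which is not a valid rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t

def transform_points(points, R, t):
    """Move every photogrammetry point into the LiDAR coordinate frame."""
    return points @ R.T + t
```

Without RTK or surveyed control points, photogrammetry has an arbitrary scale, and a similarity transform (the Umeyama extension of Kabsch, which also solves for a scale factor) would be used instead.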
What technology was used?
We used a combination of ground LiDAR scanning and aerial photogrammetry to capture the race tracks in high detail. For the LiDAR component, we used both the Leica RTC360 and the Leica P50 scanners. The RTC360 was ideal for quick, high-resolution scans in more confined areas, while the P50 allowed us to cover longer distances and larger sections of the track.

For aerial imagery, we used two different drone platforms depending on the site: the DJI Matrice 600 Pro and the DJI Matrice 350 RTK. Both drones were equipped with a Sony A7R IV camera paired with a 35mm lens, which allowed us to capture high-resolution, overlapping images suitable for photogrammetry.
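As a rough illustration of the level of detail that camera-and-lens combination yields, the ground sample distance (GSD), the real-world width covered by a single pixel in a nadir photo, can be estimated from the sensor geometry. The sketch below assumes the A7R IV’s roughly 35.7 mm-wide, 9,504-pixel full-frame sensor and a hypothetical 50 m flight altitude; the production’s actual flight heights are not stated in the interview.

```python
# Estimate ground sample distance (GSD): metres of ground per image pixel.
sensor_width_mm = 35.7    # Sony A7R IV full-frame sensor width
image_width_px = 9504     # A7R IV horizontal resolution (61 MP)
focal_length_mm = 35.0    # lens used on both drone platforms
altitude_m = 50.0         # assumed flight height; not stated in the interview

pixel_pitch_mm = sensor_width_mm / image_width_px        # ~0.00376 mm
gsd_m = pixel_pitch_mm / focal_length_mm * altitude_m    # metres per pixel
print(f"GSD ~ {gsd_m * 1000:.1f} mm per pixel")          # ~5.4 mm/px
```

At that altitude, each pixel would cover only around 5 mm of track surface, which is the order of resolution photogrammetry needs to resolve kerbs, paint lines, and tarmac texture.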
At the start of each scanning session, we placed ground markers around the track. These markers were visible in both the LiDAR scans and drone imagery, and they played a key role in aligning and merging the two datasets accurately.
What happened to the photogrammetry footage once it had been captured?
Once captured, the LiDAR and photogrammetry data is uploaded to our secure offline servers in our UK office at Pinewood. Then, once an asset is requested, our 3D artists process the models and carry out quality assurance checks before delivering them to the production. These are then passed on to the VFX vendors, where they are used to create the shots that viewers see in the final product.
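The interview does not name the tools behind those quality assurance checks, but a minimal sketch of the kind of automated sanity pass a mesh might receive before delivery could look like the following, using the open-source trimesh library as a stand-in and a hypothetical asset filename.

```python
import trimesh  # open-source mesh library; a stand-in for the studio's tools

def qa_report(path):
    """Run basic automated sanity checks on a processed mesh before delivery."""
    mesh = trimesh.load(path, force='mesh')
    return {
        "faces": len(mesh.faces),
        "vertices": len(mesh.vertices),
        "watertight": mesh.is_watertight,                  # any holes in the surface?
        "winding_consistent": mesh.is_winding_consistent,  # any flipped normals?
        "extents_m": mesh.extents.tolist(),                # bounding-box size sanity check
    }

if __name__ == "__main__":
    print(qa_report("yas_marina_sector_03.obj"))  # hypothetical asset name
```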

Did AI play any role?
No, AI did not play a role in our part of the project. All data processing, alignment, and modelling were completed using traditional photogrammetry workflows, LiDAR processing software, and manual input. The accuracy and quality of the final model relied on the team’s experience with precise data capture, control point alignment, and established software tools.
What was the most complicated sequence that you worked on, and why?
The most complicated sequence that we worked on was scanning Yas Marina Circuit around the time of the Grand Prix. Due to event restrictions, we weren’t allowed to use the drone at all. We had to wait until the track was cleared after the race and then begin scanning with ground-based LiDAR through the night, continuing into the early hours of the morning.
We were able to return a few months later to capture the aerial photogrammetry, but because of the location, we had a much lower flight height restriction than usual. On top of that, the track is surrounded by large lighting towers, so we had to carefully manoeuvre the drone between them to get the best coverage. That combination of access limitations and complex flight conditions made Yas Marina a challenging but rewarding track to capture.

What were the biggest challenges you faced throughout the production?
One of the biggest challenges during the F1 film project was working around the strict time and access limitations imposed by shooting on location at active tracks. Capturing data in such dynamic, high-security environments demanded careful coordination and fast, efficient workflows.
The Las Vegas Grand Prix track in particular had unique obstacles, because much of the circuit runs along public roads. For this location, we were only able to capture sections of the track that were not on public streets, and timing was critical. Our team began scanning immediately after the race concluded on Sunday night, while crews were already hard at work dismantling the infrastructure used for the event and reopening public access. Our experience in capturing real-world locations under pressure played a key role in ensuring that no critical data was missed, even within such narrow windows.