Caterham: How Tech Fuelled An F1 Challenge
What IT does an F1 team need to operate both trackside and in the factory, and what does the future hold for Caterham F1?
Formula One is the most technologically advanced sport in the world, with IT central to how the car is built, run and analysed.
Considerable resources are devoted to the research and development of components, while the telemetry collected and analysed is central to the operation of the car and the strategy employed by the team.
The role of IT in an F1 team is becoming increasingly important thanks to regulations that aim to make it easier and cheaper to participate in the world’s premier racing series. Reduced in-season testing, limits on the number of people who can attend each race and financial constraints mean teams must operate more efficiently.
Start from scratch
Caterham, along with HRT and Marussia, was one of three new teams to join F1 in 2010 as part of Bernie Ecclestone’s plans to attract new entrants to the sport. Its application was accepted in September 2009, but with no infrastructure in place, it was a race against time to get everything ready for the season-opening race in Bahrain in March 2010.
Bill Peters, former McLaren Group CIO and a 20-year F1 veteran, was appointed CIO of Caterham in October 2009, becoming the fifth employee of the team, then owned by Tony Fernandes.
“On the day I joined we doubled the workforce to eight people,” he told TechWeekEurope at the British Grand Prix in Silverstone. “The opportunity to start a Formula One team from scratch is something not to be passed up. To be honest, you can get institutionalised if you’re with an organisation for a long while and I was feeling a bit that way and I wanted something fresher and something that was going to give me a buzz.”
Race against time
Peters had half the budget he had enjoyed at McLaren and no team with which to build both the trackside and factory infrastructure from scratch. He had a “pretty clear” vision of what he wanted and searched for a supplier that would be a technical fit, was cost effective, and could deliver in time for the 2010 season.
“We had to do a very quick and dirty supplier selection because we needed to get people on board as quickly as possible,” he explained. “We needed the suppliers by December 2009 at the latest so we could actually stand a chance of getting the trackside infrastructure up and running for March 2010.
“If we didn’t get the cars on the grid in March, we would lose millions and we wouldn’t be able to go racing that year. It wasn’t an option not to do it.
“We went to all the usual suppliers, all the big boys. IBM ruled themselves out on cost, they were a bit too expensive, HP probably could deliver what we wanted but couldn’t do it in the timeframe and Dell basically came along and ticked all the boxes. It was really a no-brainer in that respect.”
Trackside focus
Peters says being able to get most of the equipment from one supplier was a bonus. Although he had used Dell’s client-side equipment at McLaren and knew the Texas-based firm’s server offering was “quite solid”, he didn’t realise it was also heavily into networking and software.
“We had to make sure the trackside systems were in place first,” he said. “That was the number one priority because we could muddle through in the factory without systems.”
At Caterham’s first test day, the team had no equipment or servers and ran the session using laptops to capture and analyse telemetry. Simply put, without IT it is impossible to run the cars.
Once the trackside infrastructure was in place, attention turned to the factory, where car and component development takes place.
“We built up the factory infrastructure (virtualised) so we could support all the usual business systems such as email and Internet, that sort of thing, plus Computer Aided Design (CAD) and product lifecycle management (PLM),” recalled Peters. “To all intents and purposes, we are a manufacturing and engineering business, so we need all those systems. And on top of that we need very specialist simulation and engineering tools.
“That’s where, ultimately, my job is focused: make tools that will make a competitive difference to us, work at their best. The business systems just need to be efficient.”
Other tools, such as Microsoft SharePoint for the internal intranet and Lync for unified communications, support hundreds of engineers at Caterham’s base in Oxfordshire, who are continuously designing new parts to make the car faster and lighter. However, their ability is limited by time and money, making the use of technology even more important during the design process.
Supercomputing power
When a new component is created, it is tested in a virtual windtunnel to simulate the aerodynamics. If it works, a model of the part will be built and tested in an actual windtunnel to see if it will work on the track.
“We’re constantly striving to get a correlation between the virtual environment and the windtunnel,” explained Peters. “The closer we can get those results, the less reliant we are on a very expensive windtunnel. We’re also trying to get a correlation between the windtunnel model and real-sized cars.”
Caterham has a Dell supercomputer that is used solely to simulate the windtunnel. The team initially used the Cambridge University cluster before implementing its own cluster in the factory. It started off with a 1,500-core, 15-teraflop capability with 100TB of storage, but this year refreshed that to 5,000 cores, 30 teraflops and 1PB of storage. This has increased Caterham’s aerodynamic capability ten-fold.
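To illustrate the kind of correlation work Peters describes, the sketch below compares CFD predictions against windtunnel measurements for the same test points and reports how closely they track each other. It is a minimal Python illustration: the numbers, the Pearson-correlation check and the acceptance threshold are hypothetical stand-ins, not Caterham’s actual process or tooling.

```python
# Minimal sketch of a CFD-to-windtunnel correlation check.
# The values and threshold are hypothetical illustrations.
import numpy as np

# Each entry: one aero test point (e.g. a ride-height/yaw combination) with
# the drag coefficient predicted by CFD and the value measured in the tunnel.
cfd_cd = np.array([0.912, 0.897, 0.905, 0.921, 0.889, 0.930])
tunnel_cd = np.array([0.918, 0.902, 0.899, 0.925, 0.894, 0.926])

# Pearson correlation: how well the virtual results track the physical ones.
corr = np.corrcoef(cfd_cd, tunnel_cd)[0, 1]

# Mean offset, useful for spotting a systematic bias between the two.
bias = np.mean(cfd_cd - tunnel_cd)

print(f"correlation: {corr:.3f}, mean CFD-vs-tunnel offset: {bias:+.4f}")

# A simple (hypothetical) gate: only lean on CFD-only development for this
# component family if the correlation stays above an agreed threshold.
if corr > 0.95:
    print("Correlation good enough to rely on the virtual windtunnel here.")
```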
Virtualisation pioneers
Although creating an IT environment from scratch was a significant undertaking, it offered some advantages, such as the ability to virtualise everything from day one. Peters says when he was at McLaren, they used to take five small racks to each race – single-workload servers, each with a backup in case one failed.
“Some of the other teams, I know, are taking eight or nine full-sized racks,” he claimed. “We’re quite ahead of the game.”
Innovations like virtualisation are necessary if Caterham is going to compete with the traditionally bigger teams like McLaren and Ferrari, who have significantly more resources.
“The big teams have the bigger budgets and twice as many people to do the jobs we do,” he continued. “We have to try and optimise as much performance as we possibly can. Probably more so than the bigger teams who can just throw more money and computer power. We have to get the most out of what we’ve got.”
Race weekend
The busiest time of the year from an IT perspective is the off-season, when next year’s car is being tested and prepared. By March, attention turns to racing and the 19 races around the world that constitute the F1 season.
Antony Smith, now senior IT support engineer at Caterham, joined the team as trackside electronics engineer in January 2010 and was attracted to the team after helping set up systems for Tyrrell, BAR and Honda in the past.
“It really was a mad dash to get everything done,” he said. “We worked some crazy hours and there was stuff arriving all the time.”
The IT team arrives at the track garage on the Monday of each race week to get the equipment ready and working for when the rest of the team arrives later in the week. By Thursday, the IT guys enter support mode to make sure everything’s working and the data connections are ready.
European races are easier because Caterham can just send trucks, but for those in Asia and the Americas, it’s a bit more difficult because the garages are just empty shells and cables must be laid.
“At the end of it, you’ve got to be the last man standing to pack everything up because everyone needs data until the very end and they need their links,” Smith told us. “We have links back to the factory so whichever circuit we’re at, we can send stuff back.”
Communication is key
Communications are essential for a Formula One team. During a practice session, Caterham will have between 150 and 180 sensors on the car sending up to 1,000 channels of data back to the garage. Up to 50GB of telemetry per car per weekend is generated.
This provides information for race engineers who can see whether the driver is braking too soon or damaging his tyres, and for control engineers who are monitoring the car to see whether it can last the race. This information must also be sent back to Oxfordshire where more performance engineers can see it, with 20,000 possible outcomes simulated during every single lap, making a fast connection vital for strategy.
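As a rough illustration of that strategy side, the sketch below runs a Monte Carlo simulation of the remaining laps under a few candidate pit-stop plans and compares their expected race times. It is a hedged, self-contained Python example: the lap times, tyre degradation, pit loss and number of runs are invented for illustration and do not reflect Caterham’s actual strategy models.

```python
# Hedged sketch of a Monte Carlo race-strategy simulation: replay the
# remaining laps thousands of times under random variation and compare
# pit-stop plans. All numbers below are made up for illustration.
import random

LAPS_REMAINING = 30
PIT_LOSS = 21.0        # seconds lost for a pit stop (hypothetical)
BASE_LAP = 95.0        # clean-air lap time in seconds (hypothetical)
DEG_PER_LAP = 0.08     # lap-time loss per lap of tyre age (hypothetical)

def race_time(pit_lap):
    """Total time for the remaining laps under one plan (None = stay out)."""
    total, tyre_age = 0.0, 10  # start the stint on 10-lap-old tyres
    for lap in range(1, LAPS_REMAINING + 1):
        if pit_lap is not None and lap == pit_lap:
            total += PIT_LOSS
            tyre_age = 0
        # degradation plus random noise (traffic, driver variation)
        total += BASE_LAP + DEG_PER_LAP * tyre_age + random.gauss(0, 0.3)
        tyre_age += 1
    return total

def expected(pit_lap, runs=20_000):
    """Average race time over many simulated runs of the same plan."""
    return sum(race_time(pit_lap) for _ in range(runs)) / runs

print("stay out:      ", round(expected(None), 1), "s")
print("pit on lap 5:  ", round(expected(5), 1), "s")
print("pit on lap 15: ", round(expected(15), 1), "s")
```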
All the teams go through a company called Bespoke General Networks (BGN), which liaises with providers like AT&T to provide such a link.
“There are two main companies which cover the whole paddock,” said Smith. “We get the same connection wherever we go in the world, it’s just the same piece of cable that comes out of the wall. It’s all the same settings, the only difference is that at Silverstone it might be 10 milliseconds back to the factory and in Australia it might be 400 milliseconds.”
If nothing goes wrong during the race weekend, Smith said the experience can be quite fun, but this isn’t always the case.
Disaster Recovery System
“Lots of things have gone wrong, we’ve had all sorts,” he said, showing us a slideshow of problems the team have faced during its three years, including floods in Italy and Germany, a typhoon in Malaysia and tropical storms in China.
Dust, pollen and sand can get into servers, while equipment can be damaged during transportation, especially if it gets wet.
“It’s not normal IT,” Smith noted, adding that the team had learned to cope with extreme conditions over the years and paid tribute to Dell, which can provide it with spare parts and equipment within four hours no matter where it is in the world.
The team also has a Dell Disaster Recovery System in place should the worst happen, keen to insure itself against catastrophic loss after the fire in the Williams garage at the 2012 Spanish Grand Prix.
“We were next door to Williams when they had their big fire and they lost everything,” said Smith. “We lost equipment due to damage from fire extinguishers, the heat and the smoke.”
The future of Caterham F1 IT
Smith has since left his role at trackside, explaining that the constant travel associated with Formula One isn’t as glamorous as it might appear. In his new role, he looks after Caterham’s infrastructure and explores the future direction the team might take. So where does he see the team going next?
“We’re looking at this converged infrastructure where you get rid of separate servers, storage and switches and you roll it into one box,” he answered. “Then we can shrink stuff down even further. At the moment we’ve gone down to one rack but what we can do is make, for the same size and weight, two identical systems. So if we lose one to a fire or flood, we can still run cars.”
Caterham currently uses an Intel Xeon-powered Dell PowerEdge VRTX system in its factory but plans to use it trackside from next season.
“Beyond that, we’re looking at trying to maximise the use of the data we’ve got,” Smith continued. “We have our own base tools but we’re looking at more resources for big data and analytics and making more use of the historical data we have to find links and patterns that we can’t pull manually.”
“We were using big data before we knew it was big data,” quipped Peters.
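As a flavour of what mining that historical data might look like, the short Python sketch below ranks per-lap telemetry channels by how strongly they correlate with lap time. The data frame, column names and values are hypothetical; it simply illustrates a first-pass pattern search of the kind Smith describes, not the team’s real schema or analytics stack.

```python
# Small sketch of a first-pass pattern search over archived per-lap telemetry
# summaries: which channels move most closely with lap time?
# Column names and values are hypothetical illustrations.
import pandas as pd

# One row per lap: lap time plus per-lap summaries of a few sensor channels.
laps = pd.DataFrame({
    "lap_time":       [95.2, 94.8, 95.9, 96.4, 94.6, 95.1],
    "avg_brake_temp": [410, 395, 455, 470, 388, 402],
    "max_tyre_temp":  [98, 96, 104, 107, 95, 97],
    "fuel_load_kg":   [60, 58, 56, 54, 52, 50],
})

# Correlate every channel with lap time and rank by strength: a crude way to
# surface links and patterns that are hard to pull out by hand.
corr = laps.corr()["lap_time"].drop("lap_time")
print(corr.abs().sort_values(ascending=False))
```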
Although Caterham’s IT operation is impressive, it has not led to success on the track, with the team failing to score a single point in the four seasons leading up to this one. The result at Silverstone was also disappointing, as Marcus Ericsson retired after 11 laps, while Kamui Kobayashi finished 15th. The next date in the calendar is this weekend’s German Grand Prix at Hockenheim, where the team will be hoping for a bit more luck.