ChatGPT caught regulators by surprise when it set off a new AI race. As companies have rushed to develop and release ever more powerful models, lawmakers and regulators around the world have sought to catch up and rein in development.
As governments spin up new AI programs, regulators around the world are urgently trying to hire AI experts. But some of the job ads are raising eyebrows and even chuckles among AI researchers and engineers for offering wages that, amid the current AI boom, look pitiful.
The European AI Office, which will be central to the implementation of the EU’s AI Act, listed vacancies early this month and wants applicants to begin work in the fall. They include openings for technology specialists in AI with a master’s degree in computer science or engineering and at least one year’s experience, at a seniority level that suggests an annual salary from €47,320 ($51,730).
Across La Manche, the UK government’s Department for Science, Innovation & Technology is also seeking AI experts. One open position is Head of the International AI Safety Report, who would help shepherd a landmark global report that stems from the UK’s global AI Safety Summit last year. The ad says “expertise in frontier AI safety and/or demonstrable experience of upskilling quickly in a complex new policy area” is essential. The salary offered is £64,660 ($82,730) a year.
Although the EU listing is net of tax, the salaries are far lower than the eye-watering sums being offered within the industry. Levels.fyi, which compiles verified tech industry compensation data, reports that the median total compensation for workers at OpenAI is $560,000, including stock grants, as is common in the tech industry. The lowest compensation it has verified at the ChatGPT maker, for a recruiter, is $190,000.
Stock grants included in tech industry compensation packages can turn into huge windfalls if a company’s value increases. OpenAI is currently valued at $80 billion following a February 2024 share tender first reported by The New York Times.
“There’s a brain drain happening across every government across the world,” says Nolan Church, cofounder and CEO at FairComp, a company tracking salary data to help workers negotiate better pay. “Part of the reason why is that private companies not only have a better working environment, but also will offer significantly higher salaries.”
Church worries that competition between private companies will widen the gap between the private and public sectors even further. “I personally believe the government should be attracting the best and the brightest,” he says, “but how can you convince the best and the brightest to take a massive pay cut?”
Outside the Ballpark
It’s not new for government jobs to pay far less than those in industry, but in the current AI boom the disconnect is especially stark and urgent. Tech companies, and corporations in other industries rushing to embrace the technology, are competing fiercely for AI-savvy talent. The rapid pace of developments in AI means regulators need to move fast.
Jack Clark, a cofounder of Anthropic, posted on X comparing the EU AI Office’s salary offer unfavorably to tech industry internships. “I appreciate governments are working within their own constraints, but if you want to carry out some ambitious regulation of the AI sector then you need to pay a decent wage,” he wrote. “You don’t need to be competitive with industry, but you definitely need to be in the ballpark.”
The European AI Office did not respond to a request for comment by the time of publication. A statement from Ian Hogarth, chair of the UK’s AI Safety Institute, provided by the Department for Science, Innovation & Technology, says that his organization has “rapidly” recruited 31 engineers and researchers from companies including OpenAI and DeepMind. “Demand to take part in our work evaluating frontier AI models is not slowing down,” he said. “While we do benchmark our salaries against those on offer in industry, the technical experts that are joining us from the top of their fields do so seeking more than a high salary. They are joining to contribute to a critical mission to make sure these models are safe.”
Others hoping regulators and public bodies like the AI Safety Institute can provide a counterweight to tech industry power are less upbeat. The AI Safety Institute, for instance, wants to probe how models work to ensure they’re operating safely. “Empowering government assessors to directly test models is a promising approach to surfacing safety issues, but the success of any program is dependent on the resources it is provided with,” says Harry Law, a researcher in the history and philosophy of AI at the University of Cambridge. “Governments clearly cannot always offer competitive compensation for those with experience working with AI in industry, but they can relax existing rules to meet people in the middle.”
Law says the AI Safety Institute appears to be trying to offer compensation that falls between low government salaries and the more attractive packages found in industry, “but only partially closes the gap.” Targeting recent graduates, as the European office explicitly does, could also help supplement the pool of more experienced candidates willing to trade pay for principles.
Government and public sector AI organizations can offer candidates unique non-monetary benefits. “It’s obvious the public sector can’t compete with OpenAI but, combined with goals to ‘not be evil,’ it could be less laughable,” says Lilian Edwards, professor of law at Newcastle University. She thinks that “principle premium” makes the EU and UK agencies’ offers more attractive.
Not everyone is convinced that method will work. “I think what’s going to end up happening is you’re going to have these archaic governments with poor talent,” says Church. “This talent is at a premium today. And, you know, the brain drain will continue.”
The Importance of AI Expertise for Regulators and the Financial Constraints They Face
Artificial intelligence has become an integral part of many industries, including finance. As the technology continues to advance, regulators face the challenge of keeping up with the rapid pace of innovation while contending with financial constraints.
AI has transformed the financial sector, enabling automation, data analysis, and risk management at unprecedented scale. That rapid advance also brings new challenges for regulators: traditional regulatory frameworks were not designed for the complexities and risks of AI systems, so regulators need a deep understanding of the technology to oversee its use in finance effectively.
The most immediate reason regulators need AI expertise is to safeguard the safety and stability of financial markets. AI systems can disrupt markets and create unforeseen risks, and regulators must be able to identify and mitigate those risks to protect investors and maintain market integrity. Without adequate expertise, they may struggle to understand the intricacies of AI algorithms, making it difficult to supervise AI-driven financial activity effectively.
Expertise also lets regulators keep pace with a fast-moving field. Algorithms are becoming steadily more sophisticated, and regulators need to stay abreast of the latest advances to assess their impact on financial markets. By understanding the capabilities and limitations of AI systems, they can write rules and guidelines that promote innovation while guarding against potential risks.
Despite that need, regulators often face financial constraints that limit their ability to attract and retain top AI talent. The field is fiercely competitive, with private companies offering lucrative salaries and benefits, and regulators frequently cannot match those offers because of limited budgets and bureaucratic hiring processes.
One way to work around those constraints is to partner with academic institutions and industry experts. Collaborating with universities and research institutes gives regulators access to cutting-edge AI research, while industry partnerships provide insight from practitioners and keep agencies up to date on the latest developments in the field.
Another approach is to invest in training programs and workshops that build the AI expertise of existing regulatory staff. Continuous learning opportunities can help bridge the knowledge gap and produce a workforce able to understand and regulate AI-driven financial activity.
Regulators can also turn AI itself to their advantage. AI-powered tools can help analyze large volumes of data, identify patterns, and flag potential risks or anomalies, allowing agencies to streamline their processes, improve efficiency, and make better-informed decisions.
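As a rough illustration of what such tooling might look like, here is a minimal sketch that uses scikit-learn’s IsolationForest to surface unusual transactions for human review. The file name, column names, and contamination threshold are hypothetical stand-ins, not a description of any regulator’s actual system.

```python
# Minimal sketch of AI-assisted anomaly detection for regulatory review.
# Assumes a hypothetical CSV of transactions with numeric columns
# "amount", "counterparty_risk", and "latency_ms"; adapt to real data.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_anomalies(csv_path: str, contamination: float = 0.01) -> pd.DataFrame:
    """Return the transactions the model scores as most anomalous."""
    df = pd.read_csv(csv_path)
    features = df[["amount", "counterparty_risk", "latency_ms"]]

    # IsolationForest isolates outliers with random splits and needs no
    # labels, which suits supervisors who rarely have ground-truth fraud data.
    model = IsolationForest(contamination=contamination, random_state=0)
    df["anomaly"] = model.fit_predict(features)  # -1 marks outliers

    return df[df["anomaly"] == -1].sort_values("amount", ascending=False)

if __name__ == "__main__":
    suspicious = flag_anomalies("transactions.csv")
    print(f"{len(suspicious)} transactions flagged for manual review")
```

Because the model is unsupervised, it needs no labeled fraud examples, which is often the position a supervisory team starts from; flagged rows would go to a human analyst rather than triggering any automatic action.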
In short, regulators need AI expertise to oversee how the technology is used in finance, and they need to understand its complexities and risks to keep markets safe and stable. Financial constraints make hiring and retaining that talent difficult, but partnerships, training programs, and AI-powered tooling can help close the gap.