The EU A.I. Act can get democratic control of artificial intelligence–but only if open-source developers get a seat at the table

Members of the European Parliament take part in a voting session about the Artificial Intelligence Act during a plenary session at the European Parliament in Strasbourg on Jun. 14.

Policymakers in the EU are taking the final steps toward the world's first comprehensive A.I. regulation. In the years to come, the A.I. Act will shape how artificial intelligence is built, deployed, and regulated globally. The EU established itself as a trendsetter for tech regulation with the GDPR, whose approach spread around the world in what became known as the "Brussels Effect." That track record means policymakers around the world are taking note, as are open-source software developers.

The A.I. Act presents an opportunity to get democratic oversight of A.I. right by encouraging responsible A.I. innovation while mitigating the technology's risks. The need is clear: Without responsible development, deployment, and use, A.I. systems can cause real harm, ranging from biased algorithmic decisions that restrict access to important life opportunities to the proliferation of misinformation.

The Act takes a risk-based approach to regulating A.I. across all sectors: Before systems can be sold or deployed, they must meet a series of risk-management, data, transparency, documentation, oversight, and quality requirements, which vary depending on the system and use case. Using A.I. in sensitive areas, such as critical infrastructure or determining access to life opportunities in education and employment, is deemed high-risk. Generative A.I., which entered mainstream culture with the launch of ChatGPT, is likely to be subject to regulation too, with the EU Parliament expressly including new provisions to address it. The next step is to finalize the law in negotiations among the three EU policymaking bodies (the Parliament, the Council, and the Commission), which are expected to conclude by the end of the year.

The Act reflects many tools for responsible A.I. that the open-source community has pioneered over the years, including model cards and datasheets for datasets. These are invaluable tools for documenting the capabilities, limitations, and design decisions that went into building a particular A.I. system. Developers continue to drive A.I. innovation, with contributions ranging from maintaining the open-source frameworks (like PyTorch and TensorFlow) essential for training A.I. to creating plugins that unlock new capabilities for ChatGPT.

In the last year, the global developer community has rallied around open-source A.I. models, which contribute to the sustainability, inclusivity, and transparency of A.I., for example by making useful models efficient enough to run on a phone, supporting neglected languages, and giving academic researchers access to study the inner workings of A.I. models. The A.I. Act must recognize this open innovation and how it can encourage the responsible development of A.I. products offered in the single market.

Encouragingly, the A.I. Act may well recognize these important contributions of the open-source developer community. The Parliament text includes a risk-based exemption for open-source developers: collaborating on and building A.I. components in the open is protected. While open-source developers are encouraged to adopt documentation best practices such as model and data cards, compliance is appropriately the responsibility of the entities that incorporate open-source components into A.I. applications. The global open-source ecosystem has powered widespread software innovation, and we share Parliament's excitement about what open source can contribute to A.I. These provisions offer important clarity for developers and should be adopted in the final law.

Further steps could also improve the Act. Policymakers reasonably wished to respond to the rise of generative A.I.: The Commission proposed its text in 2021, and the Council adopted its position in December 2022, one week after the launch of ChatGPT. The Parliament has since adopted provisions for providers of the foundation models that underpin generative A.I. systems. However, it is unclear how open-source developers, academics, and non-profits will be able to comply with obligations tailored for products. As the final negotiations begin, EU policymakers should weigh these questions as they set the rules of the road, so that any requirements created for foundation models are calibrated to a model's risk level and can realistically be implemented.

These rules may well reach beyond the single market. Countries from Canada to Brazil, and the United States to China, have proposed or are considering A.I. regulation. There is also increasing recognition of the value that open-source innovation brings: In 2018, it was estimated to contribute up to €95 billion to EU GDP, and that figure will only rise with the development of A.I.

The EU's experience can be an important example for others to learn from, particularly in how rules can be tailored to support open-source developers. U.S. Senate Majority Leader Chuck Schumer has unveiled a bipartisan framework for comprehensive A.I. regulation and plans to convene forums this fall to hear from a wide range of stakeholders. This is promising, and we encourage organizers to include open-source developers in these discussions.

Since last year, Canadian policymakers have been hard at work on legislation to create an A.I. regulator that takes a risk-based approach similar to the EU's while protecting developers collaborating on A.I. development.

As A.I. technology and policy evolve rapidly around the world, one unifying thread will be a risk-based approach that allows us to responsibly collaborate on A.I. development.

Shelley McKinley is GitHub’s chief legal officer.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
