In 2024, ongoing digitalization continues to enhance the efficiency of government programs and the effectiveness of public policies, as detailed in a previous white paper. Two critical elements driving this digital transformation are data and artificial intelligence (AI). AI plays a pivotal role in unlocking value from data and gaining deeper insights into the extensive information that governments collect to serve their citizens.
With demand for generative AI expected to grow this year, it is imperative that the public sector embrace responsible use of this technology. Only by doing so can governments establish themselves as trustworthy stewards of it.
The differences between generative AI and traditional AI
To understand the unique challenges posed by generative AI compared to traditional AI, it helps to understand their fundamental differences. Traditional AI relies primarily on algorithms and extensive labeled data sets to train models through machine learning. These models can provide recommendations or identify certain behaviors by recognizing patterns and adhering to predefined rules. For instance, traditional AI is used to filter spam email more effectively, to improve movie and product recommendations for consumers and to enable virtual assistants to help people find information.
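As a point of reference, here is a minimal sketch of that traditional, supervised approach: a toy spam filter that learns patterns from a small set of labeled emails. The data set is invented and the library choice (scikit-learn) is illustrative, not drawn from any government system.

```python
# Minimal sketch of "traditional" supervised ML: a spam filter trained on
# labeled examples. The tiny dataset here is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now",              # spam
    "Meeting agenda for Monday",         # not spam
    "Claim your reward, click here",     # spam
    "Quarterly budget report attached",  # not spam
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# Pattern recognition over labeled data: word counts feed a Naive Bayes model
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Free reward waiting, claim now"]))  # likely [1] (spam)
```

The key point is that every training example carries a human-supplied label, which is exactly the dependency that generative AI's foundation models relax.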
Generative AI is emerging as a valuable solution for automating and improving routine administrative and repetitive tasks. This technology excels at applying foundation models, which are large neural networks trained on extensive unlabeled data and fine-tuned for various tasks. It can effectively identify, summarize, convert, predict and generate content from large data sets. Implementing this technology in the public sector can significantly improve efficiency, enabling organizations to complete their daily tasks with a fraction of the resources.
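For illustration, the sketch below applies a publicly available pre-trained summarization model through the Hugging Face transformers library. The model choice and sample text are assumptions made for the example, not a recommendation for production use.

```python
# Minimal sketch of applying a pre-trained foundation model to summarize text.
# Assumes the Hugging Face `transformers` library; the model is illustrative.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Agencies collect extensive information to serve citizens, including "
    "program enrollment records, service requests and public comments. "
    "Staff spend significant time condensing these materials into briefings."
)

# The same pre-trained model can be reused across summarization and similar
# tasks without assembling a task-specific labeled training set.
summary = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```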
Generative AI presents an unprecedented opportunity to enhance various aspects of government operations and improve services for citizens. It can equip government workers with more powerful tools for answering questions and conducting research. Tasks such as contract writing and management, which are both time-consuming and crucial, could benefit greatly from the application of generative AI.
Last year, the US Department of State sought feedback on the challenges and security considerations of introducing generative and natural language processing AI into its network. A June request for information from the State Department revealed its aim to improve worker efficiency and accuracy in repetitive tasks related to market research and acquisition planning for contract writing. Generative AI trained through machine learning might then help draft new contracts based on that research.
Implementing generative AI responsibly
The remarkable generative capabilities of this emerging AI technology raise questions about its responsible use in the public sector. For example, contract managers need to know that the original research is faithfully converted into a legally binding contract for two or more parties.
The public recently encountered generative AI primarily through tools that use pre-existing text, images, videos and audio to create customized content on demand. However, the level of detail disclosed about how some of these models were trained can be insufficient, especially for large enterprises and highly regulated industries that rely on public trust.
To develop responsible AI, government leaders must carefully prepare their internal data to harness the full potential of both traditional and generative AI. Setting responsible standards is a crucial government role, and it requires integrating responsibility from the start rather than treating it as an afterthought. This includes maintaining human oversight to help ensure the accuracy of AI-generated content and to prevent bias, among other considerations.
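One common pattern for such oversight is a human-in-the-loop gate, sketched below in simplified form. The class and function names here are hypothetical stand-ins for illustration, not any particular product's API.

```python
# Minimal sketch of human-in-the-loop oversight: AI-generated drafts are held
# for human review before release. All names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer: str | None = None

def generate_draft(prompt: str) -> Draft:
    # Stand-in for a call to a generative model
    return Draft(text=f"[AI draft responding to: {prompt}]")

def human_review(draft: Draft, reviewer: str, accept: bool) -> Draft:
    # No AI output is cleared until a named reviewer signs off
    draft.approved = accept
    draft.reviewer = reviewer if accept else None
    return draft

def publish(draft: Draft) -> None:
    if not draft.approved:
        raise PermissionError("Draft requires human approval before release")
    print(f"Published (approved by {draft.reviewer}): {draft.text}")

draft = generate_draft("benefits eligibility summary")
publish(human_review(draft, reviewer="analyst_1", accept=True))
```

The design choice worth noting is that publication fails closed: unreviewed output cannot reach citizens by default.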
Key pillars for responsible AI in government
IBM’s AI development centers on five fundamental pillars to help ensure trustworthy AI. Government leaders should prioritize these pillars when considering the responsible development, training and deployment of AI:
- Fairness in an AI system refers to its ability to treat individuals or groups equitably, depending on the context in which the AI system is used. That means countering bias and preventing discrimination related to protected characteristics, such as gender, race, age and veteran status (one way to quantify fairness is sketched after this list).
- Privacy pertains to an AI system’s ability to prioritize and safeguard consumers’ privacy and data rights while complying with existing regulations related to data collection, storage, access and disclosure.
- Explainability is important because an AI system must be able to provide a human-interpretable explanation for its predictions and insights in a way that does not hide behind technical jargon.
- Transparency means that an AI system must include and share information on how it was designed and developed and the data or data sources used to feed the system.
- Robustness is an AI system’s ability to effectively handle exceptional conditions, such as abnormalities in input. It helps to ensure consistent outputs.
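As referenced in the fairness pillar above, the following is a minimal sketch of one way to begin quantifying fairness: comparing favorable-outcome rates across groups (demographic parity) and computing a disparate-impact ratio. The data are invented for illustration, and this metric is only one of many possible fairness measures.

```python
# Minimal sketch of one way to quantify the fairness pillar: comparing an AI
# system's favorable-outcome rates across groups. The data and interpretation
# below are invented for illustration, not a compliance standard.
from collections import defaultdict

# (group, model_decision) pairs; 1 = favorable outcome, 0 = unfavorable
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, favorable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favorable[group] += outcome

rates = {g: favorable[g] / totals[g] for g in totals}
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}

# Disparate-impact ratio: values far below 1.0 flag a potential fairness gap
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")
```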
IBM watsonx™, an integrated AI, data and governance platform, embodies these principles by offering a seamless, efficient and responsible approach to AI development across various environments. More specifically, the recent launch of IBM® watsonx.governance™ helps public sector teams automate and address these areas, enabling them to direct, manage and monitor their organization’s AI activities. This tool facilitates clear processes so organizations can proactively detect and mitigate risks while supporting their compliance programs for internal AI policies and industry standards.
As the public sector continues to embrace AI and automation to solve problems and improve efficiency, it is crucial to maintain trust and transparency in any AI solution. Teams should have the ability to comprehend and manage the AI lifecycle effectively. Proactively adopting responsible AI practices is an opportunity for all of us to improve.