GenAI Assessment

The U.S. government must understand both the upside potential of GenAI and the limitations and threats it poses. We cannot afford endless debate and hype-driven paralysis – this technology is moving far too fast, and we risk missing the opportunity to steer GenAI’s trajectory. Decisions made at this early stage in its development will have a marked impact on the geopolitical balance of power as GenAI changes the calculus for military, diplomatic, and economic power, as well as societal cohesion.

Some applications of GenAI could exacerbate existing dangers, generate novel threats, and enable state and non-state adversaries to manipulate weak spots in open societies. Bad actors could employ this powerful technology to amplify cyberattacks and digital disinformation campaigns and to manufacture new ways of targeting individuals. GenAI could also aid in the creation of meticulously engineered biological agents. Moreover, adversaries will exploit the very GenAI systems we will increasingly depend upon.

Defining Generative AI

GenAI is a category of algorithms that finds patterns in training datasets and extrapolates from them to generate content such as text, images, or audio, given natural language or multimedia input. The underpinning architecture of the Large Language Model (LLM) is a type of neural network called a transformer. LLMs are trained to predict the next word in each sequence provided as input. As a byproduct of this process, however, they also develop a sophisticated internal representation of the meaning of the input text, leading to surprisingly strong capabilities across a range of tasks. Individuals can interact directly with GenAI tools via natural language chatbots like ChatGPT or Bard, or through application programming interfaces (APIs) that connect software systems.1
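
To make the next-word-prediction mechanic concrete, the sketch below queries a small open model (GPT-2, used purely for illustration and far less capable than the frontier models discussed here) for its probability distribution over the next token. It assumes the open-source transformers and torch Python libraries are installed.

```python
# Minimal sketch of next-token prediction with a small open LLM (GPT-2).
# Illustrative only; frontier GenAI models are typically accessed via chatbots or APIs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Generative AI will change the balance of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocab_size)

# The logits at the final position are the model's distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for token_id, prob in zip(top.indices, top.values):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
```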

Today’s generative AI models can already perform a wide range of tasks. In the text modality alone, LLMs can generate stories and news articles, analyze meaning, translate between different languages and writing styles, and extract information for tasks such as sentiment analysis.2 Looking forward, we will likely continue to see GenAI models stitched together with a wide range of other software tools, or “plug-ins.” Some of these tools will be geared towards improving models’ overall performance and accuracy, such as external information retrieval via search engines and calculator sidecars. Others will enhance models’ capabilities for specific sub-tasks, such as route planners, scheduling algorithms, and access to proprietary or domain-specific databases. We are also seeing a second trend in which larger software systems call out to multiple GenAI models. For example, a system can prompt a specialized GenAI model trained to perform a specific task (e.g., generate a plan and code for a website) and a second GenAI model to critique that output (e.g., check the code for cybersecurity vulnerabilities).
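
A minimal sketch of this second trend follows: one model call drafts an output and a second call critiques it. The generate() helper and prompt wording are hypothetical stand-ins for whichever GenAI API or local model a system actually uses.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to any GenAI model (hosted API or local)."""
    raise NotImplementedError("wire this to a model of your choice")

def plan_and_critique(task: str) -> dict:
    # First model call: a "planner" drafts a plan and code outline for the task.
    plan = generate(f"Produce a step-by-step plan and code outline for: {task}")
    # Second model call: a "critic" reviews that output for security problems.
    critique = generate(
        "Review the following plan and code for cybersecurity vulnerabilities, "
        f"listing concrete issues and fixes:\n\n{plan}"
    )
    return {"plan": plan, "critique": critique}
```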

Limitations and Technology Trajectories

While today’s models produce content that feels strikingly human and can match or exceed human performance on an astonishing array of tasks, they have considerable limitations. The technical limitations relate to the component parts of a GenAI model: the algorithms underpinning its architecture and how it processes data, the data upon which it is trained, and the compute resources required to train and use it.

We can expect to overcome some limitations, while others will likely endure in varying degrees. Limitations of a purely technical nature are poised to change, if not disappear entirely, through potential disruptions to each layer of the generative AI “stack.” The rate of technical progress will depend on barriers to entry, the business models adopted by major AI players, whether and how governments choose to regulate AI, and whether universities remain a significant player in AI research. Across the GenAI ecosystem, current trends suggest rapid progress on a number of fronts at each layer of the technical stack. 

Short of the frontier of technical achievements, we should expect most if not all of these capabilities to diffuse rapidly to a wide set of actors. Even at the frontier, while certain capabilities may remain limited to a small set of actors at first, we should expect them to diffuse over time, whether by way of deliberate open-source strategy, leakage, or some combination thereof.

Models

Since transformer models are based on next-word prediction and do not possess any fundamental “ground truth,” today’s preeminent GenAI models can “hallucinate” plausible-sounding but factually incorrect answers.3 They cannot, at an architectural level, distinguish between queries soliciting purely factual responses and those soliciting creative ones. Additionally, while current generative AI models can sometimes explain the chain of reasoning by which they reached a particular answer, in other cases they cannot, or they provide unfounded explanations.  

Transformer architectures and compute-based scaling laws will likely endure as a prominent vector for generative AI progress, with a number of algorithmic improvements poised to expand these models’ capabilities and help overcome their current limitations. New techniques are already beginning to emerge for improving model inference. These include expanding the size of the context window on which a model conditions its response,4 self-reflection capabilities that enhance models’ reasoning5 and could reduce their propensity to hallucinate,6 chain-of-thought prompting to elicit model reasoning,7 and rapid training techniques to achieve near- or real-time data awareness.8
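
As a concrete illustration of chain-of-thought prompting, one of the techniques cited above, the sketch below contrasts a direct prompt with one that includes a worked example and an explicit invitation to reason step by step; the example questions are invented for illustration.

```python
# Direct prompt: asks only for the answer.
direct_prompt = (
    "Q: A convoy travels 180 km in 4 hours. What is its average speed?\nA:"
)

# Chain-of-thought prompt: a worked example shows intermediate reasoning, and the
# final question invites the model to "think step by step" before answering.
cot_prompt = (
    "Q: A depot loads 12 crates on each of 7 trucks. How many crates ship in total?\n"
    "A: Each truck carries 12 crates. 12 x 7 = 84. The answer is 84.\n\n"
    "Q: A convoy travels 180 km in 4 hours. What is its average speed?\n"
    "A: Let's think step by step."
)
# Both strings would be sent to the same model; the second typically elicits
# intermediate reasoning before the final answer.
```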

We can also expect technical tools to help mitigate some of the socio-technical limitations of today’s generative AI models. To identify and reduce potentially harmful9 responses, many model developers conduct red-teaming exercises to identify and mitigate problem areas before releasing models to the public.10 Some have also imbued AI models with “constitutions” that function like an ethical cross-checking system for model outputs,11 and further calibrated model outputs through techniques such as reinforcement learning with human feedback (RLHF)12 and automated improvement approaches.13 Some model developers are now making their content moderation software directly accessible over the internet.14 There is also a growing suite of technical tools – and industry promises15 – to help identify and label synthetic media, such as digital watermarking.16 
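
To illustrate the “constitution” idea at a purely conceptual level, the sketch below runs a critique-and-revise loop over a draft response. In the published work such revisions are used to generate training data rather than applied at inference time, and the principles, generate() helper, and loop structure here are illustrative assumptions, not any vendor’s actual procedure.

```python
# Illustrative principles; real "constitutions" are longer and more carefully worded.
PRINCIPLES = [
    "Do not provide instructions that facilitate violence or illegal activity.",
    "Avoid demeaning or stereotyping any group of people.",
]

def constitutional_rewrite(generate, user_prompt: str) -> str:
    """Sketch of a critique-and-revise loop; `generate` maps a prompt to model text."""
    draft = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Critique the response below against this principle: {principle}\n\n{draft}"
        )
        draft = generate(
            "Rewrite the response so it addresses the critique while still answering "
            f"the original request.\n\nResponse:\n{draft}\n\nCritique:\n{critique}"
        )
    return draft
```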

Finally, we cannot rule out the emergence of novel algorithmic architectures. While unlikely to displace the transformer model entirely or imminently, research into approaches such as liquid neural networks, which aim to use as few neurons as possible,17 and Bayesian approaches that imbue models with the ability to perform probabilistic reasoning18 suggests the possibility of alternative architectures. These alternatives may prove appealing for certain uses, such as autonomous systems where efficiency will be paramount.

Data 

Data is the fuel for GenAI models throughout their life cycles, from the initial pre-training corpus to subsequent domain- and task-specific fine-tuning to the data models ingest once they are operating in the real world. Today’s GenAI models are limited by the quality, timeframe, and availability of relevant training data. Data, particularly at Internet scale, carries with it human biases, inaccuracies, and falsehoods, meaning today’s GenAI models can generate biased, inaccurate, and offensive responses.19 While increasingly chained to tools for external information retrieval, GenAI models cannot on their own generate accurate, up-to-date information about the world after the date on which they were last trained. There are also areas for which high-quality data is scarce or non-existent, limiting models’ ability to generate output relating to certain emerging fields or phenomena. Finally, as models continue to scale, we may run out of new data on which to train them,20 and could encounter model degradation as a result of datasets that include some amount of AI-generated text.21

Data access will also shape future developments in GenAI.22 While a great deal of openly accessible data exists for training GenAI systems, some of it is subject to copyright or contractual restrictions on use. The U.S. Copyright Office and the courts are grappling with how to address these challenges.23

Model developers will continue to devise techniques to address data availability challenges. One major front is the use of high-quality datasets to train smaller models for specific tasks and domains.24 Other techniques include the development of fully or partially synthetic datasets25 as well as data-lite techniques such as few-shot learning, in which a model learns underlying patterns from a handful of training samples and generalizes those insights to a broader set of concepts.26 Additionally, governments and other actors may curate and/or decide to publish unique datasets to fill current data gaps and drive research on specific topics such as the workforce or healthcare.
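
Few-shot learning takes several forms; with LLMs the most common is in-context prompting, where the “training samples” live entirely in the prompt and no model weights are updated. The sketch below is a minimal, invented example of that pattern.

```python
# Few-shot (in-context) learning: two labeled examples in the prompt, then a new case.
few_shot_prompt = (
    "Classify the sentiment of each review as positive or negative.\n\n"
    "Review: The replacement part arrived quickly and fit perfectly.\n"
    "Sentiment: positive\n\n"
    "Review: The manual was confusing and support never called back.\n"
    "Sentiment: negative\n\n"
    "Review: Setup took five minutes and the results exceeded expectations.\n"
    "Sentiment:"
)
# Sent to a capable model, this prompt typically yields "positive" for the final,
# unlabeled review, generalizing from only two in-prompt examples.
```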

Hardware 

Compute – the specialized computational hardware and infrastructure that allows companies to seamlessly train, deploy, and run inference on their models – has played a definitive role in driving the step-change in AI performance over the past decade. Since the dawn of the deep learning era, GPU performance has improved by roughly 1,000x due to a combination of advances in systems engineering and continued progress in microelectronics fabrication.27 Today, the growth of the AI field remains contingent on continued advances in compute hardware, among other layers of the AI stack. So-called “scaling laws” predict that additional computational power is the chief limiting factor for building larger and more powerful AI systems.28 
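
For readers who want the shape of those scaling laws, the relationship reported in the work cited at endnote 28 is approximately a power law in training compute; the form below is a simplified sketch, and the exponent value is approximate.

```latex
% Approximate compute scaling law (Kaplan et al., 2020): when not bottlenecked by
% data or model size, test loss L falls as a power law in training compute C.
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05
```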

Training foundation models now requires thousands of GPUs, representing tremendous amounts of computing power rivaling the world’s largest supercomputers – which are backed by nation-states.29 What’s more, the energy needed to run these supercomputers has dramatically increased, prompting frontier firms to build on-site power plants while investing in alternative long-term energy sources, such as fusion.30 Continuing on the current hardware trajectory will not only limit the range of actors capable of building massive-scale foundation models (absent policies designed to expand compute access), but also threaten to slow down or cap AI progress in the intermediate term.31

Even as the compute requirements for training massive-scale models continue to grow, we should expect to see efficiencies at various stages of training, deployment, and hardware engineering. Model developers, particularly in the open-source ecosystem, will likely continue to devise methods for reproducing cutting-edge capabilities with as little compute power as possible.32 We are already seeing the emergence of methods to efficiently finetune large language models on small amounts of hardware.33 In addition, compression techniques such as quantization34 and pruning35 may offer entirely novel ways to distill generative AI models down to the essential layers and weights needed for specific tasks and domains. We are already starting to see these techniques deliver. Falcon-40B, for example, achieved state-of-the-art model performance with only 75 percent of the compute needed for GPT-3 through algorithmic efficiencies and high-quality data.36 Finally, the microelectronics industry and various R&D programs are also exploring novel, extremely low-power computational approaches – including analog and neuromorphic methods – that would enable the deployment of AI models that can process data and run inference locally at the edge.
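
As a toy illustration of quantization, the sketch below stores a float32 weight matrix as 8-bit integers plus a single scale factor, roughly quartering memory at a small accuracy cost; production post-training quantization schemes, such as those cited above, are considerably more sophisticated.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: int8 values plus one float scale."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Toy demonstration on random "weights."
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max abs reconstruction error:", np.abs(w - dequantize(q, s)).max())
```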

Potential Threats

As with any new technological tool, GenAI models could cause harm as much as they create opportunities and benefits. Many of these dangers are not novel, as GenAI will augment already existing threats. Nonetheless, the GenAI revolution marks a qualitative change in these challenges – one of scope, scale, and speed. GenAI’s ability to lower barriers to action can expand the points of intersection between threat domains, extending the scope of certain types of threats. Wide access to GenAI-enabled tools by a growing number of malign actors heightens the scale of many existing challenges. And the speed with which GenAI can be put to use accelerates the pace at which harms unfold. Together, these aspects paint a global threat landscape undergoing a foundational transformation.

These augmented harms will, at times, be both distinct and intertwined, creating complex webs of overlapping issues. Policymakers require at least foundational paradigms distinguishing classes of harms in order to devise appropriate, effective responses for specific circumstances. While understanding of the full scope of risks that GenAI presents is still nascent, we offer the three categories below to that end.  

First, GenAI is coming into existence in an era of great power competition. GenAI’s geopolitical challenge is a pressing one. The PRC is a global power with the will, resources, and focus to rival the United States – this essential context cannot be overstated. GenAI will be a tool in this techno-economic competition, and it has the potential to alter the global balance of power, requiring strategic orientation, concerted effort, and diplomacy on the part of the United States to navigate the challenges and opportunities that come with it. By successfully adopting and integrating GenAI systems into its economy, defense industrial base, and innovation (S&T) ecosystem, the United States can better position itself to counter a powerful and ambitious competitor. At the same time, an unrestrained arms race between global powers for AI dominance could also be destabilizing, particularly if nation-states resort to coercive measures to develop and protect their AI advantages over rivals.

Second, as a tool that will diffuse across multiple sectors of the global economy and our lives, GenAI is susceptible to misuse or malign use challenges. Malign actors could use GenAI to disrupt markets, conduct fraud, and enhance the risk and potency of cyberattacks, among other actions. Simultaneously, human misuse of GenAI tools could result in harm to oneself or others in circumstances where the human, due to lack of understanding or training, does not comprehend a tool’s limitations. These threats pose severe risks to the security, stability, and prosperity of societies, particularly democracies, requiring a robust and adaptable set of policy defenses.

Third, misalignment challenges in GenAI refer to discrepancies between the user’s intentions and the content the AI model generates. This can take various forms, such as ethical misalignment, where the AI produces content that is inappropriate or harmful to a select population due to bias. Factual inaccuracies, where the AI disseminates incorrect information, are also a key concern; inaccuracies need not rise to the level of intentional disinformation for a person relying on that generated information to be harmed. Finally, there is contextual misalignment, in which the AI, despite being trained on extensive datasets, fails to grasp the deep contextual cues that would be evident to a human and generates responses or actions with undesired consequences.

A few of the most challenging threats in the national security space include: 

  • The Problem of Disinformation. As the 2024 elections draw closer, the most immediate threat posed by GenAI lies within the realm of disinformation. GenAI can generate text, audio, and visual content that is alarmingly convincing, and yet entirely fabricated. This capability can serve as a potent force multiplier for foreign troll farms, which have been systematically sowing discord within our public discourse.37 Our institutions, tasked with the mission to unveil and counter these disinformation campaigns, are being stretched thin. Hostile actors are likely to employ GenAI to escalate the intensity and sophistication of their disinformation operations. This risk may grow as GenAI increasingly becomes open-source, widely accessible, and modifiable, thereby empowering less sophisticated actors to wage disinformation warfare. Without proactive countermeasures, GenAI may pose severe challenges to the robustness of our democratic institutions.
  • Cybersecurity: A Shifting Battleground. GenAI holds significant potential to intensify cyber threats. GenAI models can be harnessed to provide detailed tutorials and even generate malware code, significantly lowering the entry barriers for both private and state-sponsored hackers.38 They could enable such actors to employ hacking tools with unprecedented scale and precision. The mounting volume, speed, and sophistication of cyberattacks may soon outpace human defenses, making automated cyber defenses facilitated by GenAI an imperative. GenAI could thus escalate the costs and risks associated with cybersecurity for companies and governments alike, exacerbating the instability of cyberspace in times of crisis or war.
  • The Intersection of GenAI and Biosciences. GenAI can significantly influence the landscape of biology, chemistry, and medicine, presenting unique risks. As pharmaceutical companies and research laboratories begin to utilize GenAI to generate novel ingredients for vaccines and therapies, the risk of accidental releases of toxic substances, antibiotic-resistant superbugs, or highly contagious viruses increases. As GenAI fosters an expansion of the number of actors across the globe capable of working on synthetic biology, it is reasonable to assume that not all of them will adhere to the highest safety standards. In addition, malevolent non-state actors or foreign regimes with covert biological and chemical weapons programs might exploit GenAI to create lethal agents custom-made for assassinations or even ethnic cleansing.

Open-Source AI Models

Open-source models uniquely illustrate the promise and perils of GenAI. Open-source GenAI models can be built and accessed with limited compute resources and limited technical proficiency. They hold great promise to spread technological progress. Allowing a wider set of actors to contribute to, learn from, and govern a shared knowledge base will both expand the number of innovators and spur economic gains. Some private companies, including major actors in the space,39 are staking their GenAI business model on open-source AI. While open-source models currently lag behind the frontier, many are quite capable.40

As is the case with the enterprise software ecosystem, companies can choose41 whether it is in their best interest to go the proprietary or open-source route.42 Open-source models also could offer a way for the academic community to remain a key driver of AI research as commercial competition closes off a historically open field. The cooperative research promoted through open-source GenAI could expand the range and diversity of actors driving the development and adoption of this technology, as well as the number of experts who can contribute to governance of these tools.

At the same time, by lowering the barriers to entry for developing and using GenAI, the diffusion of open-source models raises potential perils in the hands of malign actors, whether non-state entities or geopolitical rivals. These risks exacerbate existing social cohesion-based threats. Widening the group of actors capable of spreading synthetic media at speed and scale amplifies the already present disinformation challenge. Such broad access also deepens the challenge of monitoring for and preventing long-term risks of model misalignment. Complicating matters further, it appears to be technically possible to strip nearly any open-source GenAI model of guardrails against harmful outputs, posing considerable challenges to effective governance. 

Certainly, GenAI is not the first time we have confronted technologies with such simultaneous potential for benefits and harms. However, by combining tremendous power with wide accessibility, GenAI signals a wave of empowerment so significant that it alters how we assess risk. Previously, we understood technologies would yield harms and built resilience systems to manage those impacts. Those systems acted as levees against potential floods. Today, the widespread access brought by open-source GenAI could yield changes that current systems were not designed to handle.

The diffusion of open-source generative AI models is irreversible. A more pragmatic and forward-thinking strategy is necessary if the government is to effectively manage and navigate the widespread dissemination of these GenAI models. Rather than chasing unattainable controls, understanding GenAI’s mechanics, dynamics, and future trajectory will enable the formulation of informed strategies and responsive measures that enhance our resilience and cultivate an environment where the positive elements of open-source collaboration can flourish. The United States is not currently positioned to mitigate potential risks and harmful uses, exploit opportunities, and guide the development of GenAI technologies in a manner consistent with our democratic values.


Endnotes

  1. This definition for generative AI comes from SCSP’s Generative AI Task Force. SCSP formed a Generative AI Task Force designed to provide options for the U.S. government, allies and partners, industry, and academia to address the challenges and opportunities that generative AI presents for national competitiveness. Over the course of three meetings in the Spring of 2023, the task force developed recommendations for how to foster responsible innovation and harness the transformative power of generative AI, while addressing ethical concerns and potential risks. Many of their findings informed this report.
  2. Introducing ChatGPT, OpenAI (2022).
  3. James Vincent, Google’s AI Chatbot Bard Makes Factual Error in First Demo, The Verge (2023). 
  4. Benj Edwards, Anthropic’s Claude AI Can Now Digest an Entire Book like The Great Gatsby in Seconds, Ars Technica (2023).
  5. Isaac Kauvar, et al., Curious Replay for Model-Based Adaptation, arXiv (2023). 
  6. Noah Shinn, et al., Reflexion: Language Agents with Verbal Reinforcement Learning, arXiv (2022). 
  7. Jason Wei, et al., Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, arXiv (2022).
  8. Adam Zewe, Learning to Grow Machine-Learning Models, MIT News Office (2023); Armen Aghajanyan, et al., Scaling Laws for Generative Mixed-Modal Language Models, arXiv (2023).
  9. Robert Huben, Testing Ways to Bypass ChatGPT’s Safety Features, LessWrong (2022).
  10. Ethan Perez, et al., Red Teaming Language Models with Language Models, arXiv (2022).
  11. Yuntao Bai, et al., Constitutional AI: Harmlessness from AI Feedback, arXiv (2022).
  12. Nathan Lambert, et al., Illustrating Reinforcement Learning from Human Feedback (RLHF), Hugging Face (2022). 
  13. Jing Xu, Improving Open Language Models by Learning from Organic Interactions, arXiv (2023); Jack Clark, Import AI 332: Mini-AI; safety through evals; Facebook releases a RLHF dataset, ImportAI (2023).
  14. See The Moderation Object, OpenAI (last accessed 2023).
  15. Diane Bartz & Krystal Hu, OpenAI, Google, Others Pledge to Watermark AI Content for Safety, White House Says, Reuters (2023).
  16. Kyle Wiggers, Microsoft Pledges to Watermark AI-Generated Images and Videos, TechCrunch (2023). 
  17. Payal Dhar, “Liquid” Neural Network Adapts on the Go, IEEE Spectrum (2023). 
  18. Yeung Wong, Why You Should Use Bayesian Neural Network, Towards Data Science (2021).
  19. Madhumita Murgia, OpenAI’s Red Team: The Experts Hired to ‘Break’ ChatGPT, Financial Times (2023).
  20. The Bigger-is-Better Approach to AI is Running Out of Road, The Economist (2023).
  21. Carl Franzen, The AI Feedback Loop: Researchers Warn of ‘Model Collapse’ as AI Trains on AI-Generated Content, Venture Beat (2023).
  22. SCSP issued a National Data Action Plan that discusses data availability across the public and private sectors and provides recommendations for maximizing accessibility in a responsible manner. See National Data Action Plan, Special Competitive Studies Project (2022).
  23. See e.g., Blake Brittain, Getty Images Lawsuit Says Stability AI Misused Photos to Train AI, Reuters (2023).
  24. Oliver Whang, The Race to Make A.I. Smaller (and Smarter), New York Times (2023); Katyanna Quach, Small Custom AI Models are Cheap to Train and Can Keep Data Private, Says Startup, The Register (2023).  
  25.  Synthetic Data Generation: Definition, Types, Techniques, & Tools, Turing (last accessed 2023).
  26. Archit Parnami & Minwoo Lee, Learning from Few Examples: A Summary of Approaches to Few-Shot Learning, arXiv (2022).
  27. John Russell, What’s Stirring in Nvidia’s R&D Lab? Chief Scientist Bill Dally Provides a Peek, HPCWire (2023).
  28. Jared Kaplan, et al., Scaling Laws for Neural Language Models, arXiv (2020).
  29. Jonathan Vanian & Kif Leswing, ChatGPT and Generative AI are Booming, But the Costs Can Be Extraordinary, CNBC (2023).
  30. Andrew Paul, Microsoft Thinks This Startup Can Deliver on Nuclear Fusion by 2028, Popular Science (2023).
  31. Andrew J. Lohn & Micah Musser, AI and Compute: How Much Longer Can Computing Power Drive Artificial Intelligence Progress?, Center for Security and Emerging Technology (2022); Lennart Heim, This Can’t Go On(?) – AI Training Compute Costs, Lennart Heim (2023).
  32. See, for example, Rohan Taori, et al., Alpaca: A Strong, Replicable Instruction-Following Model, Stanford University (2023).
  33. Tim Dettmers, et al., QLoRA: Efficient Finetuning of Quantized LLMs, arXiv (2023).
  34. Zhewei Yao, et al., ZeroQuant-V2: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation, arXiv (2023); Bringing Hardware Accelerated Language Models to Android Devices, Machine Learning Compilation (2023).
  35. Elias Frantar & Dan Alistarh, SparseGPT: Massive Language Models Can be Accurately Pruned in One-Shot, arXiv (2023).
  36. Cameron R. Wolfe, Falcon: The Pinnacle of Open-Source LLMs, Deep (Learning) Focus (2023). 
  37. Karen Hao, Troll Farms Reached 140 Million Americans a Month on Facebook Before 2020 Election, Internal Report Shows, MIT Technology Review (2021).
  38. Jonathan Barney, et al., GenAI Will Amplify Cybersecurity Threats, But There’s Hope, Security Magazine (2023). 
  39. Nick Clegg, Openness on AI is the Way Forward for Tech, Financial Times (2023). 
  40. Jon Victor, Open-Source AI Is Gaining on Google and ChatGPT, The Information (2023).
  41. Introducing BloombergGPT, Bloomberg’s 50-Billion Parameter Large Language Model, Purpose-Built From Scratch for Finance, Bloomberg (2023); Jamiel Sheikh, Bloomberg Uses Its Vast Data To Create New Finance AI, Forbes (2023).
  42. Open-source is not a binary condition of a GenAI model but rather occurs along a spectrum based on multiple criteria related to the model’s availability, documentation, and access methods. See Andreas Liesenfeld, et al., Opening Up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators, arXiv (2023); Irene Solaiman, The Gradient of Generative AI Release: Methods and Considerations, arXiv (2023). 
