ARC5205: Regulating Artificial Intelligence Research Paper


Regulating Artificial Intelligence Essay Prompt: Must have at least two books as sources (the more, the better); APA, MLA, or Chicago style is acceptable. Can you please also mention the architecture profession in the topic, as it pertains to my course? The course: ARC5205 / Advanced Design Theory. Professor: Neil Leach. I would like to focus […]

Regulating Artificial Intelligence Academic Paper

Laws have promoted prosperity and commerce, kept people safe, and ordered society for millennia. Until now, however, regulation has concentrated only on humans. The advancement of artificial intelligence (AI) has raised novel concerns for which present-day legal systems are only partially equipped (Turner 7). AI is swiftly moving from the domain of science fiction into daily life: the devices people use translate languages with ever-increasing fluency, speak to people, and understand what they say (Scherer 354). Autonomous machines can also perform document review, use facial-recognition software to flag terrorists, and execute complex financial transactions. The projected further growth of AI has prompted expressions of anxiety from various sectors, including calls for government restrictions on AI operations and regulation of AI development. In this way, AI is becoming a true general-purpose technology, one that will likely permeate almost every aspect of culture, society, science, the economy, and industry.

To begin with, even if the existential anxieties regarding AI never materialize, there are enough concrete instances of harm linked to existing implementations of the technology to warrant alarm about the degree of control over AI development. Protecting society from the risk of harm therefore requires some form of regulation. Nonetheless, growth in regulatory capacity has not kept pace with advances in technologies such as AI (Guihot et al. 385). One reason is decentered regulation, whereby the traditional duties of public regulators, including command-and-control lawmaking by the government, have dissipated. Other contributing dynamics are the increased power of technology organizations and the dwindling of government resources. These factors have left the AI development sector relatively unregulated. As is evident in the field of architecture, the regulation of AI is decentralized across all industries, and AIDA, as a proposed legislative regime, leverages the respective institutional strengths of legislatures, agencies, and courts to control the undesirable risks resulting from AI.

AI and Architecture Profession

It is crucial to note that with AI, architects have perceived major transformations coming to their discipline, not all of which are positive. Architects pride themselves on being creative individuals. Given the nature of the technology in question, one fear that often arises is that AI could take over the portion of the job that architects like the most, namely the creative part. Architecture is as much about how a building is imagined, prefigured, and translated as it is about its formation (Steenson 7). Thus, the main worry rests on the notion that super-intelligent machines would replace professionals in designing vehicles, creating art, and designing buildings. However, even as AI advances across various design-related sectors, computers are better suited to handling the mundane so that architects can augment their creative process. Machines are not good at innovative, open-ended solutions; that work is still reserved for people. Through automation, architects could instead save the time spent on repetitive tasks and devote it to design.

At present, the function of AI in architecture has become more useful and crucial than ever, offering many advantages to designers and architects. Implementing big data is one of the most significant benefits of AI in architecture. One challenge engineering and architectural companies face is gathering and evaluating massive amounts of data in as little time as possible, and thanks to big data analytics, organizations can now perform such assessments more practically than ever. Parametric architecture is another fascinating outcome of the role of AI in architecture and one of the latest and most innovative approaches to contemporary design. Through this technique, buildings are designed using parameters and algorithms (Eltaweel and Yuehong 1094). The concept sets aside traditional conventions of architecture, such as repetition and symmetry; instead, it opens the possibility of forming flexible, hybrid structures and forms. Last but not least, big data offers architects valuable information when assessing the feasibility of a project, allowing them to manage the planning and construction of buildings together.
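The idea behind parametric design can be shown with a minimal sketch: a single rule, driven by a few parameters, generates a whole family of non-repetitive forms. The example below is purely illustrative (the function name and the sinusoidal rule are assumptions, not drawn from any cited source); it varies facade panel widths along a wall while preserving the total width.

```python
import math

def facade_panel_widths(total_width, n_panels, amplitude=0.3):
    """Generate non-uniform panel widths along a facade from a
    sinusoidal rule, rescaled so the panels exactly fill the wall."""
    raw = [1.0 + amplitude * math.sin(2 * math.pi * i / n_panels)
           for i in range(n_panels)]
    scale = total_width / sum(raw)
    return [w * scale for w in raw]

# Changing a single parameter (total width, panel count, or amplitude)
# regenerates the entire design, rather than editing panels one by one.
widths = facade_panel_widths(24.0, 8)
print([round(w, 2) for w in widths])
```

The point is not the particular formula but the workflow it stands for: the designer edits parameters and rules, and the geometry follows, which is what distinguishes parametric architecture from symmetric, repetition-based composition.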

However, just as in other sectors, the development of AI in architecture has transpired in a regulatory vacuum. Very few regulations or laws explicitly address the unique challenges that AI raises (Reed 1). Moreover, virtually no courts appear to have formed standards for determining who should be held liable for harm caused by the technology. In addition, old-fashioned regulatory procedures, including tort liability, research and development oversight, and product licensing, seem generally ill-suited to handling the threats associated with autonomous and intelligent machines. Despite the challenging features of AI in various sectors, including architecture, there is good reason to believe that applying legal mechanisms can reduce the risks that AI presents to the public without stifling innovation.

Artificial Intelligence Development Act

The first step in regulating AI should be an act that institutes the general principles of an AI statute. In this regard, the Artificial Intelligence Development Act (AIDA) should be considered as the regulation for AI. AIDA is a regulatory regime that would codify broad AI principles and form a bifurcated tort liability system. AIDA’s primary objective is to establish laws for regulating AI and the agency that would govern it, the Artificial Intelligence Regulatory Agency (AIRA) (Fitsilis 56). This legislative regime aims to ensure that AI is aligned with human interests, amenable to human control, secure, and safe by encouraging the manufacture of AI that has those features and deterring the creation of machines that lack them. Furthermore, AIDA would obligate AIRA to promulgate rules that define AI and to periodically revisit those definitions. AIDA thus gives AIRA the power to elucidate and specify most dynamics of the AI regulatory context.

AIDA functions by utilizing a bifurcated tort liability scheme to encourage AI certification and ensure the machines’ security and safety. Its other principles include limiting AI’s negative impact and ensuring that its benefits are spread widely (Barfield and Pagallo 56). Under this scheme, manufacturers and designers implementing agency-certified AIs would face limited tort liability, whereas manufacturers, designers, and owners of uncertified AIs would be subject to strict joint and several liability. Without outlawing uncertified AIs, this regulatory regime aims to direct and encourage manufacturers and designers to audit their machines by putting them through the certification process, thereby limiting their liability burden. The approach would therefore foster the creation of AI in a controllable, secure, and safe environment, minimizing the related risks.
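The bifurcated scheme amounts to a simple decision rule, which can be sketched as follows. This is an illustrative model only, not statutory language: the function and parameter names are hypothetical, and it assumes that certified AI operating outside the scope of its certified design loses the certification's protection.

```python
def liability_standard(certified: bool, within_certified_scope: bool = True) -> str:
    """Illustrative model of AIDA's bifurcated tort liability scheme:
    agency-certified AI operating within its certified design is judged
    under negligence rules; uncertified AI, or AI operating outside its
    certified design, faces strict liability."""
    if certified and within_certified_scope:
        return "negligence"
    return "strict liability"

print(liability_standard(certified=True))    # negligence
print(liability_standard(certified=False))   # strict liability
print(liability_standard(certified=True, within_certified_scope=False))
# strict liability
```

The design choice the rule captures is incentive-based: certification is voluntary, but the gap between the two liability standards steers manufacturers toward it.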

Equally important, AIDA should be considered because it leverages the institutional strengths of courts, agencies, and legislatures while taking into account the unique aspects of AI that make it challenging to regulate. It capitalizes on the democratic legitimacy of parliaments by assigning them the task of setting the principles and objectives that guide AI regulation (Scherer 396). The regulatory regime also delegates the functional responsibility of evaluating the safety of AI systems to an independent agency staffed with experts. Agencies are assigned these critical tasks because they are better equipped than courts to analyze the safety of specific AI systems, primarily as a result of the skewed incentives of the court system. Last but not least, AIDA leverages the courts’ experience in arbitrating individual disputes by assigning them the role of determining whether the operations of an AI fall within the scope of a design certified by the agency. In brief, the courts would apportion obligations when tortious harm arises from the interaction between multiple components of an AI system.

The Agency

Under AIDA, AIRA would have two components: policymaking and certification. The policymaking arm establishes the AI certification process and makes exemptions permitting AI research to be carried out in specific environments without the researcher being subject to strict liability. Certification requires AI developers to conduct safety testing and to submit the outcomes along with supporting documentation. As noted, traditional regulatory approaches, such as tort liability, oversight of research and development, and product licensing, are unsuited to managing the risks linked to AI (Barfield and Pagallo 196). To this end, certification through AIRA is crucial: the agency reviews the safety of autonomous and intelligent machines against set standards, including mechanisms for ensuring human control, goal alignment, and the risk of causing physical harm, and certifies them (or not) accordingly.

The Courts

The courts would also play a vital role in the regulation and governance of AI. Under the AIDA framework, the courts’ responsibility would be to arbitrate individual tort claims resulting from harm caused by AI, harnessing the courts’ fact-finding experience and institutional strength (Scherer 397). Under AIDA’s liability framework, the courts would apply strict liability rules to cases concerning uncertified AI and negligence rules to matters involving certified AI. In the former class of cases, the most vital aspect of this task entails apportioning responsibility among the operators, distributors, manufacturers, and designers of harm-causing AI (Scherer 398). Lastly, the allocation of accountability may proceed much as it does in ordinary multi-defendant tort cases and in claims for contribution or indemnity.

Apart from that, the court system at the federal and state levels would operate as a kind of indirect regulation, governing autonomous and intelligent machines on a case-by-case basis. That control would be reactive rather than proactive, as it is with state legislatures and AIRA (Scherer 398). Its nature would largely be determined by the patterns and individual cases presented before the courts. In regulating and governing AI, the courts would perform three functions: the resolution of tort disputes, the resolution of federalism disputes, and the resolution of search and seizure disputes. The resolution of tort disputes centers on traditional tort claims, including negligence and product liability, involving AI products and their developers and manufacturers; such cases would inevitably reach the courts. In these cases, the verdicts would affect future behavior mainly through the constraining effect of liability.

The pace of change in the AI sector is one likely cause for concern in the resolution of tort disputes, since this constraining influence may stunt investment in unfamiliar but valuable new technologies: courts are inclined to focus more on the risks of a new technology and give less attention to its advantages. This propensity poses a severe challenge to the courts as they aim to regulate AI, a sector burdened by pre-existing notions of what programs and machines can and should do (Barfield and Pagallo 208). The tendency also limits the kinds of industry-wide assessments that the courts can conduct on their own. These limitations should not be overlooked; nevertheless, within the tort system, the courts are well positioned to operate as the fact-finders essential to each case. In brief, litigation has the potential to disclose vital information that state legislatures and AIRA can rely on, such as how AI operates in specific scenarios and how decisions are made within the AI industry, both of which are vital when regulating AI.

Another function of the courts involves the resolution of federalism disputes. Attempts by AIRA to regulate AI in a manner that impinges on the states’ traditional authority would lead governments to seek resolution from the courts of the disagreements those regulations create. For instance, states have the power to govern trespass and privacy within their borders, yet AIRA would likely aim to regulate the use of personal data in AI in a way that forms a new privacy standard for users while possibly creating a new, lower standard of trespass for AI developers (Barfield and Pagallo 208). Lastly, the courts would play a substantial role in resolving search and seizure disagreements. The courts are already laying the groundwork for the search and seizure issues that various forms of AI will raise, and they would be called upon to standardize how law enforcement may access AI systems for evidence in any organization.

Impact of the Regulation


AI products such as drones or robots that use facial recognition can pose safety and privacy hazards. The safety threat in such AI products includes the risk of accidents that would not otherwise have occurred, since accidents can result from unethical decision-making, deficient or flawed software programming, or even minor hardware or software errors in high-risk scenarios (Guihot et al. 385). Thus, regulation must offer an environment that allows the technology to operate at its full potential while safeguarding humans from unacceptable risks.

Concerning safety regulation, there is a need for a law that would ensure that people are safe when using AI. However, it may be difficult for AIRA to anticipate all the appropriate safety requirements as new machines become available. Rather than promulgating prescriptive rules, AIRA could require any organization introducing an AI-enabled product with no pre-existing safety requirements to file a document setting out the practical safety expectations that users can have for the product (Barfield and Pagallo 198). Other companies entering the field with complementary or competing products must file similar documents. Collectively, these materials become the safety standards that the courts and AIRA enforce in the new domain, and organizations can update the documents as they upgrade their products. Essentially, the approach acts as an evolving, obligatory regime of self-imposed safety regulation for each new sector until the sector is mature enough for AIRA to form a technology-specific guideline for it. In brief, the objective of this method is to standardize the safety of AI products while pinpointing possible risks and offering greater transparency about those risks for the final regulations.


AI applications can detect and track people across various devices in public spaces, at work, and at home. Facial recognition is another means by which people are identified and tracked, and it raises concerns because it could transform expectations of anonymity in public space (Agrawal 424). Additionally, AI-driven automated decision-making, profiling, and identification may lead to biased, discriminatory, or unfair outcomes. People can be judged negatively, misidentified, or misclassified, which in turn disproportionately affects specific groups of individuals. Moreover, it is often unclear how much and what kinds of data AI platforms, networks, and devices share, process, or generate. Therefore, as people bring connected, smart devices into workplaces, homes, and public spaces, educating the public about such data exploitation becomes increasingly pressing.

Concerning privacy regulation, personal data is the primary driver of AI, which makes such information particularly valuable and especially susceptible to security breaches. AIRA, in this regard, could provide order and direction for the lifecycle of users’ data, from generation and collection to destruction, in the manner of the General Data Protection Regulation (Barfield and Pagallo 200). Today, the regulatory and statutory focus is on the release of personal data and the back end of that lifecycle. Under the existing guidelines, establishments must inform affected individuals when their records are exposed in a security breach, and failing to reveal such violations within the statutory period can lead to criminal or civil penalties. AIRA would go deeper, obligating organizations to give users rights to control and monitor their data after it is collected and their personal information is disclosed. While this form of regulation would affect more devices, applications, and entities than AI-enabled ones alone, AIRA could defend its position by highlighting the significance of personal data to AI devices.

Notice to Users

Equally important is notice-to-user regulation. A developing concern is that people will confuse the AI they interact with for a real human being. During the 2016 US presidential election, AI chatbots interacted online with human users as “people” (Barfield and Pagallo 201). Likewise, in the same year, an AI chatbot built on IBM’s Watson platform functioned as a teaching aide for an online class at Georgia Tech, and some learners believed it was a real person (Barfield and Pagallo 201). In such cases, mistaken identity can reasonably be expected unless rules prevent AI from simulating humans for coercion and deception by requiring the machines to identify themselves as such. Avoiding mistaken identity would require any AI to reveal that it is not human. Preventing the confusion and potential problems related to this issue therefore requires AIRA to consider a pointed rule.


The world is entering an era in which humankind depends on learning and autonomous machines to carry out an ever-increasing variety of tasks. For this reason, some form of regulation is essential to protect society from the risks likely to arise from AI. Regulation has become decentralized, and AIDA offers a regulatory framework that would improve the public’s perception of safety as well as the view that humans remain in control. The general approach of this law is that innovation is permitted without obstruction, but if that innovation causes certain types of harm, those responsible must bear the consequences. By implementing this method, all stakeholders involved, including sector managers, regulators, and legislators, would help guarantee that when AI is regulated, the rewards are spread as widely as possible while undesirable externalities are reduced to the maximum degree possible. In the end, this means promoting economic benefits and safety for people across all socio-economic demographics.

Works Cited

Agrawal, Ajay, Joshua Gans, and Avi Goldfarb. The Economics of Artificial Intelligence: An Agenda. The University of Chicago Press, 2019.

Barfield, Woodrow, and Ugo Pagallo. Research Handbook on the Law of Artificial Intelligence. Edward Elgar Pub., Inc., 2018.

Eltaweel, Ahmad, and S. U. Yuehong. “Parametric Design and Daylighting: A Literature Review.” Renewable and Sustainable Energy Reviews, vol. 73, 2017, pp. 1086-1103.

Fitsilis, Fotios. Imposing Regulation on Advanced Algorithms. Springer, 2019.

Guihot, Michael, Anne F. Matthew, and Nicolas P. Suzor. “Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence.” Vanderbilt Journal of Entertainment and Technology Law, vol. 20, no. 2, 2017, pp. 385-456.

Reed, Chris. “How Should We Regulate Artificial Intelligence?” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 376, no. 2128, 2018, p. 1.

Scherer, Matthew U. “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies.” Harvard Journal of Law & Technology, vol. 29, no. 2, 2016, pp. 353-400.

Steenson, Molly Wright. Architectural Intelligence: How Designers and Architects Created the Digital Landscape. The MIT Press, 2017.

Turner, Jacob. Robot Rules: Regulating Artificial Intelligence. Springer, 2018.
