Facebook rebranding leaves many demanding protection against Artificial Intelligence

Throughout history, periods of distinct human achievement have been defined to make it easier to talk about the past. These periods are divided into ages: the Stone Age, the Bronze Age, the Iron Age, and so forth. In the eyes of many, recent decades have marked the beginning of a new age: the Information Age.

“Ages” can typically be pinpointed to the discovery of a technology that revolutionized the world: tools, agriculture, metallurgy. In the Information Age, what separates us from the past is data. Today information is collected, transferred, and read at such a rate that it makes the early internet look like a shelf of physical encyclopedias by comparison; humans have automated the flow of information to the point where people barely need to be involved. The defining technology of the Information Age is one that processes this data better than humans ever could: artificial intelligence. The AI of today can not only collect information and make decisions based on that input, but it can also learn. Through data collection, and eventually AI, data has become one of the most valuable commodities. Today, some of the largest and most profitable businesses offer free services to the public, an unprecedented and once seemingly absurd business model. Yet Facebook and Google are both worth billions of dollars; the data they collect and the platforms they collect it on have proven so valuable for marketing that they have made billions selling information and ad space alone.

As with most innovations, data collection is progressing too fast for legislation to keep pace. Facebook is notorious for collecting data without consent, accidentally leaking information, and otherwise unethical data-collection practices. Many consumers don’t fully understand what data is being collected, why, or that it’s happening at all. Many others are worried by the rate of progress and feel technology encroaching on their freedoms and privacy. With Facebook’s rebranding to Meta, a new wave of alarm has flooded the internet, with top science advisers to President Joe Biden calling for an AI-centered bill of rights to protect against potential harms. What might these rights look like? And how might this affect the businesses of tomorrow?

The future of AI

Fear of new technology is not limited to the United States; the European Union has proposed its own set of regulations around AI. The proposal divides AI into three risk categories: low, high, and unacceptable. Low-risk AI faces minimal requirements; high-risk AI faces many. AI deemed an unacceptable risk may not be used at all. Regulations that will restrict high-risk AI include:

  • An obligation to transparency (AI must be explainable and properly documented)
  • An obligation to accuracy and effectiveness (AI must be tested and deemed fit for purpose)
  • An obligation to government involvement (AI must be approved, and issues must be reported)
  • An obligation to human oversight (staff must be involved; AI cannot be completely isolated)
  • An obligation to security (strict cybersecurity requirements, with penalties for data loss)

The U.S. has its own bill of rights underway. Though nothing has been defined as of yet, the White House Office of Science and Technology Policy (OSTP) has outlined a set of key issues that need to be tackled:

  • Poor representation within data must be solved (facial recognition failing to recognize certain races, various medical tests not accounting for racial differences, etc.).
  • Public concerns about privacy loss and invasion of rights must be addressed.
  • Biometric usage must be strictly regulated.

In short, governments are waking up to both the positive and negative consequences of AI, and as people become more informed, alarm bells are beginning to ring. For businesses, this means more overhead in AI production; it will be the organization’s duty to follow the additional procedures and staff them accordingly. Given the already risky nature of AI development, partnering early with AI risk management specialists such as FAIRLY will provide a significant edge over the competition, as the infrastructure to ensure business continuity will already be in place by the time regulations hit. Regardless of regulation, however, it is important to ensure that any AI adoption includes proper model risk management to avoid financial and reputational harm.


FAIRLY’s mission is to support the broad use of fair and responsible AI by helping organizations accelerate safer and better models-to-market. We offer an award-winning AI Governance, Risk and Compliance solution for automating Model Risk Management. Our enterprise-scale, API-driven productivity and risk management tools feature automated and transparent reporting, bias detection, and continuous explainable model risk monitoring, thus streamlining AI development, validation, audit and documentation processes across the three lines of defense for financial institutions around the world. Visit us at https://www.fairly.ai or follow us on Twitter and LinkedIn.
