Ensuring Trust with AI in the Metaverse

Chris Duffey
11 min read · Feb 13, 2023

Psychologist, behaviorist, and inventor B.F. Skinner once said, “The real problem is not whether machines think but whether men and women do.” We must keep this in mind when creating guardrails to ensure trust not only in AI itself but in the use of AI, specifically with respect to AI security, privacy, and ethics.

As businesses become more reliant upon AI in their strategies as we enter the metaverse, it becomes vitally important that security be at the forefront from the beginning. Everything from huge databases to fast networks to computer systems must be designed and built with strong security policies in mind.

AI is becoming mission-critical to the success of businesses, governments, and individuals. Many of the initiatives currently being designed and rolled out involve aiding the decision-making process by analyzing extremely large amounts of data to produce reports and even recommendations that drive decisions within a company or government.

Compromising the decision-making process of businesses and government is of high priority to attackers whether they are engaged in industrial espionage or attacks on a nation state. By modifying or controlling the data that is fed to AI systems, decisions made by people based on that information can be controlled and altered. Taking control of the AI systems themselves gives an attacker access not only to the decision-making process, but to the confidential data or the models used to make decisions.

In the past, applications ran on top of commercial operating systems such as OpenVMS, Linux, Unix, or Windows. These computers were typically housed in a large room or facility protected by firewalls and security procedures. The security of the facilities was under the control of the business and was successful or not based upon the actions of direct employees or consultants.

Times have changed: businesses now run their AI in the cloud, a set of resources they don’t control. The computers and other hardware in the cloud typically run applications on virtual machines and are considerably more complex than in the past.

To make it even more complex, many organizations use a hybrid model in which some resources are housed locally at the business and others are housed in the cloud, or even across more than one cloud provider. This introduces many potential security problems, because security now depends on the policies of different organizations spread over wide geographical areas. A weakness in any one of them could be leveraged to infiltrate some or all of the systems.

Companies that provide cloud services, such as Amazon, IBM, and Google, maintain their equipment in multiple hardened, highly secure locations and follow best security practices as a rule. Before investing in their cloud solutions, thoroughly investigate the steps and technology these organizations use to ensure the security of their platforms.

The security of infrastructure hosted locally is the responsibility of the business and must be a primary focus to be successful. Security is not an afterthought that can be tacked on or handled in an offhand manner. Trained security professionals must be focused on security, enforce security rules, and audit compliance. Additionally, penetration and other forms of testing must be performed periodically.

The best practice is to design security into your infrastructure from initial conception. Retrofitting security can be a difficult, time-consuming, and error-prone process, especially when existing applications were designed, created, and installed without any focus on security.

Standards for strong security begin with the infrastructure. Is the equipment in the computer rooms physically secure? Are there locks on the doors and is access limited to authorized personnel?

The network must also be secure, with strong encryption enforced on both wireless and wired connections. Beyond that, the physical optical and copper cables need to be secured so that intruders cannot easily tap into the network by accessing an exposed line directly.

That’s why it’s wise to create a multilayered defense, beginning with the hardware, working up to encryption, to the security of the machines themselves, to operating system security, all the way up to training users so they understand basic security rules. All levels must be part of the security plan because a breach can occur anywhere, but in a multilevel plan, attackers probably won’t be able to penetrate through all layers of security.

An often-overlooked component of security is ensuring that personnel are properly vetted as part of the onboarding process. Background checks are an essential part of good security policies.

It’s been said that the weakest link in security is the human element, whether the lapse is intentional or unintentional. For instance, people click on seemingly innocent links in emails that download a virus and cause a security breach. Training can eliminate much of this problem, but the security plan must account for the probability that these kinds of accidents will happen. Potentially malicious employees create an even greater risk, which must also be planned for.

It’s important to understand that the interfaces between local computers and cloud services are potential weak points because of the differences in architectures, protocols, and procedures between the cloud services, the application providers, and the local computer infrastructure.

This is one of the biggest areas of concern because components from many vendors come together, and each potentially brings its own security flaws. Often, these linkages are best hosted in a network area called a DMZ (demilitarized zone), which effectively isolates them from the rest of your network.

Good communication between the cloud vendor, the application vendors, any consulting firms, and the business is essential to ensuring good security.

The business is responsible for the security of its data and services, regardless of where they are hosted. While cloud and other vendors share responsibility in that area, the best practice is for those responsible for security in the business to understand, document, implement, and audit security regardless of the location of the key equipment and applications.

Implementing security algorithms that use machine learning is dramatically improving the detection of security breaches and vulnerabilities. New malware and attacks are rapidly evolving, so more flexible approaches are necessary.

It’s no longer enough to scan systems for virus signatures or perform penetration testing for known vulnerabilities. AI must be involved in detecting penetrations and breaches, drawing on a history of past vulnerabilities and an understanding of how malware and attacks on systems behave.

This is one of those areas where humans and AI need to work together, because machine learning can only go so far. This is because “…as our models become effective at detecting threats, bad actors will look for ways to confuse the models. It’s a field we call adversarial machine learning, or adversarial AI. Bad actors will study how the underlying models work and work to either confuse the models — what we call poisoning the models, or machine learning poisoning — or focus on a wide range of evasion techniques, essentially looking for ways they can circumvent the models.”
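To make the behavior-based approach concrete, here is a minimal, hypothetical sketch of anomaly detection: it learns a statistical baseline from historical event counts and flags observations that deviate sharply from it. The function name, the metric, and the three-standard-deviation threshold are all illustrative assumptions; production systems use far richer models than this.

```python
import statistics

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations that deviate sharply from learned baseline behavior.

    baseline:  historical metric values (e.g., hourly login counts)
    observed:  new metric values to score
    threshold: number of standard deviations considered anomalous
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) / stdev > threshold]

# Typical traffic hovers around 100 events/hour; a burst of 500 stands out.
history = [98, 102, 97, 105, 99, 101, 103, 95]
print(flag_anomalies(history, [100, 104, 500]))  # → [500]
```

This is also where adversarial ML bites: an attacker who can slowly poison the baseline data shifts the mean and standard deviation until malicious activity no longer crosses the threshold.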

A closely related subject is privacy. The challenge of data privacy in computing and AI is difficult to overstate. Not only is it technically demanding, but those speaking about the subject are often prone to rhetoric and highly emotional discussions. Complex privacy agreements written in legalese don’t make the topic any easier to understand.

As the Internet of things grows almost exponentially, and companies make more use of massive amounts of big data for artificial intelligence and other purposes, keeping data private becomes challenging, to say the least.

As with security, privacy should be engineered into the design of databases, systems and applications. In fact, ideally privacy and security should be part of how you run a business. In other words, organize your business procedures and your operations around privacy and security.

When we speak of privacy, generally we’re referring to protecting sensitive and private information on the Internet. Individuals, businesses and the government are concerned with ensuring that information about them is only shared in a manner that they have approved.

People are concerned with the privacy of the data that they posted to social media such as photos, videos and text. They want to control who can see that data, either the general public, just friends, or members of a specific group.

However, there’s much more to it than just information that people post themselves.

Let’s take a simple example of the GPS unit in the navigation system in your car. The information about everywhere you’ve driven is, or potentially could be, stored in memory on the GPS, and could even be kept in the cloud. Who owns this data? Is it the car manufacturer? The GPS unit vendor? The owner of the car?

Do the police need a search warrant to access this data? If the data is stored in the cloud and not on the GPS unit itself, who can access it? These and other questions come to mind for every smart device, from your smart television to your smart coffee pot to your smart phone and your smart home video camera.

Regardless of who owns the data, how is it kept private? If your smart coffee pot records the date, time, and type of coffee every time you brew a cup, and sends that data to the cloud, can the coffee pot manufacturer use that information?

As you can tell, managing data privacy is becoming a huge challenge for large, multinational corporations with different silos of data located in different geographic areas.

One of the most important trends for data privacy is the concept of anonymization. This is a technique that is used to protect privacy while still allowing the data to be used. The idea is that any identifying information in the data is removed or obfuscated so that it cannot be traced back to the individual. Unfortunately, perfectly anonymized data with no risk of identifying an individual is probably useless. Thus, the data cannot be scrubbed so completely that it is no longer valuable.

Single data points are generally not valuable. Rather, the value of data increases as the number of connectable data points grows. Knowing someone is a man doesn’t do a lot of good. However, that information combined with their location and what they bought in the past 30 days can be used to predict or target products that they need and want.

Anonymized data is not perfect in and of itself: research has shown that an individual can be identified 87% of the time just by knowing their ZIP code, birthdate, and gender. Similarly, researchers studying Netflix data found that knowing six movies a person had rated in a two-week period was enough to identify them 99% of the time, even though Netflix reviews are posted anonymously.
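One common defense against this kind of re-identification is to generalize quasi-identifiers rather than publish them at full precision. The sketch below is a hypothetical illustration (the field names and the assumption of US-style 5-digit ZIP codes and ISO dates are mine, not from the source): it truncates a ZIP code to its regional prefix and a birthdate to its year, so the ZIP/birthdate/gender triple matches far more people.

```python
def generalize_record(record):
    """Reduce quasi-identifier precision so records are harder to re-link.

    Hypothetical field names; assumes US-style 5-digit ZIP codes and
    ISO dates (YYYY-MM-DD).
    """
    return {
        "zip": record["zip"][:3] + "**",  # keep region, drop locality
        "birth": record["birth"][:4],     # keep year, drop month and day
        "gender": record["gender"],
    }

print(generalize_record({"zip": "94107", "birth": "1985-06-14", "gender": "M"}))
# → {'zip': '941**', 'birth': '1985', 'gender': 'M'}
```

The trade-off from earlier applies directly: the more aggressively you generalize, the safer the data, but also the less useful it becomes for analysis.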

There are four approaches to data anonymization, the removal of personally identifiable information. You can completely remove any information that can be used to identify a person. You can redact, blacking out the identifying portions (the digital equivalent of a marker on paper). You can encrypt the data, or you can mask the personally identifiable information.

Pseudonymization replaces identifiable parts of the data in such a way that it can’t be used to re-identify a person without additional information. Anonymization destroys data that can be used to identify an individual.
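The distinction can be illustrated with a short hypothetical sketch: pseudonymization replaces an identifier with a keyed hash, so re-linking records is possible only for whoever holds the key, while anonymization drops the identifying fields outright. The function names, field names, and key are invented for illustration; in practice the key would be stored separately from the data.

```python
import hashlib
import hmac

SECRET_KEY = b"example-key"  # hypothetical; keep real keys out of the dataset

def pseudonymize(user_id):
    """Replace an identifier with a keyed hash (HMAC-SHA256).

    The same input always yields the same token, so records can still be
    joined, but only a holder of SECRET_KEY can reproduce the mapping.
    """
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def anonymize(record, identifying_fields=("name", "email")):
    """Anonymization destroys the identifying data outright."""
    return {k: v for k, v in record.items() if k not in identifying_fields}

token = pseudonymize("alice@example.com")
print(token == pseudonymize("alice@example.com"))  # deterministic → True
print(anonymize({"name": "Alice", "email": "a@x.com", "purchases": 3}))
# → {'purchases': 3}
```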

Those concepts are important for the General Data Protection Regulation (GDPR), a regulation designed to protect the personal data and privacy of EU citizens for any transaction that occurs within EU member states. This law, which went into effect in May 2018, says that companies must provide reasonable protections of personal data. Unfortunately, the regulation does not define the word “reasonable,” leaving a lot of room for interpretation.

This law came about because of public concern over privacy, which has grown significantly and continues to grow with each highly publicized data breach. According to the RSA Data Privacy and Security Report, which surveyed 7,500 consumers in France, Germany, Italy, the United Kingdom and the United States, 80% of the respondents named lost banking and financial data as their biggest concern and 62% said they would blame the company and not the hacker.

GDPR protects basic identifiable information such as a person’s name, address, any identification numbers, web data such as their location, IP address, any health and genetic data, biometric data, racial or ethnic data, political opinions and sexual orientation.

Companies do not have to have a business presence within the EU to fall under these regulations. The law applies if they store or process personal information about EU citizens.

There are many ramifications of the GDPR that affect any organization doing any kind of business in the EU. Simply creating the reports to prove compliance can be a costly exercise. The penalties for noncompliance are very high: up to €20 million or 4% of global annual turnover, whichever is higher.
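The “whichever is higher” rule means the fine ceiling scales with company size rather than stopping at a fixed number. A one-line sketch makes the arithmetic explicit (figures in euros; the function name is invented):

```python
def gdpr_max_fine(annual_turnover_eur):
    """Upper bound of a GDPR fine: EUR 20 million or 4% of global
    annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a EUR 1 billion company the 4% prong dominates;
# for a EUR 100 million company the EUR 20 million floor applies.
print(gdpr_max_fine(1_000_000_000))  # → 40000000.0
print(gdpr_max_fine(100_000_000))    # → 20000000
```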

There are other laws that apply to privacy. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) requires that any health-related information be protected to ensure patient confidentiality. Any AI applications that access health-related data must ensure that they comply with these laws.

The best practice for privacy is to put the customer first and have a transparent policy based on an equal value exchange.

Another consideration in building trust into AI systems is how we stay in control of increasingly complicated intelligent systems or a globally intelligent network. These are some of the questions posed by the Future of Life Institute in the 23 Asilomar Principles (Future of Life), signed by over 3,800 AI experts and leaders such as Stephen Hawking and Elon Musk. Their purpose is to guide the development of safe AI, and they touch on research issues, ethics and values, and longer-term issues.

Often, we assume that AI is some kind of superintelligent or infallible machine. The decisions made by AI are based upon learning, and if the learning is incorrect, the decisions may be wrong. How do we guard against that contingency?

The intelligence of AI is biased based on what it learns. But another question is how do machines affect human behavior and social interaction? Even today, you see the effects of AI on social media platforms. What do we do when artificial intelligence is ubiquitous, and the behavior is unrecognizable or even superior to human beings?

Then there is the question of jobs; it’s an undeniable fact that the advent of Industry 4.0, the fourth Industrial Revolution, will result in changes in all areas of the workforce. It is important to understand that automation and AI will create more jobs, though the types of jobs will change.

Economist David Autor said, “Job tasks are changing. In many cases that automation is complementary to the tasks that people do. For instance, doctors’ work is becoming more automated, but that doesn’t reduce the need for their expertise. (For instance, testing gets automated, but that generates data that doctors need to interpret.) So, the impact of automation is much harder to predict than any of us have a handle on.”

In 1986, the space shuttle Challenger broke apart 73 seconds after liftoff because a rubber gasket called an O-ring had frozen the night before. This catastrophic failure destroyed a multi-billion-dollar shuttle and cost the lives of the crew. One of the lessons learned is that for any mission to be successful, all the components must work together. Everything else in the shuttle worked as expected, yet it was still destroyed by the failure of a single tiny part.

The promise of AI is virtually unlimited, but it will need to be properly managed. The key point is that security, privacy, and ethics must be part of any AI implementation. They will become even more important as AI becomes ubiquitous and fundamental, and as the physical and digital worlds continue to fuse in the so-called metaverse.