
Pioneer Global Regulation of AI

Introduction

Is Artificial Intelligence (AI) the new oil? AI data has become an essential resource for businesses and economies, with some calling it “the ‘new oil’ to illustrate the tremendous opportunities it creates in terms of how it eases and drives efficiency, innovation of solutions, private and public services.”2 AI is a momentous human achievement and a realistic, pivotal business opportunity. Messrs. Kasirye, Byaruhanga & Co. Advocates3 (KB) is a proud member of Mackrell International (MI), a global network of independent law firms. Mackrell International has 27 Practice Groups of cross-border legal insight and excellence, which form an important part of its legal practice ecosystem; practice group three (3) is the Artificial Intelligence Practice Group.

“Collectively, the group understands the practical and regulatory challenges posed by AI technologies and offers appropriate counsel to clients navigating a new and complex landscape, including issues related to data privacy, intellectual property protection, algorithmic
transparency, product liability and accountability, ethics, and regulatory compliance. We stay abreast of the latest developments in AI law, regulation, and industry trends, enabling us to offer
business strategies and practical advice to our clients.” – Mackrell International’s AI Insight Group.

Artificial Intelligence, a technological discipline that was once a distant aim of computer science, focuses on creating intelligent machines or systems that can perform tasks ordinarily requiring human intelligence. However, “AI does not have to confine itself to methods that are biologically human observable.”4 Put simply, AI involves the development of intelligent machines or systems that can simulate many kinds of cognitive processes of humans, animals, birds, and other creatures, in the form of movement, reasoning and decision-making, answering, problem-solving, natural language processing, entertainment, vision, and so on.

The main categories of AI a business can thrive in are text, visual, interactive, and analytic AI. Business AI takes two broad forms. The first is simple artificial intelligence tools designed to perform narrow, specific tasks, such as voice assistants, conversational assistants, image recognition systems, and self-driving automobiles. The second is cognitive technology, which aims to closely replicate human-level intelligence: machines built and trained to understand, reason, learn, and apply human behavioural repertoires and knowledge across various domains, including education, healthcare, finance, transportation, manufacturing, customer service, medical practice, law practice, and many other areas.

Although AI has the potential to revolutionize many aspects of our lives, it raises several legal, ethical, social, and sustainability questions, as well as human and civil rights concerns such as privacy, discrimination, job displacement, and political bias. For example, AI usage has already produced incidents of algorithmic discrimination;

“Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impact disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age,
national origin, disability, veteran status, genetic information, or any other classification protected by law.”

As society and organizations swiftly switch to automated workflows and data-driven decision making, KB’s Patrick Lubwama, a listed member of MI’s ‘Artificial Intelligence Insight Specialist Group’,6 is here to help clients and readers harness the power and scale of AI technological acceleration while navigating and managing the ethical, regulatory, and legal complexities7 associated with this transformative technology. Cognitive AI technology is widely assumed to carry wide-reaching risks and costly effects for business and society; indeed, in March this year some industry stakeholders called for a pause in AI development. This is why our MI AI Insight Group renders assistance to help you responsibly structure, develop, adopt, and use AI technology, guides you through negotiating AI-related contracts and licences, addresses potential legal risks and dispute management, and seeks amicable, practical solutions for you. Altogether, our efforts mirror the current industry moves that form the foundation for the future of AI.

It is safe to say that AI should serve as a human assistant rather than a competitor. In this article, among other aspects, I will share some highlights on a key concept, the sustainability of AI. Is AI sustainable for business? Our practice group intends to share insights on AI in global business to keep our clients, colleagues, and the interested market updated on the latest legal and regulatory developments in this field. We also bring together legal and industry specialists from within and outside MI’s global network to stay connected and at the forefront of AI-related developments.
‘Tech leaders sign letter calling for a six-month moratorium (‘pause’) on Artificial Intelligence’, VOA News, March 30th 2023. For many reasons, it seemed difficult and unlikely that consensus on this industry proposal would be reached by any or all stakeholders.

Sustainability of Business-Grade AI

Business-grade AI aims at powering business strategies and resolving real-world business problems, and sustainability is a key concept within it. Sustainable AI is “the use of artificial intelligence systems that operate in ways contingent on sustainable-business-practices” and a stable circular economy. Today, sustainability is an increasingly decisive factor for investors, consumers, and various stakeholders in determining the suitability of an investment, the choice of a product, and regulatory accommodation and decisions. Sustainability recognizes the interdependence between current and long-term environmental care, social wellbeing, and circular economic growth. It follows, therefore, that AI technology capable of productively, safely, and responsibly assisting, facilitating, and interacting with humans, society, and our environment should qualify as sustainable AI. What do the stakeholders say about this?

“AI models require human beings to keep feeding them, for these AI models to get better, so unless there is cooperation, you can only do so much to cannibalize your own data source, when these AI models start to hurt the very people who generate the data that it feeds on — the artists — it’s destroying its own future, so really, when you think about it, it is in the best interest of AI models and model creators to help preserve these industries so that there is a sustainable cycle of creativity and improvement for the models.”

“We’re developing software, middleware, and hardware to bring frictionless, cloud-native development and use of foundation models to enterprise AI.” – IBM

“Earth’s climate is changing…IBM’s new geospatial foundation model could help track and adapt to a new landscape…built from IBM’s collaboration with NASA, the watsonx.ai model is designed to convert satellite data into high-resolution maps of floods, fires, and other landscape changes to reveal our planet’s past and hint at its future…the ability to accurately map flooding events can be key to not only protecting people and property now but steering development to less-risky areas in the future.”

“While many of the concerns addressed in this framework derive from the use of AI, the technical capabilities and specific definitions of such systems change with the speed of innovation, and the potential harms of their use occur even with less technologically sophisticated tools. Thus, this framework uses a two-part test to determine what systems are in scope. This framework applies to (1) automated systems that (2) have the potential to meaningfully impact the American public’s
rights, opportunities, or access to critical resources or services. These rights, opportunities, or access to critical resources or services should be enjoyed equally and be fully protected regardless of the changing role that automated systems may play in our lives.”

Without a doubt, artificial intelligence13 is the staple of this era, promising to redefine our society and make our lives, environment, and methods of work and business better. It is a huge human achievement, though some corners of society did not, and others still have not, entirely appreciated its ascendancy over our traditional ways of life and work. I wish to relay an interesting example: two chess grandmasters who played IBM’s Deep Blue chess computer in 199714 agreed that ‘it was like a wall coming at you’. Garry Kasparov called it an ‘alien opponent’ and later belittled it as just ‘as intelligent as your alarm clock’. Deep Blue was a chess computer trained by human experts and built to an absolute expert level, the first computer to win a competitive game in a match against a reigning world champion under regular time controls, a remarkable world record.

Deep Blue set a respected world record by beating the celebrated world chess champion Garry Kasparov. What is more interesting about this AI achievement is that Deep Blue operated solely upon rules and commands fine-tuned by both computer and chess experts: a fine, progressive interaction between technology and human expertise by what was then a state-of-the-art AI model. Today we boast advanced chess models like the brilliant Leela Chess Zero which, contrary to Deep Blue, is a neural-network chess engine built to rely solely on its own logic. As of 2023, the best-known AI breakthroughs include OpenAI’s ChatGPT (whose latest iteration, GPT-4, was released in March this year), DeepMind’s (Google’s) AlphaGo, IBM’s Watson, and the Minecraft sandbox bot, among others.

No wonder there exists real fear of AI outcompeting humans in various aspects of our lives. For example, ChatGPT has already significantly altered (though not replaced) jobs in the real estate industry, a business challenge now common across many sectors.

“In a city of a very near future, a citizen looking to buy a home will simply explain their requirements to a property AI-agent or assistant, which will orchestrate the entire selection and buying process without involving a human property agent and the commission commanded by human agents.”

“The large majority of independent artists make their living through commissioned works, and it is absolutely essential for them to keep posting samples of their art but the websites they post their work on are being scraped by AI models in order to learn and then mimic that particular style…artists are literally being replaced by models that have been trained on their own work…I and my team have designed a new tool called Glaze, which aims to prevent AI models from being able to learn a particular artist’s style…if an artist wants to put a creation online without the threat of an image generator copying their style, they can simply upload it to Glaze first and choose an art style different from their own…the software then makes mathematical changes to the artist’s work on a pixel level so that it looks different to a computer…to the human eye, the Glaze-d image looks no different from the original, but an AI model will read it as something completely different, rendering it useless as an effective piece of training data.”
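To make the idea in the quote above concrete, here is a minimal toy sketch of the general technique it describes: pixel-level changes kept small enough to be imperceptible while pushing the image far away in a machine’s feature space. This is not Glaze’s actual algorithm; the random linear map standing in for a feature extractor, the perturbation budget, and the step sizes are all illustrative assumptions made only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a style-feature extractor: a fixed random linear map.
# (Glaze targets the feature spaces of real generative models; this map is
# only here so the example runs on its own.)
H, W_PX, C = 64, 64, 3
N_PIXELS = H * W_PX * C
W = rng.normal(size=(128, N_PIXELS)) / np.sqrt(N_PIXELS)

def features(flat_image):
    return W @ flat_image

def cloak(image, eps=4 / 255, steps=50, step_size=0.5):
    """Perturb `image` (values in [0, 1]) within an L-infinity budget `eps`
    so that the toy feature extractor 'sees' it very differently."""
    x = image.reshape(-1)
    delta = np.zeros_like(x)
    f0 = features(x)
    for _ in range(steps):
        # Gradient of ||features(x + delta) - features(x)||^2 w.r.t. delta.
        grad = 2.0 * W.T @ (features(x + delta) - f0)
        delta += step_size * eps * np.sign(grad)   # push features apart
        delta = np.clip(delta, -eps, eps)          # keep the change imperceptible
        delta = np.clip(x + delta, 0.0, 1.0) - x   # stay a valid image
    return (x + delta).reshape(image.shape)

original = rng.random((H, W_PX, C))
cloaked = cloak(original)
print("max pixel change:", np.abs(cloaked - original).max())   # bounded by eps
print("feature-space shift:",
      np.linalg.norm(features(cloaked.reshape(-1)) - features(original.reshape(-1))))
```

The design point is the trade-off the quote describes: the perturbation budget caps how much any pixel may change (keeping the cloaked image visually faithful), while the optimization spends that budget on whatever changes most confuse the machine’s representation.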

As a business strategist and an anti-fraud enthusiast, I landed on an interesting study about nanotechnology combined with different aspects of AI: a real-time connection between the brain and the cloud that can read the brain waves of criminal suspects. A “Human Brain/Cloud Interface”17 (B/CI) connects the human brain to a cloud computing network system. In terms of business development, such a technological achievement could efficiently facilitate real-time business data analysis, processes, and operations; quicken the identification and selection of scattered business opportunities around the world; help solve cross-border business challenges; and simplify business administration, decision making, and profit-making.

“This real-time connection between the brain and the cloud would include neural nanobots that monitor and control the neural connections, making it possible to connect to vast data networks
and other humans simply by thinking about it, and knowledge would be acquired by downloading directly to the brain. The system mediated by neural nanorobotics could empower individuals with
instantaneous access to all cumulative human knowledge available in the cloud, while significantly improving human learning capacities and intelligence.”


That said, although AI versus human IQ is a big debate, what I understand to be the real concern of this AI-predominant era is that “human genes can pre-dispose, but they don’t predetermine”,19 whereas AI is interestingly proving capable of both predisposing and predetermining results, including results beyond imagination; call it ‘superintelligence’.20 There is a premature concern in some corners about AI’s apparent negative impact on the human job market and opportunities; indeed, as noted earlier, some business quarters have already called for a halt to AI development.

So, what then? We are in the middle of the AI adoption and adaptation stage, already past the AI choice and the argument over the suitability and democratization of AI, particularly over which AI model is more suitable for a given business or task: a model developed to run on the rules and commands of human experts, or one created by human experts to act, react, and perform using its own logic, naturally making predictions and decisions without being explicitly programmed to do so.

Technology by its nature has a universal impact and is rarely retractable, so I agree that it is crucial to escalate AI regulation. According to a study in the oil and gas sector, an estimated 40% of existing human jobs are susceptible to automation in a future fossil-free energy world.21 Progressive business-grade AI should not come at the cost of any human right or opportunity, nor put society and our environment at sufferance. Sustainable AI must come with a universal guarantee of continuous income for human labour, health for mothers at birth and for newborns, affordable clean water and energy, education for children, and a healthy circular economy. To sustain a healthy working environment, proper regulation should require employers embracing AI tools to conduct AI bias and discrimination audits to identify any errors or biases, build confidence and trust in workplace AI tools, and eliminate or mitigate the associated risks.

Examples of such automated workplace AI tools are screening tools for CVs or resumes, and tools that rank candidates for interviewing, employment, promotion, or even disciplinary purposes.
As we embrace AI, there is an automatic responsibility and requirement (not just a need) to deal with the challenges that come with it, be it the skilled-labour gap, costly development cycles, adoption, adaptation and change management, or safety, security, and regulatory concerns, among others.

State of AI Regulation

Undeniably, there is an enormous increase in the creation and use of AI technology around the world, coupled with a growing awareness of its misuse. Take a simple example: someone creates an AI tool that can mimic your exact voice, in your language, to say, “Mother, I have been kidnapped and my kidnappers are asking for a ransom of USD 10,000.” If further convincing information is communicated to “your mother” in the scam, chances are she ends up being robbed of USD 10,000. Here, the AI tool is used with a subliminal objective or a criminal mind. Others create or use AI technology for other societal harms such as discrimination and violence.

Legally, almost everywhere in the world, at least for now, generative AI models and technologies, including image generators, music generators, text chatbots, and more, are not considered authors of anything they produce through their own mechanisms or logic. It is argued that the work these AI systems produce culminates from human genius and human effort, which is reflected in the work. Several legal questions arise in such circumstances. What is the legal standard for creative work that results from a collaboration between a human and an AI machine? What becomes of novel work created by the sole logic of an AI model? Who is liable or responsible for a driverless car accident? Currently, the answers vary, and remedies may not necessarily be adequate, satisfactory, or available at all.

To formulate effective global AI regulation, we can start with an international AI convention: a model AI law that sets a global standard framework for responsible AI and guarantees, protects, and provides for the rights of all AI stakeholders, including developers, funders, investors, traders, users, consumers, researchers, facilitators, regulators, policymakers, dispute resolution service providers, and other line stakeholders.

Today, for example, there is no unified, settled, and protective global standard for AI copyright. In the US, the current rule on AI copyright protection is that no copyright is given to works created by non-humans, and that includes AI machines. Does this mean that the product of an AI model cannot be copyrighted? I say it depends.

“…If a machine and a human work together, but you can separate what each of them has done, then copyright will only focus on the human part… If the human and machine’s contributions are more intertwined, a work’s eligibility for copyright depends on how much control or influence the human author had on the machine’s outputs… It really needs to be an authorial kind of contribution, and in that case, the fact that you worked with a machine would not exclude copyright protection.”

The United States Copyright Office, in September 2022, put the above position to the test in two phases, with different outcomes. First, the office granted copyright registration to the graphic novel ‘Zarya of the Dawn’ (historically, the world’s first-ever registered AI-assisted graphic work), created by the author Kris Kashtanova using Midjourney, an AI generator of text-to-image content.

However, shortly after the copyright registration, the office reviewed its decision and partially cancelled the copyright protection. But why? The office reasoned that during its deliberations on the registration it had not properly considered the traditional element of ‘human authorship’: the images had ‘non-human authorship’, since Midjourney’s graphic outputs are unpredictable from the training data sets and prompts used to generate them. What remained protected were the book’s text and, particularly, the ‘selection, coordination, and arrangement’ of its written and visual elements; the images generated by Midjourney, the AI technology, did not receive protection because they are not a human-authored product. The office added, with emphasis, that even the editing done by the author, Kashtanova, on the images generated by Midjourney was (in its opinion) ‘too small, negligible, and imperceptible’ to qualify the work for copyright protection. The decision has so far set a policy standard on AI-human collaborative work in the US.

What this simply means is that the term ‘author’ does not extend to non-humans. The decision also signifies that if a person prompts a machine by typing text, and the machine in response generates a written, graphic, visual, or musical work, the resulting work, being the creation of a non-human, is not the subject of copyright protection.

“Until now, when a purchaser seeks a new image ‘in the style’ of a given artist, they must pay to commission or license an original image from that artist…now, those purchasers can use the artist’s works contained in Stable Diffusion along with the artist’s name to generate new works in the artist’s style without compensating the artist at all, the complaint reads…the harm to artists is not hypothetical, works generated by AI image products ‘in the style’ of a particular artist are already sold on the internet, siphoning commissions from the artist’s themselves.”

A key committee of lawmakers in the European Parliament has approved a first-of-its-kind artificial intelligence regulation, bringing it closer to becoming law later this year. The approval marks a landmark development in the race among countries to get a handle on AI, which is evolving at breakneck speed. The proposed law, known as the European AI Act, is the first law for AI systems in the West, intended to set the AI standard and formulate harmonized rules that other jurisdictions can borrow from or implement.

The proposed EU AI Act will become law once both the Council (representing the 27 EU Member States) and the European Parliament agree on a common version of the text. It provides a copyright framework for text and data training and copying that allows only nonprofits and universities (not companies) to freely use original work from the internet to train models without consent. At the recently concluded May 2023 US-EU Trade and Technology Council meeting, a resolution was reached that the Council would produce a draft ‘Code of Conduct’ for AI within weeks.

The US White House Office of Science and Technology Policy published a blueprint for the development, use, and deployment of automated systems, called the ‘Blueprint for an AI Bill of Rights’. The Blueprint differs from the European AI Act in a significant way: whereas the proposed EU AI Act is intended to be binding, the Bill of Rights is non-binding.

“Considered together, the five principles and associated practices of the Blueprint for an AI Bill of Rights form an overlapping set of backstops against potential harms. This purposefully overlapping framework, when taken as a whole, forms a blueprint to help protect the public from harm. The measures taken to realize the vision set forward in this framework should be proportionate with the extent and nature of the harm, or risk of harm, to people’s rights, opportunities, and access.”

Here below are the five critical principles provided for in the Bill;

  1. Safe and Effective Systems;
  2. Algorithmic Discrimination Protections;
  3. Data Privacy;
  4. Notice and Explanation; and
  5. Human Alternatives, Consideration, and Fallback.

To advance the US AI vision, particularly that of President Biden, the White House Office of Science and Technology Policy stated that;

“The five principles should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence…the Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats and uses technologies in ways that reinforce our highest values…responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by ‘From Principles to Practice’ a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process…these principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.”

On a smaller scale, on July 5th 2023 the New York City Department of Consumer and Worker Protection will commence enforcement of its newly passed AI bias audit law. Under its section 20-871 a, it is unlawful in the city for an employer or an employment agency to use an automated employment decision tool to screen a candidate or employee for an employment decision
unless;

  1. Such a tool has been the subject of a bias audit conducted no more than one year prior to the use of such tool; and
  2. A summary of the results of the most recent bias audit of such tool as well as the distribution date of such tool to which such audit applies has been made publicly available on the website of the employer or employment agency prior to the use of such tool.

Section 20-871 b, on notices, requires that any employer or employment agency in the city that uses an automated employment decision tool to screen an employee or a candidate who has applied for a position shall notify each such employee or candidate who resides in the city of the following;

  1. That an automated employment decision tool will be used in connection with the assessment or evaluation of such employee or candidate who resides in the city. Such notice shall be made no less than ten business days before use and shall allow a candidate to request an alternative selection process or accommodation;
  2. The job qualifications and characteristics that such automated employment decision tool will use in the assessment of such candidate or employee. Such notice shall be made no less than ten business days before use; and
  3. If no such disclosure is made, the source of such data and the employer or employment agency’s data retention policy shall be available upon written request by a candidate or
    employee. Such information shall be provided within thirty days of the written request. Information pursuant to this section shall not be disclosed where such disclosure would
    violate local, state, or federal laws, or interfere with a law enforcement investigation.

The NYC AI bias law also provides for penalties: a civil penalty of up to USD 500 for a first violation and for each additional violation occurring on the same day as the first, and of not less than USD 500 nor more than USD 1,500 for each subsequent violation. Other jurisdictions can be persuaded to borrow a leaf from this pioneering law to design their own suitable policies and laws to cater for fundamental AI requirements such as statutory AI testing and compliance.
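For a concrete sense of what such a bias audit involves, the minimal sketch below shows the kind of selection-rate and impact-ratio arithmetic commonly used in disparate-impact style audits of screening tools. The category names and data are hypothetical, and the legally required audit methodology is defined by the DCWP’s implementing rules, not by this snippet.

```python
from collections import Counter

# Hypothetical screening outcomes from an automated employment decision tool:
# (category, selected?) pairs. Categories and results are illustrative only.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ("group_c", True), ("group_c", True), ("group_c", False), ("group_c", False),
]

applied = Counter(cat for cat, _ in outcomes)
selected = Counter(cat for cat, ok in outcomes if ok)

# Selection rate per category, and impact ratio relative to the
# highest-selected category (a disparate-impact style comparison).
rates = {cat: selected[cat] / applied[cat] for cat in applied}
best = max(rates.values())
impact_ratios = {cat: rate / best for cat, rate in rates.items()}

for cat in sorted(rates):
    print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {impact_ratios[cat]:.2f}")
```

A large gap between a category’s impact ratio and 1.0 is the kind of disparity an audit would flag for further review; whether it amounts to unlawful discrimination remains a legal question, not a purely statistical one.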

On April 11th 2023, the Cyberspace Administration of China (CAC) released draft Measures30 (Rules) designed to manage how companies develop and provide generative AI products like Midjourney and OpenAI’s DALL-E and ChatGPT.31 The proposed Rules were out for public comment through May 10 and are expected to go into effect before the end of 2023. Though not yet implemented, the Rules take a risk-based approach to regulating AI, in which the obligations imposed on a system are proportionate to the level of risk it poses.

The Rules also specify requirements for creators and providers of so-called “foundation products” such as ChatGPT, which have become a key concern for regulators and users around the globe. On a smaller scale, China gazetted its first-ever local government AI regulations in its main tech hub, Shenzhen, to supercharge and guide its AI development and privacy sectors. The Regulations encourage local government agencies in Shenzhen to adopt AI methods in business and work environments, and also establish an ethics committee charged with producing relevant safety guidelines.

In June 2022, Canada tabled its Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022. AIDA is a significant milestone in ensuring that Canadians can trust the digital technologies they use every day.

In March 2023, Italy banned OpenAI’s ChatGPT citing privacy concerns, and later lifted the ban upon data privacy improvements.

“…ChatGPT is available again for our users in Italy. We are delighted to welcome them back and remain committed to protecting their personal data.” – From an OpenAI spokesperson.

In Africa, there is no specific AI regulation yet; however, a recent UNESCO World Heritage Convention survey recommends fostering legal and regulatory frameworks for AI governance. Africa, one of UNESCO’s global priorities, covers 47 states in Sub-Saharan Africa registered as parties to the World Heritage Convention.

“The Recommendation on the Ethics of Artificial Intelligence, adopted by all 193 UNESCO Member States in November 2021, also provides an excellent opportunity to consider what regulatory steps countries can consider to ensure that the design, development and application of AI is done in an ethical manner.”

India’s NITI Aayog has produced working papers on the ‘National Strategy for Artificial Intelligence’ in 2018, 2020, 2021, and 2022, and the government think tank is now working quickly towards introducing an AI supervisory authority before putting in place comprehensive AI regulations.

In 2020, Saudi Arabia (SA) started its AI framework by establishing its National Strategy for Data and AI. The latest developments show that SA has set in place its own AI ethics standards, the ‘Saudi Arabia AI Ethics Principles 2022’, to help the Kingdom avoid or reduce the technology’s limitations. In April 2023, SA announced approval of its most recent AI-related legislative development, amendments to the Personal Data Protection Law (PDPL), mirroring most of the proposals made by its own Saudi Data & Artificial Intelligence Authority (SDAIA). These amendments are aligned with some of the EU’s international-standard general data protection rules.

On May 12th 2023, the Brazilian Senate announced Bill No. 2338/2023 (so far available only in Portuguese) to regulate AI systems in Brazil. The Bill introduces rules for making AI systems available in Brazil, establishes rights for individuals affected by AI operations, provides remedies for violations, and sets out information about Brazil’s AI supervising authority. It specifically requires that AI systems undergo preliminary assessments, conducted by suppliers, to grade and classify the AI as low, high, or excessive risk.

The Bill lists AI systems considered ‘high risk’, categorized by the following services: credit grading, identification of persons, administration of justice, operation of automobiles, medical diagnoses and procedures, decision-making on access to health, education, employment, and other essential public (and related private) services, evaluation of workers or students, management of critical infrastructure such as telecommunications, traffic control, and electricity and water supply systems, and determination of criminality and personality traits.
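To illustrate how a supplier might operationalize the Bill’s preliminary assessment, here is a minimal sketch that maps an intended use to a risk tier based on the categories listed above. The tier labels, category strings, and matching logic are illustrative assumptions, not the Bill’s legal test.

```python
# Illustrative preliminary-assessment checklist paraphrasing the high-risk
# categories listed above (Bill No. 2338/2023); a sketch only, not legal advice.
HIGH_RISK_USES = {
    "credit grading",
    "identification of persons",
    "administration of justice",
    "operation of automobiles",
    "medical diagnoses and procedures",
    "access to health, education or employment services",
    "evaluation of workers or students",
    "critical infrastructure management",
    "determination of criminality or personality traits",
}

PROHIBITED_USES = {
    "subliminal techniques harmful to safety or health",
    "exploitation of vulnerable groups",
}

def preliminary_assessment(intended_use: str) -> str:
    """Grade a system as excessive, high, or low risk for the supplier's
    preliminary assessment (illustrative only)."""
    use = intended_use.strip().lower()
    if use in PROHIBITED_USES:
        return "excessive risk: use restricted"
    if use in HIGH_RISK_USES:
        return "high risk: heightened obligations apply"
    return "low risk: baseline obligations apply"

print(preliminary_assessment("credit grading"))
print(preliminary_assessment("music recommendation"))
```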

The Bill also restricts the use of AI for subliminal techniques harmful to the safety and health of persons and for the exploitation of vulnerable groups. On individual rights, it provides various rights, such as the right to contest and ask for explanations of a result or decision made by an AI system, to ask for human participation in the AI system’s operation or decision-making in certain situations, to be given information about its functionality, to be protected from discrimination and to request correction of any discriminatory bias, and to be informed whenever an AI system is used.

Other countries have taken a supplementary legislative approach, applying existing laws (with amendments where necessary) while closely following the various global approaches and opting not to rush into creating entirely new AI regulations. Switzerland has adopted this approach.

Effort for Regulation – Industry Insights – Best Practices by Major AI Companies: IBM, Shutterstock Inc., Alibaba, Casetext with OpenAI, and MITGAS

As ever, leading the way, IBM has introduced arguably the best catalogue of AI governance models and tools, with customer examples including Regions Bank, Innocens BV, and Change-Machine. IBM’s ‘Cloud Pak for Data’ is a highly rated AI tool for mortgage approvals (its Loan Automation use case is very interesting) and much more: a dynamic and customizable tool that provides real-time statuses, supports user collaboration during decision-making, simplifies risk and regulation management, tracks regulatory compliance, and supports general AI governance (including AI visibility and enterprise monitoring services). IBM’s customers are using these AI governance tools to create innovative solutions; for example, Innocens BV uses IBM’s predictive AI to protect the most vulnerable newborns and has made a great breakthrough in neonatal care with cloud, data, and AI.

‘Operationalize AI across your business to deliver benefits quickly and ethically.’– IBM AI slogan.

On May 16th 2023, IBM’s Chief Privacy & Trust Officer, Christina Montgomery, testified before the US Senate Judiciary Committee at the first-ever US Senate hearing on ‘Oversight of AI: Rules for Artificial Intelligence’. Two other witnesses also testified: Sam Altman, Chief Executive Officer of OpenAI, and Gary Marcus, Professor Emeritus at New York University. IBM has developed a set of focus areas to guide the responsible adoption of AI technologies, which include;

  1. Respect for persons: mainly recognizes the autonomy and consent of individuals;
  2. Beneficence: the principle of ‘do no harm’; and
  3. Justice: dealing with fairness and equality.

Shutterstock, in October 2022, unveiled its AI-generated content capability plan in a manner that is responsible and transparent to its customers, and launched a fund to compensate artists for their effort in creating works. It positions itself as the vanguard of new creative storytelling technology.

Noteworthy is;

  • The relationship between two key aspects, the international tax gap and international tax fraud; and
  • The effectiveness of the existing framework of international tax laws and regulations.

The International Center for Tax and Development (ICTD) estimates that almost three-quarters of the world’s countries are 80% dependent on their tax revenue, and attributes 85% of the tax gap to tax fraud. International tax-gap information on offshore and cross-border tax revenue is somewhat murky; however, the ICTD reports that one-third of the tax gap today is international. What is a tax gap? Simply put, it is the difference between the actual tax collected and what ought to have been collected, resulting from non-compliance. Noteworthy is the depth of impact of the multifarious issues of culture, processes, rules, law, politics, and statistics involved in international tax fraud, a risky tax area that continues to confound tax authorities and poses a huge challenge to the global economy.
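As a worked illustration of the tax-gap arithmetic described above, the short sketch below applies the cited shares (roughly one-third international, 85% attributed to fraud) to invented revenue figures; only the definitions come from the text, and all amounts are hypothetical.

```python
# Tax gap = tax that ought to have been collected minus tax actually collected.
# Figures below are hypothetical, for illustration only.
potential_tax = 100_000_000   # what ought to have been collected
collected_tax = 78_000_000    # what was actually collected

tax_gap = potential_tax - collected_tax
gap_rate = tax_gap / potential_tax

# Applying the shares cited above, purely illustratively.
international_gap = tax_gap / 3
fraud_attributed = 0.85 * tax_gap

print(f"tax gap: {tax_gap:,} ({gap_rate:.0%} of potential revenue)")
print(f"international portion (~1/3): {international_gap:,.0f}")
print(f"fraud-attributed portion (85%): {fraud_attributed:,.0f}")
```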

Tax fraud is committed against the government (and tax-paying nationals) of any country across the globe. The hallmark of tax fraud is a (sometimes intentional) surreptitious violation of a known legal duty to pay taxes. Certainly, every country has some form of law prohibiting tax fraud and regulating compliance, and the best approach to understanding the particular tax laws, compliance requirements, or fraud rules of a given country or jurisdiction is to consult local expert tax counsel for guidance.

Any action or omission, typically involving concealed information or false claims, committed to defraud a government of owed tax money is tax fraud. It is also a fundamental element of the informal economy, call it the grey economy. Research by the Association of Certified Fraud Examiners (ACFE) shows that anyone with sufficient pressure, adequate opportunity, and the ability to rationalize a dishonest act is at risk of committing any kind of fraud.

“A typical tax evader’s apparent knowledge about whistleblowing schemes does not deter them from evading tax even with the assurance of anonymity and the likelihood of being caught.”

Tax evasion is tax fraud and thus illegal. It is any fraudulent, intentional action committed to avoid reporting or paying tax, but it is not to be confused with tax avoidance or with violations of proper tax procedures, which can result in fees and interest penalties.

Tax avoidance is a lawful method of lowering one’s tax bill through legitimate deductions, credits, and shelters, mainly made possible by structured practices of domestic tax base erosion and profit shifting (BEPS). In global business, it is multinational enterprises that exploit BEPS, the lapses in the tax systems of different countries, and incoherent international tax rules, which more often than not contribute to tax evasion.

Tax evasion is fraud characterized by the perpetrator’s inadequate moral development; however, make no mistake, more often than not the perpetrator possesses considerable intellectual development, which vastly aids fraudulent schemes to a level of resilience. It is no surprise that a well-designed, inexpensive whistleblowing scheme put in place to catch organized groups of tax evaders does not significantly alter their internal cooperation and operations. Moreover, a typical evader’s apparent knowledge of whistleblowing schemes does not deter them from evading tax, even with the assurance of anonymity and the likelihood of being caught. Long-standing tax evaders seek legal advice on all loopholes in the law and the tax structure before deciding to avoid tax. This is a nightmare for law enforcement and for the capacity-building pillars of tax fraud enforcement.

Until recently, with the exception of some jurisdictions, the usual determinant of tax evasion was acting with criminal intent; to establish such intent, the jurisdictions that apply it require a wilful act or attempt, rather than an honest mistake, to constitute tax evasion. The United Kingdom did away with the requirement of proving intent to evade tax, thus turning tax evasion into a strict-liability offence; strict-liability offences do not require proof of the element of intent.

The international tax regulatory framework determines how a country collects and manages tax revenue from the cross-border movement of capital, technology, goods, and services, supplemented by territorial tax policy frameworks that impact international taxation. The framework includes;

  • Territorial and International Rules that define and determine what income will be taxed by the source country and Rules intended to minimize double taxation, and tax avoidance by multinationals;
  • Over 3900 specific Bilateral and Unilateral Tax Treaties in force worldwide;
  • The prospective evolutionary and revolutionary multilateral BEPS 2.0 (Pillar One and Pillar Two) Rules to be implemented in 2023, including the Global Anti-Base Erosion (GloBE) Rules, such as the Income Inclusion Rule (IIR) and the Undertaxed Payments Rule (UTPR), and a treaty-based rule termed the Subject to Tax Rule (STTR); a simplified sketch of the GloBE top-up arithmetic follows this list.
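As referenced in the last item above, here is a deliberately simplified sketch of the GloBE (Pillar Two) top-up-tax arithmetic for a single jurisdiction, assuming the agreed 15% minimum rate and ignoring the substance-based income exclusion, de minimis tests, and the many other adjustments the actual rules require. The figures are hypothetical.

```python
# Simplified GloBE (Pillar Two) top-up-tax sketch for one jurisdiction.
# Real GloBE computations involve many adjustments omitted here.
MINIMUM_RATE = 0.15  # the agreed 15% global minimum rate

def top_up_tax(globe_income: float, covered_taxes: float) -> float:
    """Top-up tax: the shortfall of the jurisdiction's effective tax rate
    below the 15% minimum, applied to its GloBE income (simplified)."""
    if globe_income <= 0:
        return 0.0
    effective_rate = covered_taxes / globe_income
    shortfall = max(0.0, MINIMUM_RATE - effective_rate)
    return shortfall * globe_income

# Example: 1,000 of in-scope profit taxed at an effective 9% rate
# yields a 6% shortfall, i.e. a top-up of 60.
print(top_up_tax(globe_income=1_000.0, covered_taxes=90.0))
```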

Tax laws are concerned more with the legalistic aspects of tax than with tax’s financial, economic, or administrative aspects; however, it is sometimes hard not to correlate all of those aspects. Here, what is of interest is how to manage the cross-border tax revenue of an individual, organization, or country, involving different national tax systems and international transactions, including income from highly intangible assets such as patents and trademarks, among others.

Each cross-border transaction attracts a certain tax and triggers one or more international tax rules, as cross-border tax rules are understood to exist not only to limit the gaps that multinational corporations use to minimize their cross-border tax obligations but also to regulate tax crime.

“The rules must be aligned with what makes the most sense from a tax perspective because they impact the behavior and reaction of multinational corporations to tax compliance.”

Under the American Tax Cuts and Jobs Act of 2017 (TCJA), the taxable income from goods manufactured partly within and partly outside the USA must be apportioned and allocated among all countries involved in the manufacturing activities.

Estonia offers some proven and instructive tax perspectives. Arguably, Estonia today has the best territorial tax system in the world, attributable to its minimal compliance burden, zero property transfer tax, the allowance to reinvest corporate profits tax-free, almost zero tax on foreign profits earned by a resident or domestic corporation, and low marginal tax rates elsewhere, encouraging investment and business with high after-tax returns.

Despite the extensive network of tax treaties existing the world over, largely intended to prevent tax cheating by closing tax loopholes, treaty abuse has risen by way of treaty shopping.

Tax treaty shopping is classic treaty abuse: it happens when a party taps the benefits of a tax treaty while being neither an intended beneficiary by design nor a member of the treaty, and as a result it;

  • Abuses the ‘first bite at the apple’ rule in international taxation regarding primacy in taxation by member countries;
  • Brings about unquantifiable political damage;
  • Deprives intended treaty members of their negotiated tax revenue supremacy;
  • Alters the agreed balance of concessions among members;
  • Causes inadequate taxation or no taxation at all; and
  • Exacerbates resident members’ loss of incentive to remain a party to the treaty.

To that end, the OECD BEPS Action 6 review reports recommend reforms and minimum-standard measures to curb tax treaty abuse and, ultimately, to facilitate the effort against tax evasion.

Tax evasion more often than not transcends national boundaries, largely due to the investigative and jurisdictional limits of a country’s revenue authority. The main forms of such jurisdictional limits are secrecy jurisdictions, tax shelters, and tax havens.

“Tax Shelters are designed to yield benefits to multinational investors and result in tax write-offs, deductions, and conversion of taxable income to capital gains taxed at minimal rates.”