WELLBEING AI RESEARCH INSTITUTE


WELLBEING AI PRINCIPLES

Cognitive AI – PROTECTING THE HUMAN MIND

Artificial intelligence should be built around the human mind: it should respect, support and expand cognitive capacities, rather than exploit cognitive vulnerabilities and biases simply to sell products. We support technology that adds value to human lives by being mindful of cognitive biases, understanding cognitive processes and supporting them.

ATTENTION SAVING, NOT CONSUMING

Attention is a vital human cognitive resource, like time, and should be treated as such. In the current attention economy, apps, websites and ads all compete for human attention in an attempt to increase revenue. Aggressive technologies exploit cognitive vulnerabilities and leave people distracted, interrupted and unfocused. AI and other technologies ought to protect human attention and mindfulness. We make it our mission to show how this is possible, and we support tech companies that promote attention-saving AI technologies.

ADAPTIVE TO HUMANS, NOT ADAPTING HUMANS

Many current AI technologies are built without knowledge of what supports humans, or of how humans process information well. They maximize the number of users rather than the depth of information assimilation or the smoothness of cognitive processing. In interacting with such technology, the human in the loop adapts to it, learning to behave in ways that serve the technology's design but are unnatural, clumsy or harmful to humans. We design and support AI technology that adapts to humans, rather than using humans to gather data, modify human behaviour for company purposes, or accommodate the technology itself.

MINDFUL OF HUMAN RESOURCES USE – ESPECIALLY TIME

Many technologies do not care about their users' goals or life balance, but instead aim to capture their time, treating app use and time spent as measures of success. We propose other success measures, and we support and design AI solutions that are mindful of human resource use, especially time, possibly the most limited human resource. The same principle applies to other resources, like data. Is the data required from users ultimately employed with human wellbeing principles in mind, or only with business principles?

Emotional and Social AI

Human relationships and human emotions matter, even if they are considered “soft” factors. We welcome AI technology that aims to solve or support soft human factors, not just abstract or logical problems. We reflect an empathic and inclusive philosophy of all of humanity’s cognitive capacities and needs. AI technology focused on emotion should not exploit shallow positive or negative attitudes to sell products, but should support healthy human emotion management. Similarly, wellbeing AI should not merely offer shallow social features, but should care for healthy social interaction.

SIMPLICITY

We believe fewer features, reflecting essential wellbeing support, are better than more features giving an illusion of limitless choice. We adopt a simplicity strategy in AI technology design. We believe that identifying what is essential, and focusing on it, is key to productive, time-saving AI.

Wellbeing Test

A key question for Wellbeing AI is: is human wellbeing improved as a result of this technology? In our experience, technology is often a catalyst for progress, bringing about both positive and negative results. When creating any technology, we need to think about, and enable a deep conversation on, its dark facets. How can these be prevented or limited? Are the good effects worth the negative side effects?

We use the term wellbeing to mean human happiness over a longer timeframe, rather than just a positive short-term mood effect.

TESTING WITH HUMAN FACTORS

We believe in the scientific method of testing with humans to explore the effect various AI technologies have on their users' wellbeing. Many companies currently use shallow testing strategies that care only about supporting product sales, checking which product is more attractive to customers or users in the short term. We support, and help design, fully fledged statistical analyses of effects on human behaviour and wellbeing for all tech and AI products.
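As a minimal sketch of what such testing can look like, the example below compares self-reported wellbeing scores between two randomized groups of users with Welch's t-test, implemented with only the Python standard library. All group names and scores here are invented for illustration; a real study would need proper sampling, validated wellbeing instruments, and pre-registered analysis.

```python
# Hypothetical sketch: comparing self-reported wellbeing scores (1-10)
# between users of a baseline app version (control) and a redesigned,
# attention-saving version (treatment). Data below is invented.
import math
import statistics

def welch_t_test(a, b):
    """Return (t statistic, approximate degrees of freedom) for two samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    na, nb = len(a), len(b)
    se = math.sqrt(va / na + vb / nb)   # standard error of the mean difference
    t = (ma - mb) / se
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    return t, df

control   = [5, 6, 5, 4, 6, 5, 7, 5]   # wellbeing scores, baseline version
treatment = [7, 6, 8, 7, 6, 8, 7, 9]   # wellbeing scores, redesigned version

t, df = welch_t_test(treatment, control)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A large positive t statistic suggests the treatment group reports higher wellbeing than chance alone would explain; the t and df values would then be checked against a t distribution for significance. The deeper point is that the outcome measured is wellbeing itself, not clicks or time in app.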

INCLUSIVE

We know that the data and design of our technology profoundly influence people's lives. AI algorithms can dictate who gets access to home financing and at what rates, advise on the length of sentences for prisoners, and direct the information a user receives on the web, biasing their perception of reality. We support the design and use of AI technology that constantly questions such algorithmic and data biases. We believe that making sure all humans get access to financing, justice and balanced information is our civic duty as AI tech makers.

POSITIVE BIAS OR NO BIAS

When a complete lack of bias is impossible in presenting data, because choices must be made about what data is shown and how it is framed, we believe AI should reflect the best possible behavioural influences. We believe human wisdom should create not just artificial intelligence, but wise artificial intelligence (sapioAI).

TRANSPARENT AND ACCOUNTABLE

We understand the need to protect business assets and algorithms. However, we believe transparency and accountability can be accommodated, and business assets still protected, with the help of independent third-party consultants analysing and reporting existing biases. We offer such consulting within the limits of our resources. To go further, we support the creation of such consulting entities and offer our expertise in establishing engagement guidelines for transparent and accountable AI focused on human wellbeing.
