
Digital Innovations and Smart Solutions for Society and Economy: Pros and Cons

By: Marcin Sikorski  
Open Access | Jul 2021

Figures & Tables

Categories of preventive policies (Source: Own elaboration)

Policies (level 1) | Description (level 2)
Fixing technology
  • Monitoring systems and behaviors, early detection of hackers

  • Adapting cybersecurity techniques to smart systems

  • Compromising attackers (buy-in)

  • AI tools used in reverse, for security and defense

  • “Red-teams” forecasting malicious activities for security, fraud, or abuse

Public awareness
  • Educating consumers about threats from “smart” products

  • Giving expert bodies a louder voice than they have now

  • Publishing case studies on incidents and threats affecting real life

  • Presenting AI with a balanced view, objective tone, and no hype

  • Expert bodies answering questions from consumers

  • Promoting consumers' right to safe and validated smart systems

  • Educating consumers in critical thinking about biased or fake news

  • Providing free tools for validating the credibility of news and media sources

Social approach
  • Promoting ethical AI to engineers and prospective developers (students)

  • Interdisciplinary design teams able to assess social impact

  • Including new (public and social) stakeholders into design process

  • Drawing on the social sciences, not only on technical domains

  • Promoting mandatory assessment of the social impact of AI applications

Business governance
  • Rewarding ethical and sustainable governance in AI business companies

  • Implementing supervised design, deployment and operation of AI

  • Assuring AI compliance with regulations (auditing, certificates)

  • Assigning process owners and leadership in AI business governance

  • Companies monitoring assessments of the social impact of AI

  • Promoting explainability and traceability of AI algorithms

Regulatory framework
  • Improving the regulatory framework for technological solutions

  • Establishing a repository of AI-related incidents and damages

  • Assigning one major AI-regulatory institution on the national level

  • Formalizing communication: regulators, governments, and AI business

  • Legal requirements for auditing, certification, and verification of AI

  • Involving intelligence services in monitoring AI-related incidents and damages

  • Protecting AI against unauthorized reverse engineering and decoding

Controls and measures
  • Hardware supply chain control: hardware manufacturers and distributors

  • Software supply chain control for critical AI components

  • Mandatory registration and insurance for robots/drones/vehicles

  • Regulatory institutions putting pressure on governments to update the law

  • Standardized security barriers to airspace and other open spaces

  • Assigning one major AI regulatory institution on the national level

  • Automated detections and automated interventions

  • Surveillance and moderation of social media and public health discourse

  • Banning specific AI technologies from authoritarian governments

  • Pervasive use of total encryption

  • Technical tools for detecting malicious bots, fake news, and forgeries
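The "fixing technology" policies above (monitoring systems and behaviors for early detection of hackers, automated detections and automated interventions) can be illustrated with a minimal anomaly-detection sketch. The function name, the threshold, and the `failed_logins` series are hypothetical illustrations, not taken from the article:

```python
# Minimal sketch of behavioral monitoring for early intruder detection:
# flag observations that deviate strongly from the series' baseline.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices whose value lies more than `threshold`
    sample standard deviations above the mean of the series."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # flat series: nothing stands out
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts; hour 6 spikes suspiciously.
failed_logins = [2, 3, 1, 2, 4, 2, 60, 3]
print(flag_anomalies(failed_logins))  # [6]
```

Real deployments would use richer behavioral features and adaptive baselines; the point here is only that a deviation from a learned baseline can trigger an early alert before a human analyst intervenes.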

Categories of triggers (Source: Own elaboration)

Triggers (level 1) | Description (level 2)
System malfunction
  • Allowing AI to use incorrect or incomplete input data

  • Technological flaws resulting in suboptimal decisions or control actions

  • Poor quality of AI: faulty machine learning, inadequate supervision

  • Attacks self-initiated by AI, self-initiated modification of software

  • Lack of explainability, transparency, and traceability of AI software

  • Learning and adaptation of AI software beyond human control

Hacking and hijacking
  • Dual use of AI software: for terrorism, hijacking, taking over control

  • Automated fabrication of data and news for blackmail or discrediting

  • Swamping information channels with noise

  • AI-based prioritizing of attack targets, automated vulnerability discovery

  • Open code and open algorithms making destructive tools easier to develop

  • Humans reprogramming AI for malicious use

  • Corrupting algorithms by disgruntled employees or external foes

  • Hijacking autonomous vehicles or software robots (taking over control)

  • Building and deploying malicious bots or robots

  • Nanobots for deploying toxins into the environment or living bodies

Social manipulation
  • Fake news for destabilizing, manipulating elections

  • Automated social engineering attacks

  • Malicious chatbots mimicking humans, chatbots posing as friends

  • Automated influence campaigning (elections, shopping, etc.)

  • Automated scams and targeted blackmail

  • Social bots promoting extremist/hysterical groups or drawing users into them

  • Malicious steering of users toward or away from specific content

Business greed
  • Greed and haste: releasing untested, unvalidated software

  • Ignorance or recklessness of business leaders or companies

  • No governance, no supervision, no ethics related to AI

  • No AI-related risk management activities

  • No recovery plans for AI-related damages/impacts

  • No forecasting/assessment of social effects caused by AI

Regulatory gaps
  • No dedicated consumer protection from AI (smart) products

  • No control/registry of AI software applications

  • Lack of coordinated supervision or one responsible body on a national level

  • Leaders unaware of or ignoring the opinions of experts

  • Poor customer awareness of AI-caused harms

  • No systematic risk analysis, no forecasting, no foresight

  • No lessons learned from reported incidents

  • No risk identification performed as to the social impact of AI

  • AI-related gaps in the legal system, lacking standards and procedures
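Several triggers above involve social bots mimicking humans. The corresponding counter-measure listed under the preventive policies (technical tools for detecting malicious bots) can be sketched as a simple rule-based scorer; all feature names and thresholds below are illustrative assumptions, not from the article:

```python
# Illustrative rule-based bot detector: score an account on a few
# behavioral features and flag it when the score crosses a threshold.
def bot_score(account):
    score = 0
    if account["posts_per_day"] > 100:         # inhuman posting rate
        score += 2
    if account["account_age_days"] < 7:        # very new account
        score += 1
    if account["followers"] < 10 and account["following"] > 1000:
        score += 2                             # mass-follow pattern
    if account["duplicate_post_ratio"] > 0.8:  # mostly repeated content
        score += 2
    return score

def is_likely_bot(account, threshold=4):
    return bot_score(account) >= threshold

suspect = {"posts_per_day": 240, "account_age_days": 2,
           "followers": 3, "following": 5000, "duplicate_post_ratio": 0.95}
print(is_likely_bot(suspect))  # True
```

Production systems combine many more signals (network structure, posting-time patterns, content similarity) and typically learn the weights from labeled data rather than hard-coding them, but the scoring structure is the same.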

Categories of damages (Source: Own elaboration)

Damages (level 1) | Description (level 2)
Social and political
  • Undermining public order and trust in the state, businesses, and society

  • Affecting AI-based governance, justice, etc.

  • Generating false recommendations, judgments, and decisions

  • State abuse of automated electronic surveillance

  • Automated AI-based censorship online

  • Social manipulation for rebel or pro-government campaigns

  • Social trust placed in fabricated entities interacting online like humans

  • Malicious hijacking of online campaigns

  • Impersonal, anonymous, distant relations with the state or institutions

Physical and material
  • IT-initiated crashes and disruptions (deliberate or accidental)

  • Generating false alarms and panic

  • Remote or delayed attack operations

  • Robots disabling or entering security zones and damaging infrastructure

  • Machine-based false judgments and decisions leading to material loss

  • Human sabotage of and damage to automated surveillance equipment

Business and economic
  • Disruption of markets or regional economies

  • Paralyzing important institutions

  • Manipulations in social media for discrediting business brands

  • Business-oriented manipulations aimed at affecting economic conditions

  • Reputational damages, erosion of trust

  • Financial losses and damages due to malicious activities online

  • Criminal, legal, or insurance problems

Individual and private
  • AI used for steering users toward or away from specific content

  • AI-driven emotional scams (dating, financial, etc.)

  • Privacy violations, data breaches

  • AI-based medical misdiagnosis, physical/health damages

  • AI-based abusive profiling of users, patients, or consumers

  • Undermined personal trust in the state, businesses, and society

  • Self-imposed censorship due to ubiquitous online surveillance

  • Fabricated evidence (videos) in media or in judicial cases

  • Personal addiction to digital platforms (social, entertainment, etc.)

Defense and security
  • Using AI to access classified information

  • Using AI to attack critical infrastructure, command centers

  • Taking over control, mimicking human operators

  • Creating panic, provoking conflicts affecting national security

  • AI-controlled robots disabling national security systems

DOI: https://doi.org/10.2478/fman-2021-0008 | Journal eISSN: 2300-5661 | Journal ISSN: 2080-7279
Language: English
Page range: 103 - 116
Published on: Jul 25, 2021
Published by: Warsaw University of Technology
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2021 Marcin Sikorski, published by Warsaw University of Technology
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.