APPLICATION OF ARTIFICIAL INTELLIGENCE: RISK PERCEPTION AND TRUST IN THE WORK CONTEXT WITH DIFFERENT IMPACT LEVELS AND TASK TYPES

Abstract:

Artificial Intelligence (AI) has become an integral part of various work contexts, transforming the way tasks are performed and decisions are made. However, the successful adoption of AI systems depends not only on their technical capabilities but also on how they are perceived by human users. This study aims to explore the relationship between risk perception, trust, and the application of AI in the work context, considering different impact levels and task types.

The research employs a mixed-methods approach, combining qualitative interviews and quantitative surveys of employees across various industries. The interviews provide in-depth insights into employees’ perceptions of AI, their understanding of its risks, and the factors shaping their trust in AI systems. The surveys collect quantitative data on risk perception, trust, and the impact levels and task types to which AI is applied.

Preliminary findings suggest that risk perception and trust in AI systems vary with impact level and task type. High-impact tasks, such as those involving critical decision-making, are associated with higher perceived risk and lower initial trust in AI systems. Conversely, low-impact tasks, such as administrative or repetitive tasks, are perceived as less risky and attract higher initial trust.

The study further reveals that task type plays a crucial role in shaping risk perception and trust. Tasks requiring extensive cognitive reasoning and subjective judgment are perceived as riskier and tend to elicit lower trust in AI systems, whereas rule-based tasks with clear guidelines are perceived as less risky and elicit higher trust.

The results have implications for organizations seeking to implement AI systems in the workplace. Understanding the factors that influence risk perception and trust can help organizations design and deploy AI technologies effectively. Trust-fostering strategies, such as transparent communication, user involvement, and explainable AI systems, can mitigate the negative effects of perceived risk and enhance user acceptance.

Overall, this study contributes to the growing body of knowledge on AI in the work context and offers insights into the interplay among risk perception, trust, impact levels, and task types.
