Overcoming Resistance to Change: Artificial Intelligence in the Energy Sector

Jerome Lambert

Abstract

Background and Objectives: Artificial Intelligence (AI) promises productivity, safety, and sustainability gains in asset-intensive sectors; however, outcomes in the energy sector remain uneven. The sector's safety-critical operations, capital intensity, and stringent regulatory requirements make it a particularly demanding context for AI adoption, where technical performance alone is insufficient to ensure value. This study treats adoption as a socio-technical process rather than a tooling decision. It addresses three research questions: (RQ1) how workforce development and change leadership shape acceptance and sustained use; (RQ2) which organisational and governance conditions mitigate resistance and enable legitimate deployment; and (RQ3) under what conditions adoption yields operational reliability and environmental performance aligned with decarbonisation goals.


Methodology: A qualitative, multi-case design triangulated semi-structured interviews with senior managers, Likert-scale surveys of mid-level managers and technical staff, and analysis of internal policies and strategy documents. Data were anonymised, thematically coded using a blended inductive–deductive approach, organised in a shared codebook, and synthesised across cases to map convergences and divergences in readiness, workforce development, and governance. Intercoder reliability was assessed, and disagreements were resolved through adjudication and iterative refinement of the codebook. Triangulation across sources maintained a transparent chain of evidence. Ethical safeguards included informed consent, confidentiality, and prior approval from the relevant institutional authorities.
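As an illustration of the reliability step, the sketch below computes Cohen's kappa for two coders applying a shared codebook. The abstract does not name the statistic the study used; kappa is simply a common choice for two coders, and the codes and segment labels here are hypothetical.

```python
# Minimal sketch: intercoder reliability via Cohen's kappa for two coders.
# The study does not specify its reliability statistic; this is illustrative.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels on the same text segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's label marginals.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes applied by two coders to ten interview segments.
a = ["readiness", "governance", "workforce", "governance", "readiness",
     "workforce", "workforce", "governance", "readiness", "governance"]
b = ["readiness", "governance", "workforce", "readiness", "readiness",
     "workforce", "governance", "governance", "readiness", "governance"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # disagreements go to adjudication
```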


Main Results: Three reinforcing levers shape adoption outcomes. First, broad-based capability building beyond specialist teams prevents benefits from concentrating in expert enclaves and reduces brittleness at scale. Second, communicative governance that couples transparency with contestability (model cards, bias tests, validation reports, and explicit appeal rights) earns trust, curbs shadow workarounds, and improves safety culture. Third, a tight workflow fit that minimises cognitive overhead at the decision point accelerates legitimate use and strengthens links to emissions monitoring and predictive-maintenance outcomes. Thin training coverage fosters anxiety about substitution and slows diffusion; structured upskilling and well-defined recourse mechanisms are associated with higher confidence, productivity, and clearer sustainability pathways.
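To make the governance artefacts concrete, here is a minimal, hypothetical sketch of what a model card might record for a predictive-maintenance model. The study names model cards among its assurance artefacts but does not prescribe their fields, so every field and value below is illustrative.

```python
# Illustrative only: one shape a model card could take for a
# predictive-maintenance model. All fields and values are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelCard:
    name: str
    intended_use: str
    out_of_scope: list[str]        # uses the model was not validated for
    training_data: str             # provenance and coverage of the data
    validation: dict[str, float]   # headline metrics from the validation report
    bias_tests: dict[str, str]     # test name -> outcome summary
    recourse: str                  # who can contest an output, and how
    review_due: str                # date the card must be re-approved

card = ModelCard(
    name="turbine-bearing-failure-predictor",
    intended_use="Rank bearings for inspection; advisory only, human decides.",
    out_of_scope=["automated shutdowns", "assets outside the training fleet"],
    training_data="2019-2024 vibration telemetry, fleet A; sparse cold-climate data.",
    validation={"precision_at_10": 0.82, "recall": 0.71},
    bias_tests={"asset_age_skew": "no material drift across cohorts"},
    recourse="Operators may flag a ranking for engineering review within 24h.",
    review_due="2026-06-30",
)
```

Pairing such a card with the validation report and an explicit appeal route is what the abstract describes as coupling transparency with contestability.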


Discussion: Algorithmic accuracy alone does not determine value; legitimacy and uptake hinge on the readiness of people and processes. The three levers translate the literature on dynamic capabilities, AI readiness, and human responses to automation into operational guidance: invest in non-specialist literacy, institutionalise assurance and recourse, and engineer for workflow ergonomics in safety-critical contexts. Environmental gains materialise where oversight intensity, data quality, and targeted use cases align, indicating that governance quality conditions the conversion of adoption into credible emissions reductions. The route to responsible scale is pragmatic: build organisation-wide competence, communicate for legitimacy, and design for workflow fit.


Conclusions: Leaders should fund training for coverage and design quality rather than headline hours, equip non-specialists to interpret model outputs, pair performance artefacts with participatory routines, and treat explainability as a usability requirement. Policymakers can reinforce these conditions by shifting from technology-neutral principles to auditable process standards that couple AI investment with reskilling and data-quality obligations. Future research should extend the design longitudinally and incorporate behavioural metrics to test causal links. The contribution is a field-tested playbook linking human capability, assurance, and workflow design to durable, auditable value in safety-critical energy contexts.

Article Details

Section
Research Articles

References

Akter, S., Sultana, S., Mariani, M., Wamba, S. F., Spanaki, K., & Dwivedi, Y. K. (2023). Advancing algorithmic bias management capabilities in AI-driven marketing analytics research. Industrial Marketing Management, 114, 243–261. https://doi.org/10.1016/j.indmarman.2023.08.013

Alshahrani, A., Dennehy, D., & Mäntymäki, M. (2021). An attention-based view of AI assimilation in public sector organisations: The case of Saudi Arabia. Government Information Quarterly, 39(4), 101617. https://doi.org/10.1016/j.giq.2021.101617

Bai, J. Y., Huan, T. C. T., Leong, A. M. W., Luo, J. M., & Fan, D. X. (2025). Examining the influence of AI event strength on employee performance outcomes: Roles of AI rumination, AI-supported autonomy, and felt obligation for constructive change. International Journal of Hospitality Management, 126, 104111. https://doi.org/10.1016/j.ijhm.2025.104111

Baxter, G., & Sommerville, I. (2011). Socio-technical systems: From design methods to systems engineering. Interacting with Computers, 23(1), 4–17. https://doi.org/10.1016/j.intcom.2010.07.003

Cao, Q., Chi, C., & Shan, J. (2025). Can artificial intelligence technology reduce carbon emissions? A global perspective. Energy Economics, 143, 108285. https://doi.org/10.1016/j.eneco.2025.108285

Deriu, V., Pozharliev, R., & De Angelis, M. (2024). How trust and attachment styles jointly shape job candidates' AI receptivity. Journal of Business Research, 179, 114717. https://doi.org/10.1016/j.jbusres.2024.114717

Feng, L., Qi, J., & Zheng, Y. (2024). How can AI reduce carbon emissions? Insights from a quasi-natural experiment using generalised random forest. Energy Economics, 141, 108040. https://doi.org/10.1016/j.eneco.2024.108040

Gaczek, P., Leszczyński, G., & Mouakher, A. (2023). Collaboration with machines in B2B marketing: Overcoming managers' aversion to AI-CRM with explainability. Industrial Marketing Management, 115, 127–142. https://doi.org/10.1016/j.indmarman.2023.09.007

Gillner, S. (2023). We're implementing AI now, so why not ask us what to do? How AI providers perceive and navigate the spread of diagnostic AI in complex healthcare systems. Social Science & Medicine, 340, 116442. https://doi.org/10.1016/j.socscimed.2023.116442

Haque, A. B., Islam, A. N., & Mikalef, P. (2022). Explainable artificial intelligence (XAI) from a user perspective: A synthesis of prior literature and problematising avenues for future research. Technological Forecasting and Social Change, 186, 122120. https://doi.org/10.1016/j.techfore.2022.122120

Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014

Horvath, L., James, O., Banducci, S., & Beduschi, A. (2023). Citizens' acceptance of artificial intelligence in public services: Evidence from a conjoint experiment about processing permit applications. Government Information Quarterly, 40(4), 101876. https://doi.org/10.1016/j.giq.2023.101876

Laine, J., Minkkinen, M., & Mäntymäki, M. (2024). Ethics-based AI auditing: A systematic literature review on conceptualisations of ethical principles and knowledge contributions to stakeholders. Information & Management, 61(5), 103969. https://doi.org/10.1016/j.im.2024.103969

Libai, B., Bart, Y., Gensler, S., Hofacker, C. F., Kaplan, A., Kötterheinrich, K., & Kroll, E. B. (2020). Brave new world? On AI and the management of customer relationships. Journal of Interactive Marketing, 51(1), 44–56. https://doi.org/10.1016/j.intmar.2020.04.002

Lin, Q., & He, L. (2024). Does artificial intelligence (AI) awareness affect employees in giving a voice to their organisation? A cross-level model. International Journal of Hospitality Management, 123, 103947. https://doi.org/10.1016/j.ijhm.2024.103947

Papagiannidis, E., Mikalef, P., & Conboy, K. (2025). Responsible artificial intelligence governance: A review and research framework. The Journal of Strategic Information Systems, 34(2), 101885. https://doi.org/10.1016/j.jsis.2024.101885

Pramanik, P., Jana, R. K., & Ghosh, I. (2024). AI readiness enablers in developed and developing economies: Findings from the XGBoost regression and explainable AI framework. Technological Forecasting and Social Change, 205, 123482. https://doi.org/10.1016/j.techfore.2024.123482

Saura, J. R., Ribeiro-Soriano, D., & Palacios-Marqués, D. (2022). Assessing behavioral data science privacy issues in government artificial intelligence deployment. Government Information Quarterly, 39(4), 101679. https://doi.org/10.1016/j.giq.2022.101679

Straub, V. J., Morgan, D., Bright, J., & Margetts, H. (2023). Artificial intelligence in government: Concepts, standards, and a unified framework. Government Information Quarterly, 40(4), 101881. https://doi.org/10.1016/j.giq.2023.101881

Tchuente, D., Lonlac, J., & Kamsu-Foguem, B. (2023). A methodological and theoretical framework for implementing explainable artificial intelligence (XAI) in business applications. Computers in Industry, 155, 104044. https://doi.org/10.1016/j.compind.2023.104044

Teece, D. J. (2007). Explicating dynamic capabilities: The nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal, 28(13), 1319–1350. https://doi.org/10.1002/smj.640

Tehrani, A. N., Ray, S., Roy, S. K., Gruner, R. L., & Appio, F. P. (2024). Decoding AI readiness: An in-depth analysis of key dimensions in multinational corporations. Technovation, 131, 102948. https://doi.org/10.1016/j.technovation.2023.102948

Trist, E. L., & Bamforth, K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3–38. https://doi.org/10.1177/001872675100400101

Vo, V., Chen, G., Aquino, Y. S. J., Carter, S. M., Do, Q. N., & Woode, M. E. (2023). Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Social Science & Medicine, 338, 116357. https://doi.org/10.1016/j.socscimed.2023.116357

Wamba, S. F., Queiroz, M. M., & Trinchera, L. (2023). The role of artificial intelligence-enabled dynamic capability on environmental performance: The mediation effect of a data-driven culture in France and the USA. International Journal of Production Economics, 268, 109131. https://doi.org/10.1016/j.ijpe.2023.109131

Wang, Z., Zhang, T., Ren, X., & Shi, Y. (2024). AI adoption rate and corporate green innovation efficiency: Evidence from Chinese energy companies. Energy Economics, 132, 107499. https://doi.org/10.1016/j.eneco.2024.107499

Wenderott, K., Krups, J., Luetkens, J. A., & Weigl, M. (2024). Radiologists' perspectives on the workflow integration of an artificial intelligence-based computer-aided detection system: A qualitative study. Applied Ergonomics, 117, 104243. https://doi.org/10.1016/j.apergo.2024.104243

Wirtz, B. W., Weyerer, J. C., & Kehl, I. (2022). Governance of artificial intelligence: A risk and guideline-based integrative framework. Government Information Quarterly, 39(4), 101685. https://doi.org/10.1016/j.giq.2022.101685

Yigitcanlar, T., Li, R. Y. M., Beeramoole, P. B., & Paz, A. (2023). Artificial intelligence in local government services: Public perceptions from Australia and Hong Kong. Government Information Quarterly, 40(3), 101833. https://doi.org/10.1016/j.giq.2023.101833

Zhou, Q., Chen, K., & Cheng, S. (2024). Bringing employee learning to AI stress research: A moderated mediation model. Technological Forecasting and Social Change, 209, 123773. https://doi.org/10.1016/j.techfore.2024.123773