Nobody will ‘escape’ the advent of Artificial Intelligence (AI) as it is taken up across all industries in the coming years. The field of engineering will be no exception. What can two companies already active in AI tell us? Paul Daugherty of Accenture and Warren Bonheim of Zinia give some pointers.
“Invest more in the people than in the technology,” advises Paul Daugherty, CTO of Accenture. “Artificial Intelligence (AI) is set to redefine the essence of work, transforming roles into those of advisors, creators, developers, and protectors within the enterprise sphere. AI can streamline productivity and tackle complex challenges, highlighting a shift towards more engaging and meaningful work.”
The transformation does not stop at job enhancement; it extends to the genesis of new career paths. As generative AI permeates various sectors, there is a burgeoning demand for roles that blend technical savvy with creative and strategic thinking. This shift is catalysing a comprehensive re-skilling movement, spearheaded by organisations keen on nurturing a workforce that is resilient and future-ready.
Technical skills: “We need more technical skills, especially in fields such as engineering, technology, and trade,” says Paul. Recent academic studies indicate that the demand for IT skills exceeds supply. The gap stems from a mismatch between industry needs and the skills graduates possess, negative perceptions of information and computer science professions among university students, insufficient attention to key technical and organisational issues in academic curricula, and a discrepancy between graduates’ qualifications and their competencies.
Ethics: However, the journey towards a harmonious human-AI coexistence is paved with ethical considerations. Privacy, transparency, and inclusivity stand at the forefront of this venture. As AI systems become more integrated into daily life and business operations, there is a growing need to ensure these technologies are developed and deployed responsibly. An AI Ethics Officer is responsible for creating guidelines that govern the ethical use of AI within organisations.
Data annotation: The accuracy of AI models depends heavily on the quality of the data on which they are trained. Data Annotation Specialists play a crucial role in the AI development process by labelling data accurately, which is then used to train machine learning models. This can involve anything from identifying objects in images for computer vision tasks to annotating speech for natural language processing systems.
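In practice, annotated training data is often just labelled records, and a first quality gate is checking every label against the agreed label set before training. The sketch below is illustrative: the record format, label names, and file names are hypothetical, not any standard.

```python
# Minimal sketch of validating annotated training data (hypothetical format).
# Each record pairs a raw input with a human-supplied label; a model is only
# as good as the consistency of these labels.

ALLOWED_LABELS = {"cat", "dog", "background"}  # illustrative label set

def validate_annotations(records):
    """Split records into those with valid labels and the indices
    of records that need re-annotation."""
    good, rejected = [], []
    for i, rec in enumerate(records):
        if rec.get("label") in ALLOWED_LABELS:
            good.append(rec)
        else:
            rejected.append(i)
    return good, rejected

annotations = [
    {"image": "img_001.jpg", "label": "cat"},
    {"image": "img_002.jpg", "label": "dgo"},   # typo: inconsistent label
    {"image": "img_003.jpg", "label": "dog"},
]

good, rejected = validate_annotations(annotations)
print(len(good), rejected)  # 2 [1]
```

Catching the mislabelled record before training, rather than after a model starts misclassifying, is the essence of the annotation specialist's role.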
Business integration: Integrating AI into existing business processes requires not just technical knowledge but also a strategic vision. AI Business Integration Specialists work at the intersection of AI technology and business strategy, helping organisations identify opportunities for AI implementation that align with business objectives.
Data, data, data: The cornerstone of any AI system is data. Accurate and high-quality data are essential for deriving meaningful insights and making informed business decisions. Without this foundation, AI’s potential will be severely undermined. Immediate investment in data quality improvement and analytics infrastructure is critical. This foundational work will ensure that any AI initiatives are built on accurate, reliable data, enabling better decision-making and fostering trust in AI-driven processes.
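“Investing in data quality” starts with mechanical checks. A minimal sketch, with hypothetical field names, that flags missing values and duplicate records before data feeds an AI pipeline:

```python
def quality_report(rows, required_fields):
    """Count missing required fields and exact-duplicate rows in a dataset."""
    missing = sum(
        1 for row in rows
        for field in required_fields
        if row.get(field) in (None, "")
    )
    seen, duplicates = set(), 0
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"missing_values": missing, "duplicates": duplicates}

# Hypothetical customer records with typical defects.
customers = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},                # missing value
    {"id": 1, "email": "a@example.com"},   # duplicate record
]
report = quality_report(customers, required_fields=["id", "email"])
print(report)  # {'missing_values': 1, 'duplicates': 1}
```

Real analytics infrastructure would add type checks, range checks, and freshness checks, but even counts this simple expose whether the data foundation can be trusted.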
Bite-sized pieces: AI implementation should be approached in manageable stages: implement AI in small, bite-sized pieces, followed by regular reviews and adjustments. This incremental approach lets companies learn and adapt without the risks of large-scale, untested deployments, and keeps implementation plans flexible enough for continuous learning, mitigating risk and ensuring smoother transitions as new technologies are adopted.
Business case: “For AI to be truly effective, its use must have a clear and quantifiable business value. Organisations should not pursue AI for its own sake but rather focus on how it can enhance decision-making, streamline operations, and provide a competitive advantage,” Paul concludes.
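Quantifying that business value can start as a back-of-the-envelope calculation before any deployment. All figures below are hypothetical, purely to show the shape of the sum:

```python
# Hypothetical figures for a first-pass AI business case.
annual_hours_saved = 1200        # analyst hours automated per year (assumed)
hourly_cost = 50.0               # fully loaded cost per hour (assumed)
annual_benefit = annual_hours_saved * hourly_cost   # 60000.0

implementation_cost = 45000.0    # licences, integration, training (assumed)
roi = (annual_benefit - implementation_cost) / implementation_cost
print(f"ROI in year one: {roi:.0%}")  # ROI in year one: 33%
```

If a proposed AI project cannot survive even this rough arithmetic, it is being pursued for its own sake rather than for business value.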
Security
On the subject of cyber security and AI, Warren Bonheim, Managing Director at Zinia, has seven top tips:
“AI loves data. It needs lots of it to work well, which means it can access tons of sensitive info, and we need to make sure this data is locked up tight. Many AI tools use third-party platforms to store or analyse your data, and they may not always have stringent security protocols, leaving sensitive information vulnerable.
“Sometimes, the biggest slip-ups come from our own team. Simple mistakes, like setting up privacy settings incorrectly, losing devices, or unwittingly granting permissions to malicious apps, can invite trouble,” he says.
- Make clear rules about how to use AI safely. By establishing these ‘house rules’ for your technology, you’ll help protect your business from risks and make sure everyone knows how to use AI tools responsibly and effectively. And remember, it’s perfectly okay if you’re not sure how to set all this up on your own. It’s smart to bring in an IT professional to ensure everything is set up correctly and securely.
- It’s super important to make sure everyone on your team knows about the risks and the wrong/right ways to use AI technologies. Having regular training sessions isn’t just about rule-following—it’s about helping everyone spot security risks before they turn into real problems. Plus, when everyone’s clued in on the best ways to handle these tools, it’s one of the best shields you can have against data breaches.
- Block all unwanted AI tools on your firewall and disallow non-approved users from making these decisions without the proper due process or knowledge. Some AI tools also take notes of meetings, often where teams discuss strategic business activities; these too need to be controlled through policies and by using approved vendors.
- Steer clear of free AI tools—most of them can share your sensitive data, making it pretty much public. It’s like leaving your diary open on a park bench; anyone can peek. If privacy is a big deal for you, it might be worth it to invest in a paid service that promises better security.
- When you bring in third-party AI tech you’ve got to be really picky about who you team up with. Make sure they’re good with security. Going with partners who’ve got a solid rep for protecting data can really cut down the risks that come with handling and storing your info on their platforms.
- It’s smart to do regular check-ups on your AI tech. These security assessments help you spot any weaknesses, whether it’s with the devices themselves, how data is sent back and forth, or how it’s stored. Catching these issues early means you can fix them before they turn into bigger problems, helping you steer clear of security headaches.
- Use tools like encryption, which is like a secret code for your data. When you send or store info through your AI tools, encryption scrambles it up so only the right people can read it. And don’t forget about beefing up your access controls, too—things like multi-factor authentication and strict permissions make sure only the folks who really need access can get into your sensitive stuff.
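To make the multi-factor authentication tip above concrete: most authenticator apps implement time-based one-time passwords (TOTP, RFC 6238), which can be computed with nothing beyond the standard library. This is a sketch of the SHA-1 variant; the secret shown is the published RFC 6238 test value, not a real credential, and production encryption or authentication should always use a vetted library rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, unix_time: int, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of elapsed 30-second intervals.
    counter = struct.pack(">Q", unix_time // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" encoded in base32).
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, unix_time=59))  # 287082 (matches the RFC 6238 test vector)
```

Because the code depends on both a shared secret and the current time, a stolen password alone is not enough to get in, which is exactly the extra shield the tip describes.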
“Putting these simple steps into action will help you enjoy the benefits of AI without the worry. It’s not about fearing the new but being smart about it,” Warren concludes.