
AI can be creative, ethical when applied humanly

April 15, 2022 | Hi-network.com

Artificial intelligence (AI) has seen increasing adoption with its use expanding into fraud detection and even the creative realm, which is commonly perceived to be intrinsically human. Humans, though, still have a role to play in areas that require intuition and morality. 

Creative AI may seem to be an oxymoron, but AI-powered processes already are at work in activities that thrive on creativity, according to executives at Appier. Based in Taiwan, the SaaS vendor taps AI to build products for digital marketers and brands, processing almost 30 billion predictions a day. Its tools are touted to help these companies deliver richer user experiences and identify customers with long-term value.

AI was now being used to support creative processes such as generating marketing slogans, images, and music based on given parameters, said Appier's chief AI scientist Sun Min in an interview with ZDNet.


He explained that marketers needed to work through several tasks to create a campaign, including deciding on the slogan to use, images or videos to represent the brand or product, and the overall theme comprising the colours and background music to use. 

Here, AI could be used to reduce the need for repetitive steps, Sun said. 

Noting that most campaigns now ran across digital platforms, he said marketers would experiment with different music and videos, rather than adopt the same video and slogan across a campaign.

Given the parameters within which to work, AI could take a piece of content and automatically generate multiple combinations of text, music, and video based on a single campaign idea.

These then could be automatically rolled out and tested across audience segments to identify those that garnered the most positive feedback. All marketers needed to do was pick out the top five most popular pieces of content and focus their resources on building these out further for their campaign.

A lot of repetitive tasks could be reduced, freeing humans to be more creative in the process, Sun said. Humans, though, still needed to feed the AI algorithm the necessary parameters and, ultimately, select the best five out of the 100 options generated by the AI system, he said.

"AI can be creative...but intuition is still a human factor," he added.

Lin SD, Appier's chief machine learning scientist, concurred. He noted that AI and machine learning could look at large datasets and find unique ways to integrate ideas to generate something "creative". 

It offered one way of using big data to produce interesting new ideas, Lin said, but humans eventually had to judge whether these ideas actually were interesting or simply a mashup of things that had little creative value.

Question of morality in AI use for crime

Humans, too, cannot be removed from the equation where ethics are central to the AI discourse, such as in law enforcement.

In making a decision, humans would consider the morals behind it, said David Hardoon, managing director at Aboitiz Data Innovation (ADI), the Singapore-based data science and AI arm of Philippine conglomerate, the Aboitiz Group. He also is chief data and AI officer for UnionBank Philippines. 

"Can AI help us make a decision? Yes. Can it decide the morality of a decision? Absolutely not. This distinction is important," said Hardoon, who was previously chief data office and data analytics head of Monetary Authority of Singapore. 

Commenting on why AI should be applied with care in certain areas such as law enforcement, he stressed the need to ensure the technology could be deployed in a robust manner. This currently was not the case, he said, pointing to the use of AI in facial recognition. 

Facial recognition software has come under fire for its inaccuracy, specifically in identifying people with darker skin tones. A 2017 MIT study, which found that darker-skinned females were 32 times more likely to be misclassified than lighter-skinned males, pointed to the need for more phenotypically diverse datasets to improve the accuracy of facial recognition systems.

Vendors such as IBM, Microsoft, and Amazon have halted sales of facial recognition technology to police and law enforcement agencies, citing human rights concerns and racial discrimination. Most have urged governments to establish stronger regulations to govern and ensure the ethical use of facial recognition tools.

Hardoon also underscored the need for regulation, balanced so that innovation could continue to thrive in the industry.


"It doesn't mean you cannot use AI to help in your decision making or that you cannot apply it," he told ZDNet. He pointed to the importance of establishing three key components--namely, data, the AI algorithm, and operationalisation--in applying AI. 

With regard to data, for instance, processes must be in place around key areas such as data governance, risk management, and quality control, he noted. At the same time, AI algorithms should be robust and risk-tolerant, while AI systems and models must perform reliably across all segments in which they were deployed, such as minority groups in the case of facial recognition.
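To make the operationalisation point more concrete, here is a minimal sketch, assuming a simple classification setting with made-up predictions and group labels, of the kind of per-group performance check that would flag a model performing unevenly across the segments it serves. None of the data, group names, or thresholds come from Hardoon or ADI.

```python
from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Compute accuracy separately for each demographic group.

    predictions, labels, and groups are parallel lists; the group labels
    here are illustrative, not drawn from any real deployment.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_gaps(group_accuracy, max_gap=0.05):
    """Flag the model if accuracy between the best and worst group differs by more than max_gap."""
    values = group_accuracy.values()
    return (max(values) - min(values)) > max_gap

# Toy example with made-up data.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
truth  = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]

by_group = per_group_accuracy(preds, truth, groups)
print(by_group)
print("Needs review:", flag_gaps(by_group))
```

A check like this would sit alongside the data governance and quality-control processes he mentions, rather than replace them.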

Addressing some of these key challenges meant that the human component could not be removed completely, he said. Human law enforcement officers, for example, still would have to make the final call on whether the AI system had identified a person accurately.

"It isn't that AI can't help us, but that we can't outsource it," Hardoon said.

In other areas where results could be quantified, though, such as identifying ways to reduce carbon emissions and waste or improve efficiencies, AI could be applied more widely.

Metaverse to drive need for whole new ecosystem

AI's role in driving the future of the metaverse also presents many market opportunities.

AI would be essential in creating different digital content for the metaverse, including avatars and landscapes, Sun said. 

Noting that AI was generating digital assets that exhibited creativity, he added: "There's now a whole empty space in the metaverse and people want something that's different from the real [physical] world. AI can tap this opportunity, working alongside humans to fill the need."

He believed the emergence of technologies such as AI and smartphones would help metaverses thrive where Second Life, a virtual world platform, had previously failed. 

"With Second Life before, we didn't have the ability to reconstruct objects in 3D and smartphones weren't as widely used. Now, even phones could reconstruct a building or town if you took enough images," he said.

Sun added that AI provided a way to populate the metaverse more efficiently, while other technologies, such as blockchain, would enable users to bring digital assets from one metaverse to another.

Rules and governance also would need to exist in the metaverse, said Hardoon. 

Humans would need to interact within the metaverse and this brought with it risks due to the anonymity, he said. This was where accountability, transparency, and governance would be essential.

In shifting to the metaverse, real-world structures such as laws and enforcement would need to be established there, too. This would provide recourse, for example, if someone stole your land in the metaverse, he noted.

"What I think is important in this transition is finding the harmony," Hardoon said, adding that the emergence of metaverses presented significant opportunities. "It gives us an ability to visualise our imagination. We just need to mitigate the risks that may come with it."

Ultimately, any use of AI should involve rudimentary hygiene such as having the right data, governance and principles, and transparency, he said. "At the end of the day, it's the application of what's important to the individual, organisation, and society."


