
Getty Images supports Australia’s proposed AI guardrails


Getty Images urges tech companies to prioritize obtaining consent from creators and compensating them for content used to train AI models.

In the age of AI, where creativity can be both amplified and replaced, the question of ownership and compensation for the intellectual property that fuels these technologies has become increasingly pressing. Getty Images, a pioneer in the visual content industry, is at the forefront of this debate, calling for a balance between innovation and ethical considerations.

Natasha Gallance, Senior Director, Corporate Counsel APAC at Getty Images, said: “We commend the Australian Government on the introduction of voluntary guardrails, and its proposed set of mandatory guardrails for high-risk AI, which address some of our main concerns by pushing towards AI innovation that respects intellectual property rights and is designed to protect creators and sustain ongoing creation. Innovation should not have to come at the expense of creators. There are certainly paths that would allow the two to coexist, and elevate each other in a balanced way. At Getty Images we support the advancement of generative AI technology that is created responsibly, respects creators and their rights, protects users of such technologies, and ultimately sustains ongoing creation by obtaining consent from rights holders for training.”

“We believe AI can make tremendous contributions to business and society, but we need to be conscious about how we develop and deploy it. At Getty Images, we believe industry standards should seek to ensure transparency as to the makeup of all training sets used to create AI learning models; seek the consent of (and remunerate) intellectual property rights holders in training data where the models are being used commercially; require generative models to clearly and persistently identify outputs and interactions; allow businesses to collectively negotiate with model providers; and hold model providers accountable and liable, by incentivising them to address issues around misinformation and bias. Getty Images works with a number of leading innovators in the areas of artificial intelligence and machine learning to support the development of responsibly created generative models and content.”

“The notion that AI is inevitable can overshadow the need for ethical considerations. Tech companies have made the argument that it is economically impossible to accommodate licensing for all the content required to train functional AI models, but we have proven this is possible by developing business models that enable the creation of high-quality AI models while respecting creator IP. We strongly oppose the notion that training on copyrighted materials can be considered fair use or fair dealing.

“That decision should not be left to individual technology companies. On the contrary, where generative AI outputs compete with web-scraped training data, this can never be ‘fair’. While AI holds the potential to benefit humanity and enhance creativity, establishing industry guardrails is essential to mitigate risks. If left unchecked, we believe these technologies pose significant risks to society, free press, and creativity.”

Protecting Creators’ Rights 

The Australian Government has unveiled a proposal for mandatory guardrails to regulate high-risk AI applications. Developed in collaboration with an expert AI group, the proposal outlines measures such as human oversight and mechanisms to challenge AI decisions. Industry and Science Minister Ed Husic released a discussion paper outlining the government’s options for mandating guardrails for those developing and deploying high-risk AI in Australia. 

The minister emphasized the importance of ensuring public safety and trust in AI technology. The proposal adopts a risk-based approach, focusing on measures like testing, transparency, and accountability, aligning with international best practices. It includes key elements such as a definition of high-risk AI, ten proposed mandatory guardrails, and three regulatory options to implement these requirements. Minister Husic stated, “Australians are excited about the potential of AI, but they also want to know that there are safeguards in place to prevent misuse or negative consequences.”

