According to Bloomberg, Adobe used images generated by its competitor Midjourney to train its Firefly AI image generator, undercutting the company's claim that Firefly is "commercially safe" and ethically trained. Tom's Guide reports: Midjourney has never disclosed the source of its training data, but many suspect it was scraped from the internet without licenses. According to Adobe, only about 5% of the millions of images used to train Firefly were AI-generated, and all of them came from the Adobe Stock library, meaning they had passed a "rigorous moderation process."
When Adobe first launched Firefly, it offered business customers indemnification against copyright-infringement claims as a way to convince them the tool was safe to use. Adobe also marketed Firefly as a safer alternative to the likes of Midjourney and DALL-E, since all of its training data was licensed for model training. At the time, not every artist was enthusiastic; some felt coerced into letting the creative-software giant use their work. But the pitch to customers was clear: images created with Firefly carried no risk of copyright claims and were safe to use commercially.
Despite the revelation that some training images came from less reputable sources, Adobe maintains that Firefly's output is still safe to use commercially. A spokesperson told Bloomberg that every image submitted to Adobe Stock, including the small subset generated with AI, goes through a rigorous moderation process to ensure it does not include intellectual property, trademarks, recognizable characters or logos, or references to artists' names. The company appears to be taking somewhat more careful steps as it plans to build an AI video generator: it is reportedly offering to pay artists for video clips on a per-minute basis.