“Our role as online platforms is to label AI-generated content as best as we can,” Mosseri writes, but he acknowledges that these labels will miss “some content.” For that reason, platforms “also need to provide context around who is sharing content” so users can decide how much to trust that content.
Just as it’s good to remember that chatbots will confidently lie to you before trusting an AI-powered search engine, checking whether claims or images come from a reputable account can help you assess their veracity. Right now, Meta platforms don’t offer much of the context Mosseri wrote about today, though the company recently hinted at big upcoming changes to its content policies.
What Mosseri describes sounds closer to user-led moderation, like Community Notes on X and YouTube or Bluesky’s custom moderation filters. There’s no indication Meta plans to implement anything similar, though it has been taking other pages from Bluesky’s book.