Many sites rely on external RSS feeds to augment their content. Some of these sites use the text from other sites’ RSS feeds as their primary content source. This can be very successful for SEO, drawing many visitors to a collection of valuable content written by other people.
You may argue that consolidating RSS feed data on a single site does help people find information, but it also clutters up the ‘net.
For this reason, most major search engines have added intersite RSS content scans to their page rank algorithms. As the RSS data is indexed, it is scored and marked with an identifier. If the feed content is encountered on another site, the logic then analyzes that site for the volume of content drawn from other RSS feeds. The current setting is 33.3%, meaning that if one third of a site’s content comes from RSS feeds, the site is marked as “RSS sourced” and its page rank is reduced.
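A toy sketch of that threshold rule, as described above (purely illustrative — the function name and the word-count inputs are invented for this example):

```python
# Illustration of the described threshold rule: a site whose
# RSS-sourced content reaches one third of its total is flagged.

RSS_SOURCED_THRESHOLD = 1 / 3  # the "33.3%" setting

def classify_site(total_words: int, rss_sourced_words: int) -> str:
    """Mark a site 'RSS sourced' if at least one third of its content
    matches text already indexed from other sites' RSS feeds."""
    if total_words == 0:
        return "empty"
    ratio = rss_sourced_words / total_words
    return "RSS sourced" if ratio >= RSS_SOURCED_THRESHOLD else "original"

print(classify_site(900, 300))  # exactly one third -> "RSS sourced"
print(classify_site(900, 200))  # under the threshold -> "original"
```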
This approach makes it more likely that the primary content source will be delivered in the search engine results ahead of the derivatives and collections.
This is fiction. It might be a good idea, it might even be happening, but I’m posting it for fun.