You can avoid YouTube's weird algorithms and broken subscription system by subscribing to channels through RSS instead. You don't need an account, just a feed reader.

Put the channel's username in this address:

youtube.com/feeds/videos.xml?user=USERNAME

You can then add this RSS address to your feed reader app.

For example, OnePotChefShow's RSS feed is:

youtube.com/feeds/videos.xml?user=OnePotChefShow

You can also make RSS feeds from channel IDs (strings of letters and numbers):

youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID
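For anyone who wants to script this, here's a minimal sketch using Python and the third-party feedparser library; the username and channel ID below are just placeholders.

```python
# Minimal sketch: build a YouTube channel's RSS URL and list its recent videos.
# Requires the third-party "feedparser" package (pip install feedparser).
import feedparser

# Either form works; the username / channel ID here are placeholders.
feed_url = "https://www.youtube.com/feeds/videos.xml?user=OnePotChefShow"
# feed_url = "https://www.youtube.com/feeds/videos.xml?channel_id=UC..."

feed = feedparser.parse(feed_url)
for entry in feed.entries:
    print(entry.published, "-", entry.title)
    print("  ", entry.link)
```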

@switchingsocial
Since Facebook retired RSS feeds some time ago: any idea how to easily build RSS feeds from public FB pages?

@switchingsocial totally love RSS. It makes managing subscriptions veeery much easier and more flexible than YouTube's system.

@switchingsocial That's how I subscribe to channels with NextCloud News, and it works like a charm!

@switchingsocial petrolette.space/ does it automagically for you: Just enter the channel's URL & click "find feed" 🤖

@switchingsocial Perfect timing, currently trying to separate myself from Google and in the process of setting up RSS for YouTube. This helps a lot, thanks!

@switchingsocial Oh, that's a lesser-known fact. What would you think about a browser add-on just for that?

@switchingsocial I wrote a crude Python terminal app a while ago that polls RSS feeds for watched channels, and spits out a list of unwatched videos on demand.

If there is interest, I can make its GitLab repo public.

@switchingsocial you can also use #RSSbridge if you wanna avoid the tracking from YouTube (and for other websites that don't provide RSS feeds :).

@switchingsocial And if you want to subscribe to an RSS feed from the start, I suggest you use my new tool (in beta): rewind.website/

It's aimed at podcasts, but can work with just about any kind of feed. It can be self-hosted and is shared under the AGPL.

#RSS #BringBackRSS (but #Atom and #JSONFeed too)

@joachim Oh, neat. Now I have questions about Cast Rewinder. 1) Is the idea to trickle just a few episodes per week to your podcast client, like archivebinge.net does for webcomics? 2) Do you rely on the upstream podcast feeds having all the episodes in one feed document, or do you have some magic for finding older episodes? (Lately I've been advocating for publishers to adopt RFC5005, "Feed Paging and Archiving", to solve the latter issue.)

@jamey Ooh, I didn't know archivebinge.net, I could have used it a lot when I was still reading webcomics by the dozen :)

1/ Yes, pretty much. You choose the frequency, and you get a feed that mirrors the first few items of the original feed, updated at that frequency.
2/ I don't have magic, sadly; I only rely on the original feed. I could try to get `rel="previous"` links and add them, but no one respects standards (once I got a feed with localized <pubDate> 🤔)
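Purely as an illustration of the "trickle" idea described in 1/ above (this is not Cast Rewinder's actual code; the weekly frequency and the counting rule are assumptions), the core calculation could look like this:

```python
# Hypothetical sketch of the "trickle" idea: given a start date and a
# frequency, only the first N entries of the original feed are re-published,
# and N grows as time passes. Not Cast Rewinder's real implementation.
from datetime import date

def entries_to_release(start: date, today: date, per_week: int = 1) -> int:
    """How many of the feed's first entries should be visible by `today`."""
    if today < start:
        return 0
    weeks_elapsed = (today - start).days // 7
    return (weeks_elapsed + 1) * per_week

# Example: subscribed on 2019-01-01, one episode per week.
print(entries_to_release(date(2019, 1, 1), date(2019, 2, 1)))  # -> 5
```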

@joachim Cool!

If you make your thing consume RFC5005 pagination links, there are a handful of podcasts and webcomics it'd work with today, and I'm working on getting technology in place to make it easy for others to adopt the spec too. In particular I've written a WordPress plugin that should work for all WordPress-generated feeds, although I haven't been able to get anybody else to test or code-review it yet. I also have a Jekyll plugin and some other stuff.

@jamey Nice to know! I tried to play with the <link rel="prev"> links that are present in SoundCloud feeds, but they link to an RSS feed with no entries, so I quickly lost interest (as I didn't know any other podcast feed with these).

@joachim Here's one podcast generated by the only podcast publishing tool I've found so far that implements any part of RFC5005: feeds.metaebene.me/cre/m4a
That uses section 3 of the standard, "Paged Feeds".

I also built a tool for generating your own full-history feeds that uses section 4, "Archived Feeds".
fh.minilop.net/
It doesn't check if the URLs it generates are real, so you can generate an arbitrary amount of test data with it if you want.
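For anyone curious what consuming those archive links looks like on the client side, here's a rough sketch (using Python's feedparser; the traversal follows RFC 5005 Section 4's "prev-archive" relation, and the page cap is an arbitrary safety limit):

```python
# Rough sketch: collect a feed's full history by walking RFC 5005
# "prev-archive" links (Section 4, Archived Feeds) backwards from the
# subscription document. Uses the third-party "feedparser" package.
import feedparser

def full_history(subscription_url, max_pages=100):
    entries = []
    url = subscription_url
    for _ in range(max_pages):              # safety cap against loops
        doc = feedparser.parse(url)
        entries.extend(doc.entries)
        prev = [link for link in doc.feed.get("links", [])
                if link.get("rel") == "prev-archive"]
        if not prev:
            break                           # oldest archive document reached
        url = prev[0]["href"]
    return entries

# Usage: pass the subscription document's URL (placeholder shown here).
# print(len(full_history("https://example.org/feed.xml")))
```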

@joachim Also, I maintain archivebinge.net and comic-rocket.com so I have a bit of an advantage at knowing they exist. 😉 I came to the conclusion a while ago that Archive Binge would be better as a standalone tool that transformed full-history RSS feeds, instead of being tightly coupled to my comics webcrawler, but I haven't had a chance to build that, so I'm happy to see yours!

I had a plan for encoding all info in the feed URL to avoid needing user accounts; are you interested?

@jamey Please share :)

Right now my system works with some info in the feed URL (it's not encoded, because I think of URLs as the web's command line). I follow this pattern: domain.tld/<feed id>/<frequency>/<start date as YYYYMMDD>(<tz>)(/options)

(the parts in parentheses are optional)

Feeds are entered once and for all in a reader/app, so that's quite a limitation: no way to specify which entries were read, and all that.
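Just to illustrate that pattern (the regex, the option name, and the timezone format below are hypothetical; this is not Cast Rewinder's actual routing code), parsing such a path could look like:

```python
# Hypothetical sketch of parsing the URL pattern described above:
#   /<feed id>/<frequency>/<start date as YYYYMMDD>(<tz>)(/options)
# The regex, option name, and timezone format are assumptions for illustration.
import re
from datetime import datetime

PATTERN = re.compile(
    r"^/(?P<feed_id>\d+)"
    r"/(?P<frequency>[a-z]+)"
    r"/(?P<start>\d{8})(?P<tz>[+-]\d{4})?"
    r"(?:/(?P<options>.+))?$"
)

def parse_feed_path(path):
    match = PATTERN.match(path)
    if not match:
        raise ValueError("unrecognized feed path: " + path)
    parts = match.groupdict()
    parts["start"] = datetime.strptime(parts["start"], "%Y%m%d").date()
    return parts

# Example: feed 42, weekly, starting 2019-01-01, with an option appended.
print(parse_feed_path("/42/weekly/20190101/latest-first"))
```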

@joachim So feed ID refers to some database table, is that right? But all the user-specific details are in the URL? That's what I had in mind actually, except it isn't completely horrifying to outright embed the upstream feed URL in this URL, so long as it's at the end… That's what I did with fh.minilop.net, so it's entirely stateless and database-free, which was convenient for deployment. 😁

@jamey yep, the ID is the database ID, I build all my feeds from the database contents. Right now I'm having a headache, trying to decide how to deal with feeds that delete old content. (French public radio podcasts only stay online for one year, for example.)

@joachim Haha, yeah, that's exactly why I thought RSS was useless for webcomics for so long, since they rarely have anywhere near even a year of history in the feed. Now that I've discovered RFC5005 I'm hoping to get lots of people to adopt it and let people stop thinking about how to save entries after they disappear from the feed.

Being able to point to tools that will use the archive links if present will help, since nobody wants to be the first to implement a standard. *grumble*

@jamey Well, the problem with some podcasts is that they delete old episodes, along with the corresponding entries in the RSS feed.

@joachim Really? That's worse than I expected. And I can see how that would make your life particularly difficult, since I assume you're counting where the user is at from the first entry in the upstream feed. 😥 The only upside I have for that situation is that at least if they used RFC5005 correctly you'd be able to tell that the entry was actually gone, rather than just guessing whether it scrolled out of the feed.

@jamey yes, exactly! I'll check out the RFC in detail, so I can implement it. Building pagination tools should not be too hard, and I already have an option to start at a certain point, so I can generate feeds compatible with RFC5005 Section 3. I just have to make the time to work at it :)

@joachim For my goals, I really only care if you can consume paginated feeds; it doesn't matter to me whether you can produce them, and I'm not even sure whether it makes sense for your tool to do so. But don't let me stop you, either. 😉

If you have any questions about the spec, feel free to ask me! I've, uh, thought about it a lot 😅

Also I discovered this tool which you might find useful too: redbot.org/
It checks cache headers and also links to appropriate validators.

@jamey redbot.org doesn't work for me 🤔

Speaking of cache headers, I just implemented support for the "Last-Modified" and "ETag" HTTP headers, so I don't have to download & parse unchanged feeds, or serve new feeds when it isn't time. Is that what it's about?
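As a point of reference, a conditional fetch of that kind with the Python requests library might look like this (the header names are standard HTTP; the function and its defaults are just a sketch):

```python
# Minimal sketch of a conditional feed fetch: send back the ETag /
# Last-Modified values from the previous response, and skip downloading
# and parsing when the server answers "304 Not Modified".
import requests

def fetch_if_changed(url, etag=None, last_modified=None):
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code == 304:
        return None, etag, last_modified      # nothing new to parse
    return (resp.content,
            resp.headers.get("ETag"),
            resp.headers.get("Last-Modified"))
```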

To manage "dead" entries, I thought of checking the podcast episodes' MP3 links by making HTTP HEAD requests. After about 300 requests (in 10 minutes), my Internet connection went down for a couple of hours. I don't think that's related, though 😅
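A rough sketch of that kind of availability check, with a pause between requests to stay polite (the one-second delay is an arbitrary assumption, not what Cast Rewinder actually does):

```python
# Rough sketch: check whether episode media URLs still resolve, using HTTP
# HEAD requests with a delay between them. The one-second pause is an
# arbitrary choice to avoid hammering the remote server (or your own ISP).
import time
import requests

def find_dead_links(urls):
    dead = []
    for url in urls:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                dead.append(url)
        except requests.RequestException:
            dead.append(url)
        time.sleep(1)
    return dead
```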

@jamey I mean, 300 HTTP HEAD requests can't amount to a denial-of-service attack, can they?

(on the phone my ISP told me the problem was not about that, but still…)

@joachim Huh. One HEAD request every two seconds sounds reasonable to me… especially for something you can be pretty sure is served as a static file!

Redbot checks that some of the response headers you serve are spec-compliant, including testing conditional requests if you set ETag or Last-Modified, but also testing compression and other stuff. There's also a standalone version you can run locally: github.com/mnot/redbot

@switchingsocial

In #Gpodder I've never been able to follow #youtube channels, though following #users gave no problems.
With your instructions it works.
Thanks.

@switchingsocial Do you have any recommendations for RSS readers for Windows?

@conatus

My favourite is Mozilla Thunderbird:

thunderbird.net

It's advertised as an email app but it works very nicely as an RSS reader (and it's open source and well maintained).

@switchingsocial
Thanks for the recommendation, I had heard of it as an email app but I didn't realize it also did RSS.

@wraidd @switchingsocial
Woah I was not familiar with NextCloud but it looks awesome. I want to run my own stuff eventually, but learning/setting up costs a lot of time/energy. Thanks for the reply.

@conatus @switchingsocial Nextcloud is the actual best. It replaces Google's non-email services almost completely with the right plugins.

If you want to mess around with it as a try-before-you-buy, go sign up for disroot.org -- you'll get Nextcloud, and also email, Matrix, XMPP, Hubzilla, and some other stuff.
@conatus @switchingsocial And by "replace Google services" I mean I've heard of people using it instead of Google to keep their Android accounts synced. So. Super handy.

@wraidd @switchingsocial
Really interesting, it looks cool as heck. This is my first day on mastodon and I've already learned so much. Thanks ❤️

@switchingsocial If you have an account, you can also export an OPML file of all your subscriptions at the bottom of the old subscription manager page (youtube.com/subscription_manager).

@switchingsocial every time I view source on yourube to find the channel_id buried in some mess of javascript, I am again stunned at Google's commitment to web standards.

@joeyh I was gonna say, that is a fantastic typo.

@switchingsocial that's awesome, I was just trying to figure that out

@switchingsocial I've been wanting this for so long, and it already exists?

🎊

@TMo

Yes, apparently it existed all along :blobsurprised:

They don't advertise or link to it any more, but it is still there.

@switchingsocial@mastodon.at You can also use an Invidious account to subscribe to channels without a Google account and with RSS support!
https://invidio.us/

@switchingsocial I keep telling creators who complain about the algorithms about this, glad I'm not the only one! :)

@switchingsocial ... or use Invidious and avoid going through YouTube entirely?

@OTheB @switchingsocial An RSS reader is a better way to subscribe than going to a webpage for each channel.

But redirecting the viewing to Invidious is a good idea, when it works. I find that some videos require me to go to HookTube for them to work; haven't looked into why.