About This Gallery
This is a gallery of all the statistics I’ve gathered on the Monsterdon watch party.
How does it work?
The methodology is described in detail in the code. But basically it works like this:
Everyone is on some server. I’m on infosec.exchange. My program logs in as me and pulls all the #monsterdon posts it can find. Then it looks at the home server of every person who posted. If it finds mastodon.social, for example, it connects to mastodon.social and starts pulling down everything that server saw for #monsterdon. It reads all the posts it gets from there too, adds any newly discovered servers to the list, and tries each one in turn.
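The discovery loop above can be sketched roughly like this. This is my own illustration, not mastoscore’s actual code: the function names are mine, and it uses Mastodon’s public hashtag timeline endpoint (`/api/v1/timelines/tag/:tag`), which open servers answer without authentication.

```python
import json
import urllib.request

def server_of(acct: str, local_server: str) -> str:
    """Home server for an 'acct' string. Local accounts have no domain
    part: 'paco' vs 'taweret@timeloop.cafe'."""
    return acct.split("@", 1)[1] if "@" in acct else local_server

def fetch_tag(server: str, tag: str = "monsterdon", limit: int = 40) -> list:
    """Pull one page of public posts for a hashtag from one server."""
    url = f"https://{server}/api/v1/timelines/tag/{tag}?limit={limit}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def crawl(start_server: str, tag: str = "monsterdon") -> dict:
    """Crawl outward: every post reveals its author's home server, and
    each newly seen server gets queried in turn."""
    to_visit, seen_servers, posts = [start_server], set(), {}
    while to_visit:
        server = to_visit.pop()
        if server in seen_servers:
            continue
        seen_servers.add(server)
        try:
            page = fetch_tag(server, tag)
        except OSError:
            continue  # server is down, defederated, or refuses anonymous requests
        for post in page:
            posts[post["uri"]] = post  # the uri is globally unique
            home = server_of(post["account"]["acct"], server)
            if home not in seen_servers:
                to_visit.append(home)
    return posts
```

A real crawler would also follow pagination and restrict posts to the event’s time window; this sketch only shows the server-discovery idea.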
For example, there might be someone on jorts.horse whom I don’t follow and whose posts never make it to my server. But when I connect directly to jorts.horse, it gives me its posts, including the ones from that person I don’t normally see. There’s also a freshness problem. When my server receives a post from a different server during the event on Sunday night, it captures a snapshot in time: this many boosts and favourites at that moment. It might never fetch an updated copy, so since the post isn’t from my server, my server’s numbers can be out of date. When I connect to jorts.horse on Monday, I get the latest, authoritative version of that post from the server that owns it.
Finally, there are some servers (including Taweret’s timeloop.cafe) which don’t federate with infosec.exchange (for reasons lost to history) and which also don’t offer a public API. I can’t ask them for the definitive posts because they won’t answer an anonymous request. (That’s fine. Some folks don’t want to be too public, and that’s ok. I send them one query every Monday, they say “not allowed,” and I leave them alone.) So I might get 20 or 30 copies of a post from @Taweret as seen by various other servers. I keep the version that has the highest numbers, and I throw away all the other copies.
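The keep-the-highest-numbers deduplication might look like this. Again a sketch with names of my own choosing, not the real code; the field names (`uri`, `reblogs_count`, `favourites_count`) are standard Mastodon Status fields.

```python
def best_copy(copies: list) -> dict:
    """Among snapshots of the same post, keep the one with the most
    boosts + favourites: the freshest view available when the origin
    server won't answer directly."""
    return max(copies, key=lambda p: p["reblogs_count"] + p["favourites_count"])

def dedupe(posts: list) -> dict:
    """Group snapshots by the post's globally unique URI and keep the
    best copy of each."""
    by_uri = {}
    for p in posts:
        by_uri.setdefault(p["uri"], []).append(p)
    return {uri: best_copy(copies) for uri, copies in by_uri.items()}
```

So 20 or 30 sightings of one @Taweret post collapse to the single copy with the highest counts.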
How much data is it?
A typical Monsterdon is anywhere from 2100 to 3300 posts. I capture starting one hour before the event and running to one hour after. About 55 servers typically answer the anonymous requests, and I retain about 15 MB of JSON for the whole event. It’s not unusual for me to pull down 80,000 posts just to get the 2100 unique ones, so I get TONS of duplicates. Pulling everything down takes anywhere from 35 minutes to 2 hours.
Useful Links
- @taweret hosts the #Monsterdonpoll every week.
- Jonny’s Monsterdon Wiki has some background and history.
- Monsterdon Replay will let you watch the movie on your own time, and see the posts play in real time as it progresses.
- Monsterdon Bingo Cards
- Monsterdon T-Shirts featuring mostly films from season one.
About the Software
You can see the code that produces these graphs and graphics at mastoscore.
About Paco
I’m a computer geek, cigar smoker, information security person, and bass guitar player. If you want to reach me, I’m on Mastodon at @paco@infosec.exchange.
Stuff I do:
- My blog. Whatever stuff motivates me to write.
- John Mastodon. My little tribute to John Mastodon.
- nova.org. A micro ISP that I run in Northern Virginia, USA.
- my peertube. A few random videos and things I have posted.
- www.carecure.net. CareCure: a web forum that provides information and community for people with spinal cord injuries and traumatic brain injuries.