I assumed so, because you didn’t complain about Twitch not deleting them, but instead tried to solve this “issue” yourself.
If you stream and chat on a site that is available to everyone, then there is no privacy. If anyone can sign up to see “private” content, the privacy factor is effectively zero.
I think the best solution would be to just hide everyone that is not actively chatting from the chatterslist #DontCallTheLurkers
I highly doubt people who nefariously join chats would be applying for known/verified bot status.
I do think both anonymous and unauthorized reading of messages should go, but the steps to get authorization should be as easy as possible with the creator in control of who can have or delegate said authorization. But the road to get there…
You can require a phone number, email address, etc. for someone to actually post in chat if the concern is bot accounts posting spam. The chatters endpoint provides user IDs and names for who is posting in chat, and there are various options you could set up to automatically ban accounts you deem problematic - at least temporarily - e.g. a very new account, no PFP, no bio, a username ending in a number, multiple follows from very similar usernames, etc. Obviously it won’t catch them all, but we’ve been using something like that to determine whether follows should get on-stream alerts for the past 18 months, and so far it’s been very successful.
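A minimal sketch of that kind of heuristic filter, assuming you have already fetched account details from somewhere (the field names and thresholds here are illustrative placeholders, not the actual Helix response shape):

```python
from datetime import datetime, timedelta, timezone

def looks_suspicious(account, now=None):
    """Return the list of heuristics an account trips; fields are illustrative."""
    now = now or datetime.now(timezone.utc)
    flags = []
    if now - account["created_at"] < timedelta(days=7):
        flags.append("very new account")
    if not account.get("profile_image_set"):
        flags.append("no PFP")
    if not account.get("bio"):
        flags.append("no bio")
    if account["login"][-1].isdigit():
        flags.append("username ends in a number")
    return flags

# Made-up example account that trips every heuristic.
acct = {
    "login": "fakeviewer1234",
    "created_at": datetime.now(timezone.utc) - timedelta(days=2),
    "profile_image_set": False,
    "bio": "",
}
print(looks_suspicious(acct))
```

In practice you would feed the returned flags into whatever policy you prefer - a temporary ban above some number of flags, or just suppressing on-stream alerts as described above.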
If you find a website that you feel is reliable at posting lists of suspect accounts, it’s not too difficult (depending on how their page code is organized) to harvest those lists and automate banning those wholesale.
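Harvesting such a list depends entirely on how the site structures its page, but as a rough sketch (the HTML snippet below is invented for illustration; a real site’s markup will differ):

```python
import re

# Hypothetical page snippet; adapt the pattern to the real site's markup.
page_html = """
<ul class="botlist">
  <li>community_18k_members</li>
  <li>discord_for_streamers__</li>
</ul>
"""

def extract_usernames(html):
    # Twitch logins are 4-25 characters: letters, digits, underscores.
    return re.findall(r"<li>([a-zA-Z0-9_]{4,25})</li>", html)

names = extract_usernames(page_html)
print(names)  # feed these into your banning workflow of choice
```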
Personally I don’t really understand what the issue is with lurker bots, they just sit there. We’ve had far more trouble with actual humans over the past 3 years.
No, but Twitch could encourage people to register their bots and trivially make certain functionality available only to registered bots (with consistency being key—if it’s available on the website, it’s available). That might mitigate some of the purported harm, not that most people are alleging any harm aside from bots being there without permission.
So, Twitch thinks that all of these accounts are OK and do not violate the ToS clause on “Distribute unauthorized advertisements”:
community_18k_members
discordstreamercommunity
network_streamer_discord
discord_for_streamez
discord_for_streamers__
paradise_for_streamers
streamers_growth
etc.
And these accounts honestly gained dozens of followers in a few days:
therussianmommy
verylonely_liza
o0followme0o
mariah_anderson_usa
I have always banned such accounts based on the number of channels where they are online, because my channel has a !pickrandomviewer command and I don’t want any of my viewers landing on a suspicious account’s profile. Now I have to block accounts based only on my own experience.
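The counting approach described above can be sketched like this, assuming you have collected chatters lists for a set of channels by some means (the data and threshold here are made up):

```python
from collections import Counter

# Made-up chatters lists for three channels.
chatters_by_channel = {
    "channel_a": ["alice", "suspect_bot", "bob"],
    "channel_b": ["carol", "suspect_bot"],
    "channel_c": ["suspect_bot", "dave"],
}

# Tally how many channels each login is present in simultaneously.
presence = Counter()
for chatters in chatters_by_channel.values():
    presence.update(chatters)

THRESHOLD = 3  # present in this many channels at once looks bot-like
suspects = [login for login, count in presence.items() if count >= THRESHOLD]
print(suspects)
```

The threshold would need tuning in practice; real viewers do sit in a handful of chats at once.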
Removing the ability for sites to get a list of chat users, while anyone - even without logging in - can still view a list of users through the Twitch interface, is a reckless decision that does not increase security on Twitch, but deprives me of the ability to take care of my viewers in cases where Twitch does nothing.
Restricting API users compared to what is available to the same user through the Twitch interface - what’s the point? Why does my bot, which is a channel moderator, need a streamer token in order to listen to rewards? A user without a Twitch account can see rewards, but my bot, for which I filled out the bot registration form, can’t? In my opinion there is absolutely no logic in the latest API updates.
If you think sitting in chat and taking no action whatsoever constitutes “unauthorized advertisements”, then explain that reasoning when you report the users you believe are violating that part of the ToS.
It’s generally discouraged to call out lurkers, so perhaps one solution would be to have your command only pick from active users in chat. Then any number of these bots won’t be included in your random chatter command to begin with, without any intervention from you against them.
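One way to sketch that, assuming your bot records a timestamp whenever someone sends a message (the names and the 10-minute window here are illustrative choices, not anything Twitch prescribes):

```python
import random
import time

ACTIVE_WINDOW = 10 * 60   # only consider users who chatted in the last 10 minutes
last_message_at = {}      # login -> unix timestamp of their most recent message

def on_message(login, now=None):
    """Call this from your chat message handler."""
    last_message_at[login] = now if now is not None else time.time()

def pick_random_active(now=None):
    """Pick a random user who has actually chatted recently; lurkers never qualify."""
    now = now if now is not None else time.time()
    active = [u for u, ts in last_message_at.items() if now - ts <= ACTIVE_WINDOW]
    return random.choice(active) if active else None

# Simulate: one recent chatter, one who went quiet; a lurking bot never appears at all.
on_message("regular_viewer", now=1000.0)
on_message("gone_quiet", now=100.0)
print(pick_random_active(now=1200.0))
```

Since idle bots never send messages, they simply never enter the candidate pool.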
They’re not the same though. The frontend only has a partial list, which is why it states “Some active viewers and chatters in the community.” The frontend doesn’t return a complete list.
If you have use cases for why you should decide what apps integrate with a channel rather than the streamer themselves then you should submit a feature request on UserVoice https://twitch.uservoice.com/forums/310213-developers/. There are broadcasters that have asked for more control over their channel, privacy, and the apps that integrate with it and these changes do aid that, so if you have a use case otherwise then UserVoice would be the appropriate place to make your voice heard as it may be that Twitch isn’t aware of your usage.
Well, my bot has moderator status on this channel, so it seems to me that everything is fine in terms of whether the bot is welcome there or not.
These bots, whose sole purpose of existence is to advertise a discord server, are listed in “Some active viewers and chatters in the community.” on thousands of channels and Twitch sees no problem.
Apparently, wanting the same options (with limits, via the API) as an unregistered user is against Twitch policy.
I don’t quite understand why I have to create and maintain a system for obtaining a streamer token for those things that a user without an account on the site can receive. My bot works on two channels and it’s overkill. How is my bot with moderator status, verified/known status, with an account that is three years old, more dangerous for a streamer than an unregistered user?
Based on my experience, there is no way that reports or UserVoice will change anything. Twitch focuses on big channels, while small channels and small developers - and their requests, ideas, and reports - go unheard.
And if the streamer welcomes your bot performing actions that require additional permissions, the streamer can grant those permissions. An account being a mod in a channel doesn’t automatically mean you can delegate everything that account can do first party (i.e., logged in as that account on Twitch) to another third party.
While you may be that third party yourself and control the bot, if you connect it to some other third-party app you may be granting controls over that channel to someone the streamer knows nothing about, since the streamer has no idea what connections your bot account has to apps they may or may not want.
Twitch disagrees.
Again, please stop confusing first party and third party. Also, verified/known status doesn’t grant your bot any special status beyond the intended use, which is a higher rate limit. My personal account has verified bot status, as do many others; that doesn’t mean we should get more control over a broadcaster’s channel.
If the reports aren’t actionable it’s because they are not doing anything that is against Twitch’s ToS, Community Guidelines, or any other applicable legal agreement.
As for UserVoice, most of the feature requests are by either individual developers or small teams, as are the ones that have been successfully implemented. If you look through UserVoice filtered to completed requests, you’ll see that smaller devs are listened to and their suggestions acted on. The most frequently successful feature requests are those with a clear and well-explained use case, whereas simply being unwilling to have the broadcaster auth to your app and grant the permissions, or saying “but I can do x, y, z on the Twitch site itself!”, may be weighed less heavily against the downsides or the development/maintenance time required for the idea.
I don’t quite understand your idea. Maybe because I’m not a native speaker. Sorry for that. I would also like to thank you for your time.
Are you saying that a streamer gives bot moderator status for the sole purpose of banning users?
It confuses me that as a moderator through the site I can manage the rewards, but as a moderator through the API I cannot even listen to the rewards. It seems to me these are not additional privileges, but the absence of existing ones.
I agree that managing an account or anything like that needs additional, explicit permissions, and I don’t argue with this. But some areas are over-restricted in my opinion.
Just like with the list of viewers - now I cannot get the list of active viewers through Chatterino.
It seems to me that allowing 1 request per minute from a user to view active viewers on any channel, or the ability to listen to rewards on a channel where the account is a moderator, would not have any great development or operational cost for a twitch. And will not increase security risks for streamers.
Many reports of threats went unanswered. One guy got banned on two accounts on the channel, created a third, and got banned again; a report was sent with a list of his other accounts (differing by one or two characters in the nickname), yet soon he came back with a fourth account. Only the very first account was banned by Twitch. However, the phone number verification feature greatly reduced the number of such people.
We have deviated somewhat from the original topic. In general, I now understand the Twitch position that the API will not have parity with the functionality of the site. I also heard the suggestion that ideas can be addressed to UserVoice - sometimes it works, although it looks dead. Thanks again for your time, have a nice day.
I’m not sure why you are so positive on twitch reporting. It’s far from the land of perfection you describe. They don’t act on lots of reports I make that are clearly against ToS, and actionable.
As for UserVoice, I refuse to sign up for yet another website whose privacy policy I don’t agree with. I don’t understand why an Amazon company can’t have its own forums for suggestions, like these ones for devs. I’ve suggested things to @twitchsupport on Twitter, but they chose to ignore my suggestions, so I’ve stopped posting any more.
It also doesn’t help that the Twitch Suspicious User system flags some people, but doesn’t tell you which already-banned accounts they might be related to. It would be nice to know which new accounts relate to old accounts so mods know what type of behavior to watch for.
If I understood the concern correctly, you are saying that because the full chatters list of a channel is no longer publicly available, there can no longer be a maintained list of possible bots, determined by counting how many channels an account is in. You mentioned using this bot list to identify botnets/C&C bots.
However, using this list, or any list for that matter, doesn’t actually solve this issue as you may have thought. Any possible idea of trying to proactively ban malicious accounts before they have a chance to do actual harm becomes irrelevant because of how trivial it is to make them undetectable.
This is because your method relies on these accounts just sitting in channels, doing nothing. The problem is that there is no reason for a truly malicious party to be constantly connected to a channel to do harm, it simply needs to join on demand. The bot accounts that are always in the viewer list are in the best case just someone messing around with twitch for fun, seeing how many channels they can join, and in the worst case some random script kiddie that doesn’t have any sort of sophisticated plan, just to get a kick out of some one off event.
In fact, the account doesn’t even have to join the channel to send a message in it - it just has to connect to the IRC server and specify the channel it wants to send a message to. This latter approach is probably less common than the former, which is why I wanted to mention both.
Using a bot list for this use case just gives you a false sense of security since any truly malicious party that puts even the smallest amount of thought into trying to hide themselves will make any list completely irrelevant for proactive action.
Don’t get me wrong, I’m not denying that these bots are a problem; I just don’t see how a public chatters list provides a meaningful solution for them.
That’s silly. Counting channels a user is in gives no indication of whether it is a malicious bot, but your argument seems to be that any solution will be imperfect so don’t even try, which is absurd.
There are hard rate limits on JOIN/PART messages that limit the effectiveness of that with a single account. (Of course, there are bots that create new accounts on demand, which also defeats lists.)
Maybe people think it means “secure from the prying eyes of bots” (which it doesn’t), but hopefully no one thinks it would provide any real security, even if it worked 100%.
Let’s be honest: this is more about clearing names from the user list than any perceived security benefits. Only Twitch has the data to make tools capable of even remotely effectively combating truly malicious bots (and even then, they’d probably only be effective in the remotest sense). Everything we can do amounts to putting Band-Aids on a severed artery.
Where is the business argument for better tools to improve Twitch chat and user safety? Amazon is not in the business of doing work for nothing, and it should be pretty clear Twitch is under the same pressure.
This was my point. Bot lists made from this method don’t actually indicate this at all, and therefore using them does not actually prevent malicious bots.
I’m not saying that it’s not perfect, I’m saying that it’s nowhere close.
I was just giving the join-on-demand method as one example that would probably be someone’s first thought; refer to the second one for a better approach, since no JOINs are needed at all.