Planète Casio v4.3 © created by Neuronix and Muelsaco 2004 - 2026
Planète Casio is a community site not affiliated with Casio. Any reproduction of Planète Casio, even partial, is forbidden.
The programs and other publications on Planète Casio remain the property of their authors and may be subject to licenses or copyrights.
CASIO is a registered trademark of CASIO Computer Co., Ltd
Quote: Posted on 18/11/2025 13:34 | #
Well, for a start, going through the Web interface means you load at least some static resources that the bots couldn't care less about.
The legitimate use cases would be the AUR packages that hit the forge at build time, but if you build several at once (which is often the case), you hit several endpoints there too.
In short, given the characteristics of the DDoS, I don't think doing it the way you do is unreasonable.
On the other hand, that means you only block after the request has been made, so without blacklisting prefixes it won't be optimal.
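For illustration, prefix-level blacklisting could be sketched like this; the /24 granularity and the offender threshold are assumptions, not the actual setup:

```python
import ipaddress
from collections import Counter

def prefixes_to_ban(offending_ips, prefix_len=24, threshold=3):
    """Collapse individual offender IPs into prefixes, and ban any
    prefix containing at least `threshold` offending IPs."""
    nets = Counter(
        ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
        for ip in offending_ips
    )
    return [net for net, count in nets.items() if count >= threshold]

# Three offenders in the same /24 get the whole prefix banned.
ips = ["203.0.113.5", "203.0.113.9", "203.0.113.200", "198.51.100.1"]
print(prefixes_to_ban(ips))  # [IPv4Network('203.0.113.0/24')]
```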
Brave new world…
Quote: Posted on 18/11/2025 17:28 | #
Wouldn't adding a captcha solve the problem?
Albert Einstein
Quote: Posted on 18/11/2025 18:03 | #
I only ban on "suspicious" requests, namely ones fetching an ancient version of a specific file, things that match no usual access pattern.
That would break things for us. We clone over HTTPS, so if you put up a captcha, people without an account can no longer clone (which impacts e.g. GiteaPC in an "invisible" way). Same problem as with Anubis, basically.
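A minimal sketch of what such a "suspicious request" filter could look like, assuming Gitea-style raw-file URLs (the exact patterns and branch names here are illustrative, not the real rules):

```python
import re

# Raw-file request: /<owner>/<repo>/raw/(commit|branch)/<ref>/<path>
RAW_REQUEST = re.compile(r"^/[^/]+/[^/]+/raw/(commit|branch)/([^/]+)/")

# Assumption: normal traffic (users, giteapc) only fetches raw files
# on live branches, never at arbitrary pinned old commits.
NORMAL_BRANCHES = {"master", "main", "develop"}

def is_suspicious(path: str) -> bool:
    """Flag requests fetching a file at a pinned ancient commit,
    which matches no usual access pattern."""
    m = RAW_REQUEST.match(path)
    if m is None:
        return False  # not a raw-file request at all
    kind, ref = m.groups()
    if kind == "branch" and ref in NORMAL_BRANCHES:
        return False  # ordinary access pattern
    return True  # pinned commit or unusual ref: candidate for a ban

print(is_suspicious("/Lephenixnoir/gint/raw/branch/master/README.md"))  # False
```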
Quote: Posted on 19/11/2025 18:14 | #
But how do forges that use Anubis, like e.g. GNOME's, handle this?
Quote: Posted on 24/04/2026 11:13 | #
Well, I can't use giteapc to install gint. It says I got a 503 error. Can anyone check whether my IP has been banned, or whether the website has some problem? I was able to use it just yesterday and today it didn't work. Thanks for the help.
Quote: Posted on 24/04/2026 11:41 | #
Attached file
Hi, this almost always means the forge is overloaded by crawlers. It can get pretty rough; here, for instance, is the status from the last 24 hours:
We're trying to be creative in addressing this. Just so you know, every forge on the open Internet has this kind of issue; ours just isn't very powerful. The easiest thing you can do "in the moment" is to ask us to restart it. It seems to be fine right now, at least.
Quote: Posted on 25/04/2026 01:17 | #
Are bots loading down the server by cloning git repos? Or is all of the traffic and CPU load due to crawling the Forge?
“They call me the king of the spreadsheets, got 'em all printed out on my bedsheets.” — “Weird Al” Yankovic
Quote: Posted on 25/04/2026 09:22 | #
It's web activity.
Quote: Posted on 25/04/2026 23:35 | #
A good first step toward a solution would be to decouple giteapc from the Forge. The basic commands could stay untouched (although managed by a different non-Forge service on the backend), but the advanced commands would go through a different, authenticated API. That way, legacy giteapc clients would continue to work, except for advanced commands. Anyone using advanced commands would presumably be paying attention to PC notices and could upgrade giteapc. I'd also recommend a version check in future versions of giteapc, so that it can self-report that it's outdated and needs to be updated to continue to fully operate, maybe even with some kind of custom notice about what changed.
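The version self-check could be sketched like this; the metadata endpoint and its JSON format are invented for illustration, not an actual giteapc API:

```python
import json
import urllib.request

CLIENT_VERSION = (1, 6, 0)  # hypothetical client version

def upgrade_notice(meta: dict, client_version=CLIENT_VERSION):
    """Given server metadata like {"minimum_version": "2.0.0",
    "notice": "..."}, return the notice if this client is too old,
    or None if it is recent enough."""
    minimum = tuple(int(x) for x in meta["minimum_version"].split("."))
    if client_version < minimum:
        return meta.get("notice", "giteapc is outdated, please upgrade.")
    return None

def check_outdated(metadata_url: str):
    """Fetch the (hypothetical) metadata endpoint and run the check."""
    with urllib.request.urlopen(metadata_url) as resp:
        return upgrade_notice(json.load(resp))

# A 1.6.0 client against a 2.0.0 minimum gets the custom notice.
print(upgrade_notice({"minimum_version": "2.0.0", "notice": "API changed"}))  # API changed
```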
Once giteapc is decoupled, that makes it easier to lock down the Forge, because changes to the web side would no longer affect giteapc in any way. I'd recommend a simple non-interactive page for each repo that shows basic information: repo owner, repo name, clone URL, description, README, perhaps the license, and a Forge login link. Could still be nicely styled with CSS and such. These static pages would be generated on the backend periodically (perhaps on change). This would be enough information for someone without an account to understand what the repo offers and how to obtain it (via git). And because they're static, loading the pages wouldn't invoke any backend processing to display them (no CGI, git, databases, etc). These pages could easily be cached, since they're just static pages, if that's even necessary. There could be an autogenerated site map or such (also static) so that people could discover all the repos. As the web server would only be serving static pages, bot activity should have minimal impact on the web server.
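A minimal sketch of such a static page generator; the metadata fields, template, and sample values are assumptions about what the backend would provide, not real forge data:

```python
import html

# Template for one repo's static page; fields are assumed metadata.
PAGE_TEMPLATE = """<!DOCTYPE html>
<html><head><title>{owner}/{name}</title></head>
<body>
<h1>{owner}/{name}</h1>
<p>{description}</p>
<p>Clone: <code>git clone {clone_url}</code></p>
<p><a href="/login">Log in to the Forge</a> for the full interface.</p>
</body></html>
"""

def render_repo_page(repo: dict) -> str:
    """Render a repo's static page, escaping user-supplied fields so
    the generated HTML is safe with no backend processing at all."""
    return PAGE_TEMPLATE.format(
        owner=html.escape(repo["owner"]),
        name=html.escape(repo["name"]),
        description=html.escape(repo.get("description", "")),
        clone_url=html.escape(repo["clone_url"]),
    )

# Sample metadata (illustrative values, not real forge data).
page = render_repo_page({
    "owner": "Lephenixnoir",
    "name": "gint",
    "description": "Add-in development kernel",
    "clone_url": "https://example-forge.test/Lephenixnoir/gint.git",
})
```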
As for the Forge itself, when logged out, only the login pages would work. Otherwise, it would try to redirect the user to one of the static pages. So, that could put a little load on the server if the detection and redirect were suboptimal, but I'd assume that it'd still be minimal compared to the current load. But if logged in, existing links to the Forge would continue to work (even historic links).
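The logged-out redirect could be sketched as a small routing decision; session handling is reduced to a boolean and the /static/ URL scheme is an assumption:

```python
# Decide how each request is served: static pages directly, the full
# Forge for logged-in sessions and the login page, and a redirect to
# the static mirror for everyone else (including historic deep links).

def route(path: str, logged_in: bool) -> str:
    """Return which backend serves this request, or a redirect target."""
    if path.startswith("/static/"):
        return "static:" + path   # static pages, no backend work
    if logged_in or path == "/login":
        return "forge:" + path    # full Forge for sessions and login
    # Map /owner/repo/... down to the repo's static page.
    parts = [p for p in path.split("/") if p]
    if len(parts) >= 2:
        return f"redirect:/static/{parts[0]}/{parts[1]}.html"
    return "redirect:/static/sitemap.html"

print(route("/Lephenixnoir/gint/issues/42", logged_in=False))
# redirect:/static/Lephenixnoir/gint.html
```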
There'd be a little complication as multiple services would be involved in ultimately serving pages, but I think this can be managed. The static pages would need to have a different URL than the existing Forge pages, so that things can be handed off appropriately.
“They call me the king of the spreadsheets, got 'em all printed out on my bedsheets.” — “Weird Al” Yankovic