Category: internet

  • Why Can’t Websites Just Block All the Bots?

    There is considerable overlap between the set of smart bots and the set of dumb humans

    Websites and social networks can and do have mechanisms to limit bot activity, but if they make those mechanisms too demanding, they frustrate the platforms’ legitimate users: humans.

    We’ve all encountered captchas: those fuzzy image grids you’re presented with to “prove you’re human”, or simple math equations to solve for the same purpose.
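    To make the trade-off concrete, here is a minimal sketch of the math-equation style of check (hypothetical code, not any platform’s actual implementation). Note the irony it demonstrates: any challenge simple enough for every human is also trivial for a scripted bot to solve.

```python
import random

def make_math_captcha():
    """Generate a trivial arithmetic challenge and its expected answer."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"What is {a} + {b}?", a + b

def check_answer(expected, submitted):
    """Accept the response only if it parses to the expected sum."""
    try:
        return int(submitted) == expected
    except ValueError:
        return False

question, expected = make_math_captcha()
# A distracted human may mistype and fail; a "smart bot" can parse the
# question and answer it correctly every time -- the core dilemma above.
```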

    Think about animals getting into campsite dumpsters. It’s fairly easy to design a lock mechanism that wards off most animals, but if it’s too simple, some animals with the required level of intelligence and dexterity (raccoons, bears, etc.) will eventually figure out how to get the dumpster lid open. So they make them a bit more challenging.

    The moment the dumpster mechanisms become too complicated, a new problem arises: people who either can’t figure out the dumpsters, or are just too lazy to use the more complicated mechanisms, start to complain and leave their trash outside the dumpster.

    Now think about this scenario in an online context. The security is so intricate that a legitimate human user—maybe they are distracted, maybe they are technologically challenged, or maybe they just don’t have the patience (shrinking attention spans due to information overload by the same platforms)—cannot figure out the verification step. They are prevented from accessing the service they are supposed to use. Social networks lose what they thrive on: traffic/attention/engagement.

    This highlights the delicate balancing act faced by security developers: they are navigating a difficult landscape where the set of smart bots and the set of dumb humans intersect. The “smart bots” are programmed to overcome complex obstacles, while the “dumb humans” (not an insult; just shorthand for normal, busy, non-technical people) often stumble over even simple hurdles. If the defenses are set too high, legitimate users get locked out.

    If the verification requires too many steps, if the image puzzle is too ambiguous, or if the login process is too long, the user simply gives up. There is a very real limit to how many hoops a person is willing to jump through in order to verify they are human. When users reach that limit, they stop trying to use the proper channel (the complicated dumpster) and just leave their ‘trash’ (their account, their contribution, their business) outside, or worse, they leave the service entirely.

    The Numbers Game and the Business Impact

    The above analogy illustrates why major online platforms—especially social media—cannot afford to frustrate their users with overly complex security.

    Online platforms rely on sheer volume. They need active users, traffic, and engagement to succeed. Social media is a numbers game. The more complicated the verification process becomes, the higher the risk that users will get frustrated and abandon the site. Platforms don’t want to lose valuable users simply because the login process was too frustrating.

    If people leave their “trash outside the dumpster” (abandon the proper, secure login procedures) or just drive away from the neighborhood entirely (close the browser and go to a competitor), the platform loses business.

    Therefore, the platform faces a constant trade-off:

    1. Too simple: The smart bots win, flooding the site with spam, scams, and misinformation (even though, arguably, most of the major social networks already do very little to combat misinformation posted by actual humans).
    2. Too complicated: The legitimate humans quit, reducing the site’s user base and traffic.

    Another reason bots aren’t combated as effectively as they could be is that bots themselves are an engagement motivator.

    Why Platforms Need the Pests

    The complexity of the dumpster latch, the endless battle to keep out the “smart bots”, isn’t the only reason bots persist. There is a surprising, self-serving, and often unspoken truth: social media platforms don’t actually want to eliminate all the bots. Not all of them, anyway.

    If the trade-off between security and user frustration represents the technical side of the dilemma, the need for constant, massive activity is the business side. As we’ve established, social media (as a business) is a numbers game, and sometimes, bots are the easiest way to pad those numbers.

    It has been observed that platforms sometimes employ artificial activity to create a sense of scale and momentum. For example, it has been claimed that the sheer volume of activity on platforms like X (previously known as Twitter) is heavily skewed toward automated accounts, with estimates suggesting that 70-90% of the posts, likes, and shares are actually made by bots. Furthermore, some platforms might intentionally employ fake users (bots) to post and comment, simulating real human interactions to make the platform feel more alive (as has been noted regarding Mark Zuckerberg’s past announcements concerning adding fake users to Facebook).

    Crucially, bots are a necessary tool for social media “content creators” to thrive, which is exactly what the platforms want. The platforms rely on successful creators to generate the content that keeps real human users coming back. Bots facilitate this growth in several ways:

    • Growing Pages: Bots are used to generate likes, shares, and follow-backs to accelerate interaction. If you follow someone on TikTok and they follow you back almost immediately, it is often a bot set up by the account owner.
    • Scheduling and Volume: Bots help creators to schedule posts, allowing them to write a large amount of content in one hour that can be automatically posted throughout the entire week. This increases the total volume of content without requiring constant manual work from the creator.
    • Fake Interactions: Bots automatically like and comment on other creators’ posts to generate interactions. Since so much social media interaction is already performative, even between two real humans, whether a like is fake or real often doesn’t matter to the platform’s core metric: activity.
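    Of these, scheduling is the most mundane and the easiest to illustrate. The workflow is simply: write a batch of posts in one sitting, assign each a future timestamp, and let a loop publish whatever has come due. A minimal sketch (illustrative only; the functions and data are hypothetical, not any platform’s API):

```python
import datetime

def schedule_posts(posts, start, interval_hours=24):
    """Assign each pre-written post a future publication time."""
    return [
        (start + datetime.timedelta(hours=i * interval_hours), text)
        for i, text in enumerate(posts)
    ]

def due_posts(schedule, now):
    """Return the posts whose publication time has arrived."""
    return [text for when, text in schedule if when <= now]

# One hour of writing, a week of automated posting:
drafts = [f"Post #{n}" for n in range(1, 8)]
start = datetime.datetime(2024, 1, 1, 9, 0)
schedule = schedule_posts(drafts, start)

# Midweek, only the first three posts have come due.
midweek = datetime.datetime(2024, 1, 3, 12, 0)
# due_posts(schedule, midweek) -> ["Post #1", "Post #2", "Post #3"]
```

    A real scheduling bot would replace the last step with a timed job that calls the platform’s posting endpoint, but the bookkeeping is exactly this simple.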

    By allowing these non-malicious (or ‘growth-oriented’) bots to operate, the platforms ensure that the gears of the numbers game keep turning. They prioritize the appearance of a bustling, active neighborhood over the absolute sterility of a bot-free environment. If the bots that facilitate creator growth were eliminated, the content pipeline would shrink, and user engagement—the critical metric—would drop.

    Therefore, the problem isn’t just that the defenses against the smart bots are difficult to manage for the “dumb humans”; the deeper issue is that the very lifeblood of the platform sometimes requires the strategic use of automated activity to inflate engagement and maintain the illusion of exponential growth.

  • Algorithmic demagoguery: The tech-billionaires’ vision for a digital nation beyond democracy?

    The increasing influence of Silicon Valley on governance models represents a significant, and potentially extremely disruptive, trend in contemporary political thought.

    The symbiotic relationship between capital (especially Silicon Valley billionaires) and the political elite is no secret, but the speed with which the richest man in the world recently got the keys to American bureaucratic institutions has brought to the forefront growing concerns about tech leaders leveraging the current US administration to advance their own agendas, a notion that aligns with broader anxieties about the tech industry’s growing political power.

    These concerns are amplified by the emergence of novel governance concepts championed by prominent figures within the tech elite. Concepts like network states, cryptocurrency-based societies, and privately governed cities suggest a desire to fundamentally reimagine traditional models of governance, some of which are inherently, if not explicitly, anti-democratic.

    Billionaires such as Peter Thiel, Elon Musk, and Balaji Srinivasan are at the forefront of this movement, often motivated by a perceived decline of the American Empire and a belief that societal transformation, rather than reform, is necessary. Srinivasan, in particular, has articulated a detailed vision of “network states”: decentralized virtual communities that ultimately acquire physical territory and function as sovereign nations, governed along corporate lines. Even though such a vision might sound like it belongs in the realm of an improbable sci-fi cyberpunk future, given the current spectacle of American politics, it’s not beyond reason to treat these ideas as serious theoretical frameworks for an ambitious undertaking: upending the nature of democratic governance as we know it.

    Srinivasan’s “Network State” vision, while seemingly novel, echoes earlier, albeit more dystopian, concepts like Curtis Yarvin’s “Patchwork.” Yarvin, also known as Mencius Moldbug, proposed a system of small, corporate-run sovereign territories, or “patches,” prioritizing efficiency and control, potentially through technologies like biometric surveillance. While controversial, Yarvin’s ideas have demonstrably influenced people like Thiel.

    The practical application of these concepts can be seen in projects like Praxis, a Thiel-backed initiative aiming to establish a global corporate governance model utilizing cryptocurrency. Similar projects, such as Prospera in Honduras and Afropolitan in Africa, further illustrate this trend. While proponents often frame these ventures as promoting freedom and innovation, critics raise what I believe to be very legitimate concerns about the potential for corporate authoritarianism, citing the use of surveillance technologies, exclusionary practices, and the focus on land acquisition.

    The connections between these initiatives and key figures in the current American administration are surprisingly apparent. For example, current US vice president JD Vance has publicly engaged with Yarvin’s ideas and is associated with Thiel, suggesting a coordinated effort to reshape governance. Furthermore, Trump’s proposal for “Freedom Cities” on federal land, with its emphasis on innovation and progress, exhibits a striking resemblance to the Silicon Valley vision for privately governed urban centers.

    The growing influence of Big Tech on governance discourse, allowing previously fringe ideas to gain mainstream traction and political currency, raises crucial questions about the future of political systems. While some perceive these developments as a necessary response to outdated governance structures and an eroding trust in democratic institutions, others (myself included) express apprehension about a potential shift towards corporate-led authoritarianism. The fundamental question remains whether this represents a complete overhaul of the American political system, and if so, whether these new models of governance will ultimately prevail. More importantly: how will all of this influence political developments beyond the American Empire?