About

The current section consists of extracts from a future, to-be-published book written by Wizardry and Steamworks that works to fill the gap between the technological domain of computers and politics, a gap that, due to capital concentration, becomes ever smaller, such that it is imperative that people involved with politics, especially those involved in governance, are made aware of the emerging dangers and pitfalls, and made able to disambiguate misinformation pertaining to the domain of technology. The initiative takes place in a context reminiscent of the proverbial "tulip fever" of "green energy" back in the 70s, when "peak oil" was deemed to be a danger on the event horizon, but with amplified and even stronger significance when applied to computing, where the low cost, high spread and high impact of computers on society makes such bubbles segmented yet repeatable as technology progresses.

For now the section will explain basic concepts, with the hope of helping politicians understand technological concepts, with the technology, for the purpose of this book, being framed in a "political context" by insisting on the social and/or economic impact of the technology rather than "the technology itself". Here and there, the terminology might go into depth, but only for the sake of completeness; the focus should remain on the political impact of the technology being explained.

Understanding Artificial Intelligence for People involved with Politics

Origin

The current Artificial Intelligence (A.I.) boom is based on a long-existing and forgotten trope, namely the Eliza bot developed by Joseph Weizenbaum at MIT (published in 1966) that leveraged some psychological primitives in order to fool its users into believing that the chat bot was, in fact, human operated. Eliza kicked up quite a kerfuffle historically speaking, with lots of drama taking place world-wide as a response, with people swearing they were actually talking to a human being instead of a robot. Later iterations of Eliza, one example from retro-computing being "Racter" (Mindscape, 1986) for the Commodore Amiga, were, in addition to using psychological primitives, loaded with knowledge, very similar to what ChatGPT is now, in order to also add the element of offering "insightful responses" with knowledge that the chat interlocutor might not have known. Pure "Eliza", by contrast, is mostly just an elegant way of rotating sentences around in order for the chat interlocutor to think they are talking to a real person. None of this is new.

As a follow-up, if you like, one of the funniest events in post-modern human history is Deep Blue defeating Kasparov at chess in 1997, another drama that led to Kasparov threatening to sue the venue, going as far as claiming that he was, in fact, playing a human being that was somehow hidden within the computer.

In principle, both these events, and similar ones in general, hinge on a misunderstanding of what A.I. is, a misunderstanding that concerns engineering even more than computer science.

Computer Engineering & Science

As far as computer engineering is concerned, A.I. is conflated with a bunch of terms that it should not be conflated with. Similarly, computer science has some precise definitions of terms that also tend to get conflated with A.I.

Computer Science

In order to make the situation clear, the following holds true to the limit of current human knowledge, given that the issue has not been solved (not even by A.I.):

"given a set of inputs and a program, it is undecidable whether the program will ever terminate"

This is the definition of the "halting problem", with a lineage leading back to Leibniz, the mathematician, who in the 17th century thought to devise a machine that would automatically compute proofs for him. The formal question, "das Entscheidungsproblem", was posed by Hilbert in 1928 and proven unsolvable by Church and Turing in 1936, with Gödel's incompleteness theorems (1931) establishing closely related limits on what can be decided.

What is essential to retain is that a cognisant A.I. that would reliably perform a task with a desired output cannot be built until this fundamentally mathematical problem (not even necessarily pertaining to computers) is solved.

You can, however, build something that, depending on the set of inputs, might produce some output, yet it is not determinate that whatever that something outputs will also be desired or useful.

This is very critical because you do not want to entrust "A.I." to drive a spaceship that "might or might not" land on the moon, under the guise that the "A.I." might somehow know anything more than matching up and correlating patterns, which is closer to what one would call a probabilistic crap-shoot, not too far away from shooting someone out of a huge honking space cannon and hoping they get to the moon!

You could stop here; all of the rest is irrelevant and only filler, judging smaller cases and disambiguating the term "A.I." and what it is used for.

"Das Entscheidungsproblem"

Translated plainly, the term means "the problem of deciding", which is the fundamental reason why A.I. will never be able to "write software". In order to explain this, something simpler than English and more abstract is needed to understand the problem, so here are two examples that are very easy to relate to.

The Mathematica Program

"Mathematica" is some software developed by Wolfram that compared to standard "scientific calculators" can be given as input a formula and the software is designed to apply mathematical operations in order to reduce the input formula and solve the problem. For example, you could write an integral equation and "Mathematica" will apply all methodologies in order to expand and reduce the expression symbolically, without substituting numbers like "scientific calculators" would do in order to produce a numeric result.

However, just like in real life, it is completely possible for some polynomial to be crafted that cannot be reduced, or where attempting to reduce the polynomial makes it expand instead, which turns out to be very frustrating for the person solving the problem. That person will have to "empirically decide" when it is time to stop, backtrack to some point where a different path could have been chosen and then perhaps try again. Similarly, it is possible for polynomials to exist that cannot be reduced at all (of course, within the constraints of a context, for instance natural numbers instead of complex numbers).

What happens in those situations when "Mathematica" is solving or reducing an equation is that the software includes a timer that at some point will prompt the user with a soft error along the lines of: "spent … amount of time attempting to solve, but could not find a solution [convergence], should I stop or continue?", at which point the user has to decide whether the problem they have input to "Mathematica" is "just too tough to solve" or perhaps the machine has insufficient power to solve the problem.
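The "stop or continue?" prompt can be illustrated with a short sketch (Python is used here purely for illustration; the helper name and the time budgets are invented for the example and are not a real Mathematica interface). The program runs a task with a time budget, and when the budget is exhausted it can only report that it gave up, not that the task would never finish:

```python
import threading
import time

def attempt_with_budget(task, budget_seconds):
    """Run `task` in a worker thread; if it has not finished within the
    budget, report that no answer was found -- mirroring Mathematica's
    "should I stop or continue?" prompt."""
    result = {}

    def worker():
        result["value"] = task()

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(budget_seconds)
    if t.is_alive():
        # The task may halt a millisecond later, or never: we cannot know.
        return ("gave up", None)
    return ("solved", result["value"])

def easy():            # terminates quickly
    return sum(range(1000))

def stubborn():        # loops far beyond any reasonable budget
    while True:
        time.sleep(0.01)

print(attempt_with_budget(easy, 1.0))      # ('solved', 499500)
print(attempt_with_budget(stubborn, 0.2))  # ('gave up', None)
```

Note that "gave up" is an empirical decision, not a mathematical one: the timer answers nothing about whether the computation would eventually terminate.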

For that reason, A.I. cannot solve problems in general, and it cannot write software.

However, "modern programming" and in particularly for large programs, especially using Object-Oriented languages frequently need "templating" or creating stubs for an API, all of which are mostly very thin files without much content and just identifiers that "just have to exist" in order to match the overall aspect of the program (and also there for further development). Creating these files is very laborious and it is something that can be handled by code generation, which, depending on the sophistication of the algorithm, can just very well be an A.I. However, the idea that an A.I. will write code from scratch that would make sense, for non-trivial and not-known problems, is just ridiculous.

Other examples range from plain code generation like Intellisense in IDEs all the way up to Clippy, included in Microsoft Office back in the 90s: tools that are useful but are not sentient in any way, nor do they solve "the problem of deciding".

The Fixed-Point Combinator

The fixed-point combinator in lambda calculus is a special function that can be used to implement recursion. Consider a lambda term $g$ and the fixed-point combinator $Y$, with the following application of $Y$ to $g$ with two $\beta$-reductions and a substitution that take the expression to its $\beta$-normal form:

\begin{eqnarray*}
Yg &=& (\lambda f.(\lambda x.f(xx))(\lambda x.f(xx)))g \\
&\stackrel{\beta}{=}& (\lambda x.g(xx))(\lambda x.g(xx)) \\
&\stackrel{\beta}{=}& g((\lambda x.g(xx))(\lambda x.g(xx))) \\
&=& g(Yg)
\end{eqnarray*}

The problem is that, without stopping at the substitution in the last step, the application of the Y combinator to the lambda term $g$ would never halt:

\begin{eqnarray*}
Yg &=& g(Yg)=g(g(Yg))=g(g(...(Yg)...))
\end{eqnarray*}

and instead would have expanded forever.
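The same divergence can be observed in any Turing-complete language. As a sketch in Python (an eagerly-evaluated language, so the plain Y combinator overflows the stack in direct analogy to the unbounded expansion above), the eta-expanded variant, known as the Z combinator, delays the self-application so each step is only expanded on demand:

```python
# The plain Y combinator: evaluating x(x) eagerly reproduces the
# endless g(g(...(Yg)...)) expansion and blows the stack.
Y = lambda f: (lambda x: f(x(x)))(lambda x: f(x(x)))

diverged = False
try:
    Y(lambda self: lambda n: 1)
except RecursionError:
    diverged = True
print(diverged)  # True: the expansion never reaches a normal form

# The Z combinator wraps the self-application in a lambda, delaying it
# until it is actually needed.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Anonymous recursion: factorial defined without naming itself.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120
```

The contrast is the whole point: whether the expansion terminates is not something the machine decides for you; the programmer had to restructure the term so that it does.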

Using lambda calculus, which is a Turing-complete language, is perhaps the best way to demonstrate the effects of the halting problem, or the decidability problem, or "das Entscheidungsproblem" (posed by Hilbert in 1928 and proven unsolvable by Church and Turing in 1936). Note that this is not merely a hard problem awaiting a clever algorithm: it is provably undecidable, so a solution is not just unlikely but impossible within the accepted models of computation.

Racking a bazillion computers on top of each other and connecting a gorrilion networks together will simply not make the halting problem solvable.

Engineering

In computer engineering, whatever people refer to as "A.I." gets split into other notions such as:

  • algorithm,
  • expert system
  • A.I.

where "algorithm" covers most of what politicians refer to as "A.I.", examples include:

  • "lights turning on during night time", which is just a "heuristic" (to be understood as, "arbitrary" and not fixed) decision that is taken on a series of inputs, for instance, depending on, say, light sensors, computed time of day, etc,
  • "chess", given two perfect chess players, the outcome of chess will always be a stalemate (like tic-tac-toe on a smaller scale, not provenly-(or StarCraft but on a much larger scale)), chess is a deterministic / algorithmically determined game where given sufficient computational power to push beyond the event horizon, the outcome is known in advance and any opponent can be beaten or a stalemate can be drawn. Modernly, the computational power already exists and any player can be perfect.

Planes have had a fully fledged automatic pilot for decades, and the automatic pilot implements what is called an "expert system", which means that the algorithm, depending on a series of inputs, takes decisions, recommends them to the operator, and the operator can reject or approve them. Examples include:

  • "tesla cars", along the lines of "one more change to the algorithm and the car will drive perfectly, we swear!",
  • "planes",
  • "nuclear power plants"

Expert systems are mostly in place to solve very difficult tasks, but they explicitly rely on a human operator to make the final call, because it is arguably unethical for a robot to take the final decision given the possibly large-scale impact of the decision.

In a Political Context

The statement "A.I. will put people out of jobs" has been true for about a century, iff. "A.I." refers to algorithms like menial jobs, for instance, carrying stuff around, issuing tickets, selling Pepsi, etc. However, remember that "A.I." frequently only refers to algorithms when it is mentioned in the media, never to expert systems and/or "A.I.".

The Big Boon

One thing that was irrevocably provided by "A.I." and, in fact, "machine learning", is the ability to generate work, whether scientific or otherwise. For example, there is a whole class of "applying machine learning to subject X" scientific publications that do nothing more than use an algorithm in order to check whether it is able to solve a problem, and if not, how much of the problem it can solve. There is an indefinite number of publications that you can just bulk-generate by using "machine learning" and a hyped or quirky application thereof, preferably cross-domain, to add to the coolness factor of the publication.

Another "boon" or applicability is within art where precise and pre-determined outcomes are not necessary. It feels more like a throw of dice.

The Need for A.I. Regulations

… is a solution to a problem that does not yet exist.

Conspiratorially speaking, it might just be a way for techno-oligarchs to pretend that "A.I. must be slowed down" in order to justify why they have not yet attained the "God"-levels of coolness they were promising, having hit the halting problem very early on, after it was explained to them that none of the "transhumanist" mumbo-jumbo they have been using to receive funds will ever come to be! In other words, a way to delay the popping of the "A.I." bubble, where investors will figure out that "A.I." is not the second coming of Christ, as the techno-oligarchs portray it to be, and will be desperate to recuperate their money. As for any market: buy early, but also be sure to sell early enough, before it all becomes too mundane!

On the other hand, perhaps Tesla "autopilot" cars should be regulated: applying whatever program was pre-trained in Silicon Valley to a … less democratic place with unmarked roads, wild bears roaming through the streets, outright misleading road signs (points left to an exit, leads into a mud wall), the occasional window-washing rapist and roadblocks that should not be there, would be a very funny experience, but only for those standing outside the car and at a safe distance.

More than likely, yet completely unrelated to "A.I." (due to A.I. arriving much later than expert systems or algorithms), there have been international discussions and draft treaties at the U.N. level aimed at preventing the usage of autonomous robots in warfare due to ethical concerns, akin to the concerns that made people design "expert systems" instead of fully autonomous systems.

A clear modern-day exemplification thereof is the usage of CAPTCHA prompts by Cloudflare and Google, which have driven a whole generation and more crazy with solving silly quizzes that, for an informed observer, are only questionably solvable exclusively by humans (in fact, there are whole sets of tools out there to bypass CAPTCHAs automatically with various techniques, i.e. listening to the voice recording instead and interpreting the voice to read out the letters or numbers, using A.I.!). The losers in this case are, of course, the people that the owner of the website might not even know they are blocking, with the owner's decision to use Cloudflare being purely economical: cheap, but without enough care, or too trusting, such that the owner starts to lose business without knowing why. Just like other people that delegate responsibility to algorithms, users of Cloudflare that would have an alternative available evidently can afford the losses of not caring. A very slow deflation of the bubble that is A.I., if you will.

Another concern that is uttered very frequently by politicians is the usage of chat bots ("A.I.") to provide customer support. However, the disgruntlement is mostly due to very vocal people detecting the usage of a chat bot and then resenting the service. In reality, "chat bots" for customer support are very closely related to automated answering machines, where, go figure, probably more than half of the requests on behalf of customers can be solved automatically without even needing human assistance. While most people consider themselves "technologically apt", the reality is that most humans tend to overlook the obvious, such as plugging the machine into the wall socket, pressing the button to turn the machine on, and so forth, all of which can be solved by a simple FAQ, or by using triage as per the answering machine in order to speed up the more complex problems that customers might have. Unfortunately, while the chat bot is visible, the reality that you might have a very complex issue and are waiting in a queue behind a bunch of people that did not read the manual, when, on the contrary, you are a long-term user, is not that apparent. A chat bot could touch on "related" topics that might make the user think, and it might prevent a human operator from having to intervene.

Machine learning, more precisely, is good at training an algorithm against a fixed set of inputs such that any input that falls within some entropy range of similarity and variation will be labeled similarly by the algorithm. This leads to an observation-frame confusion where the operator is tempted to constantly adjust or train the algorithm to match even more inputs that were not matched, up to the point of biasing the algorithm such that it starts to match completely unrelated things. Train the algorithm on too many lemons and it will match an orange; and when it is not fruit but real people, what will be the plan when those people ask for justice for being mislabeled?
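The lemon-and-orange drift can be sketched with a toy single-feature "classifier" (the hue values and the helper names below are invented for illustration; a real model has many features, but the failure mode is the same). The "model" is just a mean-plus-tolerance band learned from the training set, and widening the tolerance to catch more lemons eventually catches the orange too:

```python
# Fruits represented by a single hue value in degrees (made-up numbers).
lemons_hue = [55, 57, 58, 60, 62]   # yellows seen during training
orange_hue = 39                      # an orange, never in the training set

def train(samples, tolerance):
    """Learn a band around the mean hue; anything inside the band
    is labeled a 'lemon'."""
    center = sum(samples) / len(samples)
    return lambda hue: abs(hue - center) <= tolerance

is_lemon_strict = train(lemons_hue, tolerance=8)
is_lemon_sloppy = train(lemons_hue, tolerance=25)  # widened to "match more"

print(is_lemon_strict(orange_hue))  # False: the orange is rejected
print(is_lemon_sloppy(orange_hue))  # True: widened until unrelated things match
```

The operator never sees a single step where the model "broke"; each widening looked like an improvement on the inputs at hand.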

Another problem that "A.I." presents is traceability, namely the ability to determine either the inputs or the steps of the algorithm that generated the outputs, which, as a consequence of the halting problem, makes "A.I." even less useful. In many cases "here, I found the solution!" does not fly when it must be known how that solution has been found. In such cases, anything before the output could just as well be replaced with a random shuffle, conditioned or not.

Supercomputing (Computational Capabilities)

One of the largest misunderstandings of computing, particularly within the crowd that believes that the universe is deterministic, is that every problem can be solved if you throw enough computational power at it.

This has led to the creation of giant computer farms that are fed difficult problems, with a lot of these farms churning on problems numerically in an attempt to solve them. As an aside, in terms of computer security, some algorithm creators in the past hinged their ciphers on the idea that computers with the computational power that is now available would never exist, such that there are some merits to computational power; however, the assumption was naive, and modern cryptography is instead based on problems that cannot feasibly be reversed by an adversary (the hardness of the discrete logarithm underlying elliptic-curve cryptography, quantum cryptography, etc.), such that the mistake would not be made again.

Even though the former sections have already demolished the idea that the problems of humanity hinge on a lack of computational power, there is an additional, mathematically provable observation that tends to escape even the most famous computing or A.I. pioneers making claims about computing and/or A.I., namely the matter of the complexity of algorithms.

That is, it is mathematically provable that, for sufficiently large inputs, an algorithm that is known to terminate in $O(n^{x})$ time running on computational power $y$ (or on $y$ clustered machines with partitioned tasks) will terminate much faster than another algorithm that is known to terminate but takes $O(n^{x+p})$ time, even when the latter is given $y+q$ computational power (or $y+q$ clustered machines).

A reduction of that formulation in simpler terms is the observation that bubblesort, with its $O(n^{2})$ time complexity, running on a super-computer will, given a sufficiently large dataset of numbers, finish much slower than an $O(n \log n)$ sorting algorithm such as mergesort running on a 386 PC from the 90s.
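The point can be made concrete by counting comparisons rather than wall-clock time, which removes the hardware from the equation entirely (the counting helpers below are our own illustration). Even a modest input size shows the quadratic algorithm doing orders of magnitude more work, a gap that extra hardware cannot close as $n$ grows:

```python
import random

def bubble_sort_comparisons(data):
    """Bubble sort, returning the sorted list and the comparison count."""
    a, comparisons = list(data), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

def merge_sort_comparisons(data):
    """Merge sort, returning the sorted list and the comparison count."""
    if len(data) <= 1:
        return list(data), 0
    mid = len(data) // 2
    left, cl = merge_sort_comparisons(data[:mid])
    right, cr = merge_sort_comparisons(data[mid:])
    merged, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, comparisons

random.seed(0)
data = [random.randrange(10**6) for _ in range(2048)]
sorted_bubble, slow = bubble_sort_comparisons(data)
sorted_merge, fast = merge_sort_comparisons(data)
print(slow, fast)  # roughly n^2/2 versus n*log2(n) comparisons
```

For $n = 2048$ the bubble sort performs $n(n-1)/2$ comparisons, about a hundred times more than the merge sort, and the ratio keeps widening with $n$.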

The former observation is absolutely trivial, or should be, to any programmer, such that any discussion on how any problem can be solved with more power is just null and void, even to people that are not computer scientists but just programmers.

Software

There are various types of software that are … associated with A.I., but most of these just use "machine learning", which leads to a wide range of false positives and hence defeats the purpose in most cases:

  • tensorflow is a machine learning library that can be used to train an "A.I." / algorithm to identify key aspects within an image and attribute labels to the image depending on what was detected. If one uses the default pre-trained package, the results are obviously very bad. If one trains the algorithm using specific inputs, then the "A.I." will be trained to identify those and only those inputs, including their variations - which is a bit of a tautology when you think about it, a tautology that will just fail when a key characteristic is removed from the inputs. As a short example, you can train tensorflow to identify police cars based on the premise that police cars are white and blue, but if the police turn up in an unmarked car, then the whole algorithm fails, hence why it should never be used for anything critical.

  • OpenCV is a very old library that is capable of doing very many things with and to images.

Frequently Asked Questions

  • What is Artificial General Intelligence (AGI) / Artificial Superintelligence (A.I. that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks)?

A marketing term like "information super highway" to describe something as plain as the Internet.

  • Will A.I. ever be sentient?

We do not know what "sentience" means nor what it implies, but we like to think of ourselves as "sentient". A Tamagotchi from the 90s had people raise it like a pet when there was no such talk about A.I. Was the Tamagotchi sentient?

  • What is A.I.?

Initially, research into neural networks and machine learning. Now, it is an umbrella term for anything starting from a (linear abstract machine) algorithm and up to aliens, while going through a lot of promises that will never materialize because there are fundamental problems that have not been solved. In terms of marketing, and given that this is the politics section, A.I. is a bubble created by overhyped science, similar to the environmentalist boom in the 70s. It is a blessing for side-researchers that can churn out stacks of papers just by applying established algorithms in different contexts and reporting on their performance, thereby expanding their publication lists (similar to physics papers with hundreds of authors where it is unclear who did what).

  • Is A.I. dangerous?

Very. Just like a car.

  • Are any state and/or legal regulations needed to prevent the use or misuse of A.I.?

A.I. simply does not have that large of an impact (even if something as trivial as an "algorithm" or "robot" is now also filed under A.I.). Just like for crypto-coins, there might be problems with fraud or scams using A.I., second-hand damages created by "malfunctioning" robots, "misinformation" (if you believe in that sort of thing), yet the serious cases would be processed as scams and/or fraud, along with some security issues regarding existing software and hardware that base their security on weak assumptions and might be defeated by A.I. (CAPTCHAs, etc.), and so on.

In any case, the doom-mongering is just for effect - it is not nuclear radiation that would render the earth unusable.

  • Why is everyone talking about A.I.?

Because the crypto-coin talk fell out of fashion. More than likely there are large interested parties that would like to keep the bubble growing. Even for scientists it is a way to justify their work and to produce large numbers of publications with very little effort.

  • What sort of problems does supercomputing solve?

Large-scale problems where millions of datapoints have to be tracked have large computing needs. However, it is rarely the case that supercomputing solves any fundamental problem. The discovery of the Higgs boson at C.E.R.N. was a practical experiment to demonstrate / reveal in reality the existence of a theoretical particle, yet the existence of the particle had already been predicted theoretically… by Higgs, in 1964.

The Folly of Virtual Private Networks (VPNs) and The Suppression of Real Anonymity

Virtual Private Networks (VPNs) were initially devised in order to establish a virtual network on top of a physical network between machines distributed over long distances or different networks (for example, on top of multiple Internet Service Providers, or ISPs).

Typically, the topology for a VPN is a star-shaped network, with multiple clients connecting to one central computer that acts as a mediator, hub or connector between all the machines that connect to it. Given networking principles, it is possible for client machines to route their traffic through the central computer that all machines connect to, such that the Internet traffic will flow through the central computer.

Additionally, the connection between the individual machines / clients and the central computer / gateway is typically encrypted, such that anyone between the individual clients and the gateway only sees ciphertext.

A connection to the Internet through a VPN gateway follows the path: client → ISP → VPN gateway → Internet. Taking encryption into account, the ISP is effectively bypassed in terms of data visibility, because the traffic between the client and the gateway is encrypted: client → (encrypted tunnel through the ISP) → VPN gateway → Internet.

The sketch scales to any number of clients that all connect to the VPN gateway, typically masking their data from their local ISPs, and then route out through the gateway to the Internet.

A VPN does not provide anything additional concerning anonymity; a VPN just moves the problem of identity from the local ISP to the central hub that all computers connect to. In that sense, a VPN will only, at best, hide the traffic from the local ISP, but it will not anonymize the traffic in any shape or form. Here are some quick conclusions based on the former:

  • a hub or VPN provider is able to observe the following about the clients connecting to the VPN provider:
    • the websites they connect to (but not the activities they perform) (via DNS and/or IP),
    • the times that the connecting clients connect to the VPN provider,
    • will be able to correlate websites being visited with identifying information of the connecting clients (most VPN providers are commercial, so the connecting clients would more than likely have bought the service using a credit card, as well as having provided identifying information),
  • the local ISP will only see encrypted traffic between a connecting client and the VPN provider; the ISP will know that the connecting client connects to the VPN provider, but the ISP will not be able to see the traffic or what websites are browsed through the VPN provider.

In many ways, connecting to a VPN provider is like having another ISP on top of the ISP that is already providing connectivity.

The Law

Companies that provide VPN access all tend to have clauses in their EULA stating that they will collaborate with law enforcement to the fullest extent, even in the event of a merely suspected legal issue. Typically this is based on a subpoena that juridically forces the VPN provider to release information to the investigators.

With that said, it should be obvious that a VPN will not protect an individual from law enforcement. There are very many VPN companies, many of them very large, with right-about every computer user suggesting a VPN "for privacy", such that the data they collect (in some cases, as a legal obligation) must be off the charts. In some ways, this can be seen as a failure of local governance: a country regulates ISPs so much that its constituents are now forced to export all their data (which is, cyclically, illegal due to the G.D.P.R.) to other countries in order to avoid local governing policies. The users from a country will thereby avoid the local authorities, the very same ones that are paid (via taxes) to provide protection, and even go as far as to trust a company in a remote jurisdiction that may or may not have the best interests of the users in mind.

The Real McCoy

A real anonymizing network is a network like tor or i2p, which is essentially just a collection of proxies (or, imagine multiple chained VPNs, in context) through which the user's connection is passed in order to become less observable to gateways between a client and the destination website. So far, there are no systematic attacks on the tor and/or i2p networks that would allow an attacker to observe a client, such that a network like tor or i2p can be considered theoretically secure.

Attacks on anonymizing networks do exist, yet all of them are possible only in well-chosen scenarios that do not show up statistically (i.e. attacks when the established tunnel through all the gateways is conveniently short), and so far there is no known deanonymizing attack on tor that would hinge solely on the tor network. Tracking users with website cookies is trivial, or WebRTC can divulge the real IP, and that will still work on anonymizing networks like tor or i2p; however, those attacks do not target the tor network, architecture or protocols, but rather leverage flaws in browsers or other case-by-case situations.

Even though attributed to Voltaire and written, in fact, by Kevin Alfred Strom, the quote "To learn who rules over you, simply find out who you are not allowed to criticize" is uncanny in context, given that the tor and/or i2p networks, designed to be anonymous, are completely blocked by many websites, as well as leading to "death by CAPTCHA" due to Cloudflare security flagging tor / i2p exit node IP addresses as dangerous. All of this does not seem too coincidental, and even institutions that deal with "human rights" sometimes block anonymizing networks from their websites, sometimes due to "cargo-cult security" ("hey Bob, why are we blocking this website?", "dunno man, the router is from China, we didn't have any troubles with it yet, why?").

Both tor and i2p route traffic along a pattern similar to the following: client → entry node → middle node(s) → exit node → Internet, where the traffic is deliberately routed through a number of nodes within the tor / i2p network in order to ensure that the client sits well behind all the nodes or gateways leading to the Internet.

The idea is that if one of the nodes, such as the exit node, is compromised, then the previous node, going backwards, would have to be compromised as well, and then the next one up, all the way back to the client. From a legal perspective, all the nodes / gateways between the client and the Internet would have to be approached and made to divulge the traffic, making it next to impossible to track down the client.
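The layering can be sketched in a few lines (the XOR "cipher" and the key values below are deliberately toy stand-ins, invented for illustration; real onion routing uses proper cryptography, where the layering order also matters). The client wraps the payload once per relay, each relay can remove only its own layer, and the plaintext appears only after every layer has been peeled:

```python
def xor_bytes(data, key):
    # Toy stream "cipher": XOR with a repeating key. NOT real
    # cryptography -- only here to show the layering mechanics.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# One key per relay (entry, middle, exit); the client knows all of
# them, while each relay knows only its own. Values are made up.
relay_keys = [b"entry-key-000000", b"middle-key-11111", b"exit-key-2222222"]

message = b"GET / HTTP/1.1"

# The client adds one layer per relay, innermost for the exit node.
onion = message
for key in reversed(relay_keys):
    onion = xor_bytes(onion, key)

# After only the entry node peels its layer, the payload is still
# unreadable: two layers remain.
after_entry = xor_bytes(onion, relay_keys[0])
print(after_entry == message)  # False

# Each relay in turn peels exactly one layer; the plaintext emerges
# only once the exit node removes the last one.
hop = onion
for key in relay_keys:
    hop = xor_bytes(hop, key)
print(hop == message)  # True
```

A compromised middle relay therefore learns only its neighbours, never the plaintext and never both endpoints at once, which is exactly the property described above.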

Even in terms of conspiracy, it should be a red flag to anyone that so many companies and even "hacker groups" try so hard to dismantle tor and/or i2p when, in fact, they are true anonymizing technologies that do have privacy in mind. Congruently, the obnoxious pitch for VPNs, even from people that should know better, is excessively loud, even though a VPN does not anonymize a user at all, yet exposes their identity to the company offering the VPN service, along with all their traffic.

Cloudflare and Outsourcing Security

Oh no, not this shit again! This website is using Cloudflare too, so we're well on our way to becoming hypocritical politicians ourselves!

Cloudflare started as "project honeypot", which was an endeavor to blacklist a bunch of IP addresses by gathering various automatic attacks in the wild and then providing a blacklist of those IP addresses to subscribers, such that machines not connected to "project honeypot" could block them as well.

Cloudflare swallowed "project honeypot" and additionally became a DNS provider (name resolution, mapping domain names to IP addresses and back), providing free services to anyone that did not want to run their own DNS server. Arguably, the one and only commercial success could be attributed to Cloudflare hiding the IP address of the real machine across the Internet by making all DNS requests resolve to their own servers and then passing the connection behind Cloudflare to the destination machine.

It was a great achievement over regular TCP/IP, which was never designed with privacy in mind, where an IP address can be looked up in ARIN in order to trace a website to its ISP and sometimes even to its geographical location. Unfortunately, ARIN and the other Internet registries are regrettably populated by, well, politicians, just like yourself, that are not technical but rather economists needing to generate revenue, such that the concept of "privacy" eludes them. To the point: tracing an IP address to an owner could lead, in many cases, to instances of burglary, S.W.A.T.-ing (i.e. sending the police to a person on the false information that they are dealing drugs) and other harassment opportunities that have been observed in the past and that affect businesses as well as private individuals such as celebrities. By proxying DNS, Cloudflare made it possible to hide the IP address of a website, hence the geographic locator, and more than likely prevented loads of drama from taking place over time.

Unfortunately, Cloudflare is still a commercial entity, such that they are bound to cooperate with law enforcement given an investigation, which means that they might be coerced juridically to hand over user data. Another troublesome issue is that Cloudflare heavily penalizes traffic from anonymizing networks such as tor and/or i2p, leading to absurd scenes such as "death by CAPTCHA", where a user is repeatedly prompted to solve puzzles with no end in sight. This makes users doubt whether Cloudflare's intentions are legitimate or whether they are a large data-mining operation themselves; if they truly cared about privacy, surely a solution could have been found for users of anonymizing networks, yet all "solutions" so far have failed to materialize. In principle, Cloudflare's defense is that anonymizing networks are frequently used for "attacks", but in reality the most widespread attacks in the wild are denial-of-service attacks, and neither tor nor i2p can even sustain or "move" the traffic required to perform a successful DDoS (such attacks are more typically carried out from the "clearnet" using compromised machines).

We would argue that Cloudflare damaged the Internet incidentally, because the price of using Cloudflare has been just a sign-up away, such that everyone who did not know better, nor cared to do their own security or hire an expert, simply used this amazing free service that promised to take away all their problems. More often than not, Cloudflare took away not only their problems but also their customers, given that Cloudflare has practiced blanket bans in the past on entire networks that they deemed to be "compromised". Interestingly, at Wizardry and Steamworks, we can recall three incidents when we were asked by users why we were blocking them, only to realize that it was Cloudflare banning their entire networks (more often than not, these bans occur on IP blocks belonging to Chinese registrars).

With the former being said, Cloudflare can also be seen as a result of bad political governance, or perhaps a lack thereof, or simply a lack of knowledge: running a DNS server on your own machine requires certain ports (notably port 53) to be opened by ISPs, which ISPs typically do not open, claiming they are a security risk (they are not); nor is there any option to opt out of the ARIN / Internet Consortium WHOIS databases for businesses or individuals that do not wish to disclose their identity.

There is now even a gray area where data protection laws are in conflict with ARIN / Internet Consortium regulations, with businesses feeding off a problem that is well-known, unresolved and now profitable precisely because it sits in a shady area of the law. More precisely, ARIN / Internet Consortium require the owner of a domain to fill in the WHOIS information with real information; in other words, the owner of a domain (ie: grimore.org) is expected to fill in their full name, address and even their phone number in the WHOIS database, a database that can then be queried by just about everyone on the Internet. At the same time, data privacy laws, such as the G.D.P.R. or the California Consumer Privacy Act (C.C.P.A.), outlaw the requirement to provide such data in case the client refuses to offer it, such that the laws are now in conflict. The resulting black-market business is that registrars that sell domain names also add a "paid option" for "additional security", which essentially involves acting as a proxy on behalf of the customer by filling in their own company data, or even offering no data at all to WHOIS (which makes them break the rules themselves). Cloudflare would technically protect the user from all that; even if the user adds their own personal data to the domain, Cloudflare would proxy the IP such that the real IP would not be revealed, and it does so as a free service.

Over the years there has been growing skepticism about the truthfulness of Cloudflare's claims, in particular those related to privacy, but also from people that had to deal with real DDoS attacks: it was discovered that Cloudflare's "I'm under attack" button will provide some DDoS protection, but that the protection will be turned off in case the attack is too severe for Cloudflare as well, thereby nullifying the pretense of security and even handing attackers a way to determine the real IP address of a website.

The privacy concerns, on the other hand, relate more to Cloudflare "normalizing" a "man-in-the-middle" (MITM) attack: under certain configurations, traffic arriving from both sides, client and server, is decrypted by Cloudflare, such that all the activity of a person browsing the customer's website is visible in plain text to Cloudflare. This includes credentials used when logging in to websites behind Cloudflare, which Cloudflare has the ability to observe in plain text if they so desire (even though, clearly, that is something they would deny doing). Just like VPNs, but even worse, given that a MITM can observe not only which websites a client connects to but also the exact activity, credentials, chat, etc, the accumulation of private data on Cloudflare's servers must be phenomenal and, well, you can only trust them on good faith that they do not look at that data and/or that they delete it periodically ("hey Bob, you know we caught all these terrorists and monitored their traffic using this cool software…", "right", "well, we'd have to shut it down now", "right, because we won and the terrorists are defeated", "well yeah, but I'm thinking, Bob, what if we just… leave it running to collect data?").
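To make the "normalized MITM" concrete, here is a toy model; the "encryption" is a trivial byte-flip placeholder, not real TLS, and the function names are invented for illustration. The structural point it demonstrates is that a terminating proxy necessarily holds the traffic in plain text between the two encrypted legs of the connection:

```python
# Toy model of TLS termination at a reverse proxy. The "cipher" is a
# trivial byte-flip stand-in for real TLS; the point is structural:
# the proxy decrypts the client leg before re-encrypting the origin
# leg, so the plaintext (including credentials) exists at the proxy.

def encrypt(data: bytes) -> bytes:
    return bytes(b ^ 0xFF for b in data)   # placeholder, NOT real crypto

def decrypt(data: bytes) -> bytes:
    return bytes(b ^ 0xFF for b in data)

observed_by_proxy = []

def terminating_proxy(ciphertext_from_client: bytes) -> bytes:
    plaintext = decrypt(ciphertext_from_client)  # proxy sees everything
    observed_by_proxy.append(plaintext)
    return encrypt(plaintext)                    # re-encrypted toward origin

# A client "securely" submits a login form through the proxy:
wire = encrypt(b"user=alice&password=hunter2")
terminating_proxy(wire)

print(observed_by_proxy[0])  # the credentials, in the clear, at the proxy
```

Whether a real operator logs, inspects or discards that plaintext is a matter of policy and good faith, not of technical impossibility; that is the entire concern of the paragraph above.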


fuss/politics/computer_science_for_politicians.txt · Last modified: 2025/03/31 16:55 by office

© 2025 Wizardry and Steamworks