
Algorithms quietly run the city of DC—and maybe your hometown

posted on Nov, 10 2022 @ 11:33 PM
Source material


DC agencies deploy dozens of automated decision systems, often without residents’ knowledge.

Washington, DC, is the home base of the most powerful government on earth. It’s also home to 690,000 people—and 29 obscure algorithms that shape their lives. City agencies use automation to screen housing applicants, predict criminal recidivism, identify food assistance fraud, determine if a high schooler is likely to drop out, inform sentencing decisions for young people, and many other things.

That snapshot of semiautomated urban life comes from a new report from the Electronic Privacy Information Center (EPIC). The nonprofit spent 14 months investigating the city’s use of algorithms and found they were used across 20 agencies, with more than a third deployed in policing or criminal justice. For many systems, city agencies would not provide full details of how their technology worked or was used. The project team concluded that the city is likely using still more algorithms that they were not able to uncover.


I read a definition of "algorithm" (Collins) which said that an algorithm is a series of mathematical steps, especially in a computer program, that will give you the answer to a particular kind of problem or question.

It is an unfortunate trend in popular reporting to prefer "sexy buzzwords": the word "algorithm" is often conflated with "AI" (Artificial Intelligence), although that is not technically correct. In my understanding, Artificial Intelligence describes several different approaches (narrow AI, deep learning, neural networks) to mimicking what we describe as human intelligence. An artificial intelligence can make use of tons of algorithms in pursuit of its computing goals.

In this context, algorithms are collections of data-processing (mathematical) formulas, logically ordered as a step-by-step series of instructions, yielding the best solution. These algorithms are executed by computers (for their speed and consistent accuracy), and, if the algorithm is properly designed and applied, voila! We have liftoff.
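As a concrete illustration of such a step-by-step series of instructions, here is a minimal, hypothetical scoring routine in Python. The field names and thresholds are invented for the example and do not come from any real agency system:

```python
# A hypothetical illustration of an "algorithm" in the Collins sense:
# a fixed series of steps that turns inputs into an answer.
# All names and thresholds here are invented for illustration only.

def housing_priority_score(income: float, household_size: int,
                           months_waiting: int) -> float:
    """Step-by-step scoring: each step is a plain, auditable rule."""
    score = 0.0
    # Step 1: lower income raises priority.
    if income < 20_000:
        score += 50
    elif income < 40_000:
        score += 25
    # Step 2: larger households gain a fixed bonus per member.
    score += 5 * household_size
    # Step 3: time on the waiting list counts, capped at 36 months.
    score += min(months_waiting, 36)
    return score

print(housing_priority_score(income=18_000, household_size=3, months_waiting=12))
```

Each step is explicit and auditable here, which is precisely what we lose when a deployed system's steps are hidden behind trade secrecy.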

But the truth is, this arrangement has one fatal flaw. Algorithms are constructs; they are not capable of self-assessment. If an algorithm is not properly designed (coding errors, human error on input), it can produce counterproductive results.

AI and/or human intelligence can nullify that weakness by modifying the algorithm, or perhaps using a different one. But automated decision systems naturally presume that their algorithms are perfect. And based on that bias, subsequent suppositions arise, like "The machine says you owe x amount of dollars, and therefore you do," "You don't qualify for that loan," or even "You are likely a recidivist (repeat) offender, so we will treat you differently."
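That "the machine says so" failure mode can be sketched in a few lines. Everything below (the score formula, the 0.5 threshold, the function names) is invented for illustration, not taken from any real deployed system:

```python
# Hypothetical sketch of an automated decision system that treats its
# own score as ground truth. All names and thresholds are invented.

def recidivism_score(prior_arrests: int, age: int) -> float:
    # A crude stand-in model; real tools are opaque trade secrets.
    return min(1.0, 0.15 * prior_arrests + (0.2 if age < 25 else 0.0))

def automated_decision(prior_arrests: int, age: int) -> str:
    score = recidivism_score(prior_arrests, age)
    # The fatal flaw: no step asks whether the score itself is wrong.
    return "detain before trial" if score >= 0.5 else "release"

print(automated_decision(prior_arrests=4, age=22))
```

Note that the decision function contains no path for questioning, appealing, or re-validating the score; that absence, not the arithmetic, is the weakness described above.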

None of those examples is hypothetical. Such algorithmic systems are already in place and running, and not just in Washington, DC, nor even just in the US.

How should we feel about "automated" governance? Should the "provider" of such algorithmic systems be obliged to share the code? Does commerce supersede oversight in our regulators' eyes? Probably; commerce leads what we are told is government oversight.


Government agencies often turn to automation in hopes of adding efficiency or objectivity to bureaucratic processes, but it’s often difficult for citizens to know they are at work, and some systems have been found to discriminate and lead to decisions that ruin human lives. In Michigan, an unemployment-fraud detection algorithm with a 93 percent error rate caused 40,000 false fraud allegations. A 2020 analysis by Stanford University and New York University found that nearly half of federal agencies are using some form of automated decision-making systems.


I believe the source's authors may have included some flawed thinking (this is only my opinion) where it reads:


“More often than not, automated decision-making systems have disproportionate impacts on Black communities,” Winters says. The project found evidence that automated traffic-enforcement cameras are disproportionately placed in neighborhoods with more Black residents.


But that statement seems to link algorithm use to a discrimination problem which is clearly that "...automated traffic-enforcement cameras are disproportionately placed in neighborhoods with more Black residents," which means that someone placed those systems in use there. The algorithms were not programmed to discriminate. They were employed in a way that affected local residents dissimilarly, by their omission elsewhere.

The problem arises with the autocrat's predilection to use the tool a certain way. It isn't the tool's fault. As usual, the problem is with the operators, blithely assuming that anything outside the algorithm's parameters doesn't exist. The machine is perfect.

There are ample examples of this being naively implemented, at best, and abusively misused at worst.


...But, in general, agencies were unwilling to share information about their systems, citing trade secrecy and confidentiality. That made it nearly impossible to identify every algorithm used in DC. Earlier this year, a Yale Law School project made a similar attempt to count algorithms used by state agencies in Connecticut but was also hampered by claims of trade secrecy.


Shades of "voting machine" here, no?

The power of governance is being shifted towards autocracy; and algorithmic governance could be the next 'wave' of infractions against an unwitting public.



posted on Nov, 10 2022 @ 11:52 PM


I read a definition of "algorithm"(Collins) that said that an algorithm is a series of mathematical steps

Algorithms are just that: mathematical processes.
The code implementing the algorithms supplies the "steps."




the word 'algorithm' is often conflated with "AI" (Artificial Intelligence)

AI works from the steps that utilize the algorithms.

AI is marketing hype in hopes of drawing in investors, just like "the cloud."
It works much the same (if improved) as it did 40 years ago.



posted on Nov, 11 2022 @ 05:05 AM
a reply to: Maxmars

Welcome to “smart” cities. Everything you do will be managed by automated processes, including your social and carbon credit score.



posted on Nov, 11 2022 @ 06:08 AM
I've noticed that many news agencies around the world have been starting to use automated robot journalism more and more every year. So I wonder what kind of stories we will read 10 years from now if algorithms are defining the messages/information that journalism gives to the public. Information/stories, if used wrongly, are dangerous and powerful.



posted on Nov, 11 2022 @ 06:20 AM
a reply to: Kenzo

It's true that "journalism products" are very much being culled from social forums via algorithmic 'extraction.' While I haven't seen much in the way of 'reporting' on this new phenomenon, clearly it must be monitored, because, as you say, it will have social-engineering applications about which the "media" producers will not be forthcoming.

Mega companies, with their subordination to think-tank guidance, are far too prone to place us in a 'subject audience' light. It will be too easy to abuse. And we lack any serious watchdog efforts worth mentioning in this regard.



posted on Nov, 11 2022 @ 06:28 AM
a reply to: Maxmars

The mega companies, TPTB, do not have good intentions, and they use every loophole to screw us, so yes, monitoring would be required. But if they themselves set up the monitoring, they would be monitoring themselves; the systems can be compromised too often.



posted on Nov, 11 2022 @ 07:14 AM
a reply to: Maxmars
LOL. Statistics, modeling, and data: where race discussions go to die. The traffic-enforcement placement could itself be the result of algorithms using traffic studies and accident data. If the intent is to improve traffic safety, it would seem racist to exclude communities with more people of color if the intersections there met a numerical criterion for placement. Naturally, the first place race experts go is racism.

If the discussion is about "impacts" from those devices, maybe the discussion shouldn't be about race but instead about why the government is placing unnecessary "impacts" in neighborhoods at all. If these devices are going up based on accidents and pedestrian strikes, then is it racist to put them in or racist to take them out? I think a big problem we have is that there are a lot of people who are paid experts in race, who see everything through that lens and have no qualifications, nor aspirations to get them, in the various fields they wish to apply their race expertise to. For some reason they see a problem and, rather than advocate eliminating the problem, they turn the problem into racism, which, based on current politically approved thought, can never be solved. Taking out all the devices would solve their racism problem, because it's really a problem of unpopular government overreach.

Pointing out that the core problem may not be racism will, I'm sure, elicit a response from a race expert that even that suggestion is racist. Hammer, nail. The accusation is used so flippantly now that I'm pretty much over it.

I wonder how much some of those other algorithms would be worth if you cracked them, and whether they aren't already being sold by some enterprising individuals.

If you can avoid or trigger AI notice, that would seem to be something people would be interested in.

Search engine deoptimization. I'm pretty sure industries like tax preparers have known about ways to avoid flags for years. I wonder if one of the big national companies has big data people looking into this kind of thing. If they're processing millions of returns I'd think once you have a few years of audit data there would be something to extract there.

On the flip side if they start using this kind of thing to determine building permit approvals, business licenses, grants, zoning changes, I wonder how that might be exploited.



posted on Nov, 11 2022 @ 10:15 AM
Makes sense, with all the schools and colleges turning out less and less qualified people to make decisions 😁



posted on Nov, 11 2022 @ 01:01 PM
a reply to: Maxmars

Wow, that's a very good find for a very good and well presented OP. Thanks for that.



posted on Nov, 11 2022 @ 03:00 PM
Here is the report cited in the source material. I neglected to add it as I had intended to...

Screened and Scored in DC

Sorry about that.



posted on Nov, 12 2022 @ 02:37 PM
a reply to: Maxmars

MIT Technology review


AI is sending people to jail—and getting it wrong

Under immense pressure to reduce prison numbers without risking a rise in crime, courtrooms across the US have turned to automated tools in attempts to shuffle defendants through the legal system as efficiently and safely as possible. This is where the AI part of our story begins.

But the most controversial tool by far comes after police have made an arrest. Say hello to criminal risk assessment algorithms.

Risk assessment tools are designed to do one thing: take in the details of a defendant’s profile and spit out a recidivism score—a single number estimating the likelihood that he or she will reoffend. A judge then factors that score into a myriad of decisions that can determine what type of rehabilitation services particular defendants should receive, whether they should be held in jail before trial, and how severe their sentences should be. A low score paves the way for a kinder fate. A high score does precisely the opposite.

You may have already spotted the problem. Modern-day risk assessment tools are often driven by algorithms trained on historical crime data.


I think the term you used, "algorithmic governance," and its repercussions are already here. A person's future being left in the hands of incorrectly or poorly acquired data is upon us, and it is going to be difficult to stop.
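The quoted point about tools "trained on historical crime data" can be sketched with a toy example. The data and "model" below are invented, and only show how a skew in past records flows straight into the score:

```python
# Hypothetical sketch: a risk "model" fitted to historical arrest data
# inherits whatever bias that data contains. Toy data, not a real tool.

def train_rate_by_group(history: list[tuple[str, bool]]) -> dict[str, float]:
    """Estimate a re-arrest rate per neighborhood from past records."""
    totals: dict[str, list[int]] = {}
    for neighborhood, rearrested in history:
        counts = totals.setdefault(neighborhood, [0, 0])
        counts[0] += 1                 # total records for the group
        counts[1] += int(rearrested)   # re-arrests for the group
    return {k: v[1] / v[0] for k, v in totals.items()}

# Historical records skewed by heavier policing of neighborhood "A":
history = [("A", True), ("A", True), ("A", False), ("B", False), ("B", False)]
rates = train_rate_by_group(history)
# A defendant from "A" now starts with a higher score regardless of conduct.
print(rates)
```

The arithmetic is correct at every step; the problem is entirely in what the historical records measured, which is the point the MIT article is making.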



posted on Nov, 12 2022 @ 07:43 PM
a reply to: Kurokage

Thanks for the informative contribution.

I find the way these systems are being deployed - quietly - to be disturbing.

I'm all for intelligent, comprehensive technological applications to "inform" the process... but these systems are not designed for that. They are designed, and more frequently applied, in such a way as to usurp human judgement. For me, that's going to require more faith in the underlying algorithms... but we can't know exactly what the algorithms are, because: trade secrets (commerce).








 