Computer algorithm overview: types, impact, and solutions


What on earth is an algorithm?

Algorithms explained

Algorithm etymology – “Algorithm” is a term that repeatedly pops up when you read or hear about computer science. However, algorithms had their origin long before the computer age. The word is derived from the name of the Persian mathematician Al-Khwārizmī (c. 780–850). His basic idea was simple: an algorithm establishes a series of steps (a formula) to produce a particular computational outcome. It’s essentially a recipe, in that it defines a sequence of carefully described actions.

Introduction to computer algorithms

Although algorithms can be defined in written language, as with the example above, it’s in the world of computing where they take on a life of their own. It’s much more efficient to “translate” them into programming languages, computer programs, and flowcharts. Algorithms are indispensable in Information Technology (IT). When a computer program is written, for example, the resulting detailed script directs digital systems to automatically execute a step-by-step routine to achieve an end goal.
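
To see how such a recipe “translates” into a programming language, here is a brief sketch (my own illustration, not taken from any particular system): Euclid’s algorithm for finding the greatest common divisor of two numbers, one of the oldest algorithms still in everyday use, written in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, remainder of a divided by b) until b reaches zero."""
    while b != 0:
        a, b = b, a % b  # one carefully described step of the recipe
    return a

print(gcd(48, 36))  # -> 12
```

Each pass through the loop is one step of the written recipe; the computer simply repeats the steps until the stopping condition is met.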

A programmer begins by asking the following questions when selecting or creating an algorithm:

  • Is there an existing algorithm to solve the task I have in mind?
  • If one exists, will it work for all the possible inputs within this tech/business environment?
  • Is it quick and efficient?
  • Does it use memory within acceptable parameters?

Even if all of these criteria are met, the question remains whether the algorithm you’re considering is the best possible one.
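
To make the speed and efficiency questions concrete, here is a small illustrative sketch (the data and step counts are invented for demonstration) comparing two algorithms that answer the same question. A linear scan may check every item, while a binary search on sorted data halves the remaining range at each step:

```python
from bisect import bisect_left

def linear_search(items, target):
    """Check every element in turn: up to len(items) steps."""
    for value in items:
        if value == target:
            return True
    return False

def binary_search(sorted_items, target):
    """Halve the range each step: about log2(len(items)) steps,
    but only valid when the list is already sorted."""
    i = bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target

data = list(range(0, 1_000_000, 2))  # even numbers, already sorted
print(linear_search(data, 999_998))  # True, after ~500,000 checks
print(binary_search(data, 999_998))  # True, after ~20 checks
```

Both are correct for all inputs here, but one is dramatically faster, which is exactly the kind of trade-off the questions above are probing.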

Two standard algorithm examples

  • Search engine algorithms combine the keywords entered in the search field to scan a vast database of indexed web pages for relevant results, which are returned almost instantaneously. I’m old enough to still find this amazing.
  • Encryption algorithms take specified actions to translate information into indecipherable code to protect it from hackers. It takes significant time and effort to ensure that an encryption algorithm is sophisticated enough to guarantee security. As described in previous Insights articles, symmetric encryption works as follows: the U.S. Department of Defense, for example, uses the same key to encrypt and then decrypt specified data sent from one IP location to another.
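
To illustrate the same-key (symmetric) idea in the encryption example above, here is a deliberately simplified Python sketch. The XOR cipher below is a toy and is not secure; real systems rely on vetted ciphers such as AES. The key and message are invented for illustration:

```python
import hashlib
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a key stream.
    Applying it twice with the same key restores the original.
    NOT secure -- real systems use vetted ciphers such as AES."""
    keystream = hashlib.sha256(key).digest()  # stretch key to 32 bytes
    return bytes(b ^ k for b, k in zip(data, cycle(keystream)))

key = b"shared-secret"
ciphertext = xor_cipher(b"move supplies at dawn", key)
plaintext = xor_cipher(ciphertext, key)  # the same key decrypts
print(plaintext)  # b'move supplies at dawn'
```

The point of the sketch is the symmetry: whoever holds the shared key can run the same algorithm to lock and unlock the data.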

Computer algorithms control our lives

Algorithms shape your world more than you may imagine. Artificial intelligence-based learning algorithms sift through enormous volumes of data to help us instantaneously find information. We have come to depend on this dynamic in making both routine and life-changing decisions. These intricate formulas are also invaluable in scientific research and technology development.

On the other hand, sophisticated ‘learning algorithms’ allow third parties to share our data files with big corporations and government agencies. Those agencies then continually make concealed decisions for and about us. When algorithms get things wrong, or reveal too much (for example, tracking our current location), the consequences can even be life-threatening.

Fortunately, we can rely on algorithms that help us–

  • Improve our health (from Fitbit devices to apps that can identify skin cancer).
  • Develop a professional network to advance our career.
  • Quickly research and locate a product we want at the best price. Shopping online is now so convenient that many retail malls are going out of business.
  • Recommend products, websites, and Netflix movies we might like.
  • Suggest new Facebook friends and LinkedIn contacts.
  • Quickly get us a mortgage or car loan, assuming we are qualified and our online data is accurate.
  • Distribute resources where they are most needed, for example, shifting goods among a company’s business centers.

Other algorithms can have damaging consequences when misapplied

  • Many experts are concerned over their general lack of transparency and accountability.
  • Experts agree we need ethical standards to prevent the misuse of algorithms.
  • Many question the ‘right’ of programmers to build algorithms around questions widely considered invasive (e.g., personality traits, political alignment, sexual orientation).
  • Secret algorithms are being used against minority groups with discriminatory consequences.
  • Who gets hired may be based, for example, on who aligns politically with a company’s management.
  • In determining where police resources are deployed, are lower-class areas underserved? Does an algorithm create a ‘feedback loop’ that misidentifies lower-socioeconomic individuals as being at higher risk for crime than is accurate?
  • In deciding jail sentences, some courts rely on computer formulas to determine the length of sentence and parole, often discriminating against minorities–especially in cases of non-violent illegal drug distribution.
  • State political boundaries are often gerrymandered to give the party in power an unfair advantage in elections.

Many algorithms have the potential for both good and bad by determining–

  • Who gets insurance at what cost. Brokers often rely on incomplete data culled from online and other sources to decide yes or no on credit or insurance.
  • Who should be on the ‘no fly list.’ This is a necessary security measure in most cases. However, many people have been misidentified as ‘threats’ because of duplicate names, etc.

How do digital algorithms have negative effects?

A White House report on this issue last year concluded that “algorithms rely on the imperfect inputs, logic, probability, and people who design them.” The report also noted that while algorithms can potentially help eliminate human bias, they can also ‘systematically disadvantage certain groups,’ whether by design or not.

Algorithms can hide discriminatory effects with no visibility or accountability. This negatively impacts all groups defined as risky or unprofitable targets, including ‘disadvantaged’ demographic groups such as women and those of lower social class.

  • An increasingly algorithm-defined future will widen the gap between those who are digitally connected and informed (those of higher income) and those who don’t go online. This dynamic will inevitably heighten the inequality between the upper/upper-middle classes and the rest of the population. Because it has the appearance of ‘digital logic,’ it will be hard to fault.
  • In most cases, those who construct algorithms have only a superficial understanding of culture, values, and diversity. This explains why algorithms often fail to test for different sources of potential bias.
  • In many cases, even the programmers who create an algorithm don’t understand how it works. And even if its components and effects are clearly understood, those in authority will usually define it as a trade secret which cannot be revealed.
  • Adding to this challenge, learning/self-programming algorithms are already in place. It’s possible that in the future algorithms will write most new algorithms. This will accelerate the rise of robotics to replace humans in an increasing range of jobs. And if humans are out of the loop, how can robots make ‘empathic,’ non-discriminatory decisions when inherently biased human-created algorithms are the original model?
  • Our current radically polarized electorate is the result of lowest-common-denominator information flows. Algorithms directing news flow suppress information inconsistent with a person’s digital profile, funneling people into echo chambers of repeated and reinforced media and political content. The worst consequence of this is diminished empathy for ‘the other,’ making reasonable compromise seem like surrender to the other side.
  • Many are concerned about the effects of prison sentencing ‘scorers’ that use machine learning to optimize sentencing recommendations. Related models also predict likely parole outcomes. Unfortunately, there is no tracking of, or accountability for, whether these models are accurate or help lower recidivism rates.
  • Similar use of other hidden, untraceable models can be seen in (1) terrorist watch lists; (2) drone-killing profiling models; and (3) modern redlining that limits credit and housing opportunities. One example of this that affects virtually everyone is the algorithm used to derive our credit scores. Shouldn’t this process be transparent so we can understand and even dispute how our credit score is calculated?

The challenge of algorithms explained

We benefit directly from algorithms when retrieving the internet information we need for work and our personal lives. On the downside, algorithms are almost always proprietary and invisible to the public. And unfortunately, developers too often put a low priority on the needs and rights of users.

Interestingly, a recent Pew Research Center study of 1,300 IT professionals found an even split between those who believed the positive effects of algorithms will outweigh the negative and those who believed the opposite (38% vs. 37%).

The goal of algorithms explained

Ideally, algorithms would be explained in clear language and graphically illustrated formulas to help users (e.g., loan applicants or medical insurance claimants) understand how decisions affecting them are made. This would also help us better understand how changing data and regulations may affect future outcomes.
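
As a hypothetical sketch of what that kind of transparency could look like (every rule and threshold below is invented for illustration, not drawn from any real lender), a decision algorithm could report the reasons behind its verdict alongside the verdict itself:

```python
def loan_decision(income: float, debt: float, credit_score: int):
    """Transparent decision: return the verdict AND the rules applied.
    Thresholds here are invented for illustration only."""
    reasons = []
    if credit_score < 620:
        reasons.append(f"credit score {credit_score} below 620 minimum")
    if income > 0 and debt / income > 0.4:
        reasons.append(f"debt-to-income ratio {debt / income:.0%} above 40% cap")
    approved = not reasons
    return {"approved": approved,
            "reasons": reasons or ["all criteria met"]}

print(loan_decision(income=50_000, debt=30_000, credit_score=700))
# -> {'approved': False, 'reasons': ['debt-to-income ratio 60% above 40% cap']}
```

An applicant receiving this output knows exactly which rule was decisive and what would need to change for a different outcome.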

Unfortunately, there’s little momentum in this direction.

Possible fixes

  • Enforce and expand consumer protection/deceptive practices laws to uncover hidden algorithmic formulas that work against the common good.
  • Develop honest, balanced regulations that prevent algorithmic abuses without stifling innovation. This will require the combined talents of lawyers, social scientists, and all major internet stakeholders.
  • Disseminate, adopt, and expand the European Union’s new data protection law that includes a ‘right of explanation’ when consumers are affected by an algorithmic decision.
  • Encourage open source algorithms that can be modified by user feedback to help flag unfair practices.
  • Secure access to Google’s and Facebook’s server farms to better ensure national security and user protection.
  • Hire more women and minority-group coders. (Most discriminatory algorithms are created not out of malice, but by white male coders who lack insight into diversity issues.)
  • Carefully review human-created algorithms before using them as models for automated AI applications; otherwise, the machine-generated formulas will simply replicate any discriminatory content.
  • Implementing such protections will become increasingly urgent as small entities using algorithms proliferate. Many of these organizations focus on surveillance and marketing, retrieving data that can potentially be used for criminal purposes.

Benefits of transparent algorithms

  • Implementing regulations to prevent deceptive algorithms would help reduce corruption.
  • Better algorithms will help create a more efficient distribution of resources and help reduce the environmentally damaging effects of fossil fuels, etc.
  • Improved algorithmic insights will enhance the design of our homes, cities, manufacturing, and much more.

What OWDT can do for you

OWDT is a web design company providing comprehensive services in web design, branding, and digital marketing. Our professionals work rigorously to integrate the latest in website technology and design with leading-edge digital marketing tools to help your company thrive. They know how to combine a beautiful, brand-defining website with coordinated marketing strategies to expand your organization’s visibility and revenue.

Check out our Portfolio to see the kind of award-winning work we have done for our clients. We would be delighted to hear from you! Contact us at 800-324-1617 or info@owdt.