Snake_Byte[10] – Module Packages

Complexity control is the central problem of writing software in the real world.

Eric S. Raymond
AI-Generated Software Architecture Diagram

Hello dear readers! First i hope everyone is safe. Secondly, it is the Monday-iest WEDNESDAY ever! Ergo it's time for a Snake_Byte!

Grabbing a tome off the bookshelf, we randomly open it and the subject matter today is Module Packages. So there will not be much if any code but more discussion as it were on the explanations thereof.

Module imports are the mainstay of the snake language.

A Python module is a file that has a .py extension, and a Python package is any folder that has modules inside it (or, if you're still in Python 2, a folder that contains an __init__.py file).

What happens when you have code in one module that needs to access code in another module or package? You import it!

In Python a directory is said to be a package, thus imports of its contents are known as package imports. What happens in an import is that the dotted name is mapped to a directory path, whether local (your come-pooter) or on that cloud thing everyone talks about these days.

It turns out that hierarchy simplifies the complexities of organizing files and trends toward simpler search path settings.

Absolute imports are preferred because they are direct. It is easy to tell exactly where the imported resource is located and what it is just by looking at the statement. Additionally, absolute imports remain valid even if the current location of the import statement changes. In addition, PEP 8 explicitly recommends absolute imports. However, sometimes they get so complicated you want to use relative imports.

So how do imports work?

import dir1.dir2.mod
from dir1.dir2.mod import x

Note the “dotted path” in these statements is assumed to correspond to a path through the directories on the machine you are developing on. In this case it leads to mod.py: directory dir1 contains subdirectory dir2, which contains the module mod.py. Historically the dot path syntax was created for platform neutrality, and from a technical standpoint paths in import statements become object paths.

In general the leftmost name in the dotted path, unless it is a top-level file in the home directory, must reside on the module search path, and that is exactly where the search for the file begins.

In Python 3.x package imports changed slightly, and the changes only apply to imports within files located in package directories. The changes include:

  • Modification of the module import search path semantics to skip the package’s own directory by default. These searches are essentially absolute imports.
  • Extension of the syntax of from statements to allow them to explicitly request that imports search the package’s directory only. This is the relative import mentioned above.

So for instance:

from . import spam  # relative to this package

Instructs Python to import a module named spam located in the same package directory as the file in which this statement appears.

Similarly:

from .spam import name

states: from a module named spam located in the same package as the file that contains this statement, import the variable name.

Something to remember is that an import without a leading dot always causes Python to skip the relative components of the module import search path and look instead in the absolute directories that sys.path contains. You can only use the dot nomenclature for relative imports with the from statement.
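To make this concrete, here is a minimal sketch using a hypothetical package named dir1 with a module spam.py that defines a variable name (all names here are illustrative, not from any real library):

dir1/
    __init__.py
    spam.py      # suppose this file assigns: name = "eggs"
    mod.py       # the file below

# dir1/mod.py -- a minimal sketch; package and module names are hypothetical
from . import spam         # relative: searches only this package's directory
from .spam import name     # relative: pull the variable name out of spam
import dir1.spam           # absolute: resolved via the directories in sys.path

print(name)                # -> eggs

# run from dir1's parent directory, e.g.: python -c "import dir1.mod"

Note dir1 still needs its parent directory to be on sys.path for the absolute form to resolve.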

Packages are standard now in Python 3.x. It is now very common to see very large third-party extensions deployed as sets of package directories rather than flat lists of modules. Also, caveat emptor: using targeted relative imports can save memory. Read the documentation. Many times importing AllTheThings results in major memory usage, an issue when you are going to production with highly optimized Python.

There is much more to this import stuff: Transitive Module Reloads, Managing other programs with Modules (meta-programming), Data Hiding, etc. i urge you to go onto the LazyWebTM and poke around.

In addition, a very timely post:

PyPI is running a survey on packages:

Take the survey here -> PyPI Survey on Packages

Here are some great comments and suggestions via Y-Combinator News:

Y-Combinator News Commentary on PyPI Packages.

That is all for now. i think next time we are going to delve into some more scientific or mathematical snake language bytes.

Until Then,

#iwishyouwater <- Wedge top 50 wipeouts. Smoookifications!

@tctjr

MUZAK TO BLOG BY: NIN – “The Downward Spiral (Deluxe Edition)”. A truly phenomenal piece of work. On NIN’s second album, Trent Reznor told Jimmy Iovine upon delivering the concept album, “I’m Sorry I had to…”. In 1992, Reznor moved to 10050 Cielo Drive in Benedict Canyon, Los Angeles, where actress Sharon Tate formerly lived and where he made the record. i believe it changed the entire concept of music and created a new genre. From an engineering point of view, Digidesign‘s TurboSynth and Pro Tools were used extensively.

Ordo Ab Chao – Or Embrace Uncertainty

Masonic Motto: Ordo Ab Chao
Eternal Golden Braid

Only he who undertakes dizzying ventures is authentically human. A single chain of peaks links Prometheus with Siegfried.

~ Jean Mabire

Hello all, first as always i hope everyone is safe. Second, i have not been able to write as much as i would have liked; life happens. More on that in later blogs this year.

Now to the current installment. i do not write about specific work topics at all; however, i am making an exception here as it pertains to the title of the blog. W.R.T. (with respect to) the title, this is not some FreeMason diatribe, so you can take off the tinfoil hats and stick with me, please dear reader.

If you saw the news, Francisco Partners of San Francisco, California purchased IBM Watson Health, of which i am currently the Global CTO and Chief Architect. Personally, i am extremely excited about this situation, and it appears others were as well considering the number of inbound texts, emails, and calls that i received. i truly appreciated all of the correspondence. It is an amazing opportunity with an amazing firm that understands the health technology industry with many well-placed investments.

Why does this pertain to the blog subject line? Change causes randomization, randomization can cause chaos. All of it yields Uncertainty. It also is an indicator of Entropy. I wrote a blog some time ago on Randomness.

One of my favorite equations, and possibly my favorite equation, is the one for Entropy:

This is the original one based on Boltzmann’s derivation:

    \[S = k_\mathrm{B} \ln W\]

However, me being an information science type of person, i prefer the entropy of a channel made famous by one of my heroes, Claude Shannon:

    \[E = -\frac{1}{N}\sum_{i=1}^{N} p_{i}\log(p_{i})\]

(Note: As \(N \to \infty\) this gives an entropy which is solely related to the distribution shape and does not depend on \(N\).)
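For the code-curious, here is a minimal sketch of that channel entropy in Python; the four symbol probabilities are made up for illustration:

import math

def channel_entropy(probs):
    # E = -(1/N) * sum(p_i * log(p_i)), mirroring the formula above
    n = len(probs)
    return -sum(p * math.log(p) for p in probs if p > 0) / n

print(channel_entropy([0.5, 0.25, 0.125, 0.125]))  # a hypothetical channel

The more sharply peaked the distribution, the lower the number; a uniform distribution maximizes it.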

Entropy (/ˈentrəpē/) is a measure of a thermodynamic quantity representing the unavailability of a system’s thermal energy for conversion into mechanical work, often interpreted as the degree of disorder or randomness in the system, or the lack of order or predictability; a gradual decline into disorder, ergo the current subject of this blog.

Many people want to think or be told that it will be A-OK. Or OK. Being a word nerd I looked up the etymology of both A-OK and OK. Here is what i found:

The expression A-Okay:

Means everything is fine. A-Okay is a space-age expression. It was used in 1961 during the flight of astronaut Alan Shepard. He was the first American to be launched into space. His flight ended when his spacecraft landed in the ocean, as planned. Shepard reported: “Everything is A-Okay.”

However, some experts say the expression did not begin with the space age. One story says it was first used during the early days of the telephone to tell an operator that a message had been received.

Then there is OK (Okay):

The gesture was popularized in the United States in 1840 as a symbol to support then Presidential candidate Martin Van Buren. This was because Van Buren’s nickname, Old Kinderhook, derived from his hometown of Kinderhook, NY, had the initials O. K. i had no idea.

There are also fun ways to say okay. Some people say okey-dokey or okey-doke. Now with text, we have kk in some cases. i have even heard people say I’m doing “hunky dory”. This American-coined adjective has been around since the 1860s, from the now-obsolete hunkey, “all right,” which stems from the New York slang hunk, “in a safe position,” and the Dutch root honk or “home.” So basically you’re fine at home. (It is also a great album by David Bowie).

So even if you are not OK, there are situations where we engage in some physical activity or mental endeavor where most want to know when “the end or finish line is near.” “OMG when is this going to END?!” However, one important factor that i have personally found: when the distance is unknown and the end is unknown is when you truly find out who You are on this Earth. Sometimes the process is in the putting. Going way past what you thought possible is where the magic happens.

Uncertainty is the basis for which we live. Technically if you think about it we live next to an exploding star in an ever-expanding universe.

As a comparison here is a chart by Alan Chamberlin of JPL/Caltech. The subject matter is near-earth asteroids. Interestingly enough, the objects were not previously known and thus there was no early warning. We knew when we knew. Take your time and let this chart sink into the old wetware.

This list does not include any of the hundreds of objects that collided with Earth, which were not discovered in advance, but were recorded by sensors designed to detect the detonation of nuclear devices. Of the objects so detected, 78 had impact energy greater than that of a 1-kiloton device (equivalent to 1000 tons of TNT), including 11 which had impact energy greater than that of a 10-kiloton device i.e. comparable to the atomic bombs used in the Second World War.

Why am i posting these statistics? Entropy, until we know a universe that runs in reverse time or loops time, is ever increasing; thus randomness and uncertainty writ large are always increasing.

So what does one do? Well, we can only truly deal with what we can control. Anything exterior to that should be extraneous and superfluous in our thought patterns.

I wrote a related blog entitled “What Is It You Want?” which posited a different view. First, you need to decide what you want and if you are unable to decide what you want there is a good chance that what you don’t want is a more known entity.

Envision the worst thing that could happen. Fired from your job? Bankrupt? No. Try Grief. True loss as in the death of a loved one (including animals). Someone that will never ever return. Finality. We have all been there with someone.

In the same psychological realm True Loss can also be the other side of an “Aha!” moment. For instance, let’s say you have been working for months possibly years on something creative or an idea and suddenly in an ephemeral flash of lucidity you finally arrive at the ideation incarnate and grasp the totality of understanding. At that moment on the other side, you can no longer duplicate that feeling. It is gone. True Loss. For some that do not have family or friends, this is equivalent. Even for those that have family, friends and beloved animals, this is equivalent.

Yet we want to think in absolutes that someone somewhere will make it “OK”.

I say unto you: one must still have chaos in oneself to be able to give birth to a dancing star. I say unto you: you still have chaos in yourselves.

Zarathustra

Well, as we say here in The South: “It’s OK until It Aint.” The only remedy is preparation, but not analysis paralysis. Train and train well.

The only other remedy is just GO and DO and move and make it happen. Only move in the direction of what YOU desire and want – it will make a huge difference.

The most intelligent humans, like the strongest, find their happiness where others find only disaster: in the labyrinth, in being hard with themselves and others, in effort, their delight is self mastery; in them asceticism becomes second nature, a necessity, an instinct.  They regard a difficult task as a privilege; it is to them a recreation to play with burdens that would crush all others.

R.A.C.

Humans are designed via the evolutionary process to work under stress. i believe we operate better when we are under the gun, under a deadline, reaching for goals, or striving for whatever it is that drives Us. Some prefer the sameness that comes with a 9-5 job. For me personally, i like to obtain order out of chaos.

So whenever you find yourself worried about what May-Be-Happening think instead of what IS-To-Be and make it happen. Alan Watts famously discussed the issue with a life of no surprises: if you know how everything turns out, then you are living in the past.

Until Then,

#iwishyouwater <- Mavericks in Half Moon Bay doing its thing 3.22

@tctjr

Muzak To Blog By: Jeff Buckley’s album “You and I”. Huge that these were all demos. An astounding talent who left us too soon. He took a night swim in the Mississippi River and was hit by a paddlewheel boat. Random? Entropy? Safe? His autopsy showed no drugs or alcohol in his body.

Snake_Byte[4]: Random and PseudoRandom Numbers

Expose yourself to as much randomness as possible.


~ Ben Casnocha
A Visualization Of Randomness

First i trust everyone is safe.

Second it is WEDNESDAY and that must mean a Snake_Byte or you are working in a startup because every day is WEDNESDAY in a startup!

i almost didn’t get this one done because, well, life happens, but i want to remain true to the goals herewith to the best of my ability.

So in today’s Snake_Byte we are going to cover Random and PseudoRandom Numbers.  i really liked this one because it was more in line with scientific computing and numerical optimization.

The random module in Python generates what is called pseudorandom numbers.  It is in the vernacular a pseudorandom number generator (PRNG).  This generation includes different types of distributions for said numbers. 

So what is a pseudorandom number:

“A pseudorandom number generator (PRNG), also known as a deterministic random bit generator, is an algorithm for generating a sequence of numbers whose properties approximate the properties of sequences of random numbers.” ~ Wikipedia

The important aspect here is: the properties approximate sequences of random numbers. So this means that it is statistically random even though it was generated by a deterministic process.

While i have used the random module and have even generated various random number algorithms, i learned something new in this blog. The pseudorandom number generator in Python uses an algorithm called the Mersenne Twister. The period of said algorithm is 2**19937-1 for the 32-bit version, and there is also a 64-bit version. The underlying implementation in C is both fast and thread-safe. The Mersenne Twister is one of the most extensively tested random number generators in existence. One issue though is that due to the deterministic nature of the algorithm it is not suitable for cryptographic purposes.
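As a quick aside, if you do need cryptographic-strength randomness, the standard library ships the secrets module (Python 3.6+) for exactly that; a minimal sketch:

import secrets

token = secrets.token_hex(16)   # 32 hex characters from a CSPRNG
roll = secrets.randbelow(100)   # cryptographically strong integer in [0, 100)
print(token, roll)

For everything in the rest of this post, though, the plain random module is the right tool.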

Let us delve down into some code into the various random module offerings, shall we?

i like using %system in Jupyter Lab to create an interactive session. First we import random. Let’s look at random.random(), which returns a float from a uniform distribution and, when multiplied by an integer, is bounded within that range:

%system
import random
for i in range(5):
    x = random.random() * 100
    print(x)
63.281889167063035
0.13679757425121286
47.697874648329
96.66882808709684
76.63300711554905

Next let us check out random.choice(seq) which returns a random element from the non-empty sequence seq. If seq is empty, raises IndexError:

mySurfBoardlist = ["longboard", "shortboard", "boogieboard"]
for z in range(5):
    print(random.choice(mySurfBoardlist))
longboard
boogieboard
boogieboard
longboard
shortboard

Next let us look at random.randrange(start, stop[, step]), which returns a randomly selected element from range(start, stop, step). This is equivalent to choice(range(start, stop, step)) but doesn’t actually build a range object.

Parameter   Description
start       Optional. An integer specifying at which position to start. Default 0.
stop        Required. An integer specifying at which position to end.
step        Optional. An integer specifying the incrementation. Default 1.

random.randrange parameters
for i in range(5):
    print(random.randrange(10, 100, 1))
84
21
94
91
87

Now let us move on to some calls that you would use in signal processing, statistics or machine learning. The first one is gauss(), which returns a random floating point number drawn from a Gaussian distribution using the following mathematics:

    \[f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right)\]

Gaussian distribution (also known as normal distribution) is a bell-shaped curve (aka the bell curve), and it is assumed that during any measurement values will follow a normal distribution with an equal number of measurements above and below the mean value.

Parameter   Description
mu          the mean
sigma       the standard deviation

Returns a random Gaussian-distributed floating point number.

gauss() parameters
# import the required libraries 
import random 
import matplotlib.pyplot as plt 
#set the inline magic
%matplotlib inline   
# store the random numbers in a list 
nums = [] 
mu = 100
sigma = 50
    
for i in range(100000): 
    temp = random.gauss(mu, sigma) 
    nums.append(temp) 
        
# plot the distribution 
plt.hist(nums, bins = 500, ec="red") 
plt.show()
Gaussian Distribution in Red

There are several more distributions in the random module, setter functions, seed functions and very complex statistical functions. Hit Stack Overflow and give it a try! Also it doesn’t hurt if you dust off that probability and statistics textbook!
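One of those worth calling out is seeding, since it makes PRNG output reproducible; a minimal sketch (the seed value 42 is arbitrary):

import random

random.seed(42)                               # fix the PRNG state
first = [random.random() for _ in range(3)]
random.seed(42)                               # re-seed with the same value...
second = [random.random() for _ in range(3)]
print(first == second)                        # True: the sequence replays

This determinism is exactly the pseudo in pseudorandom, and it is invaluable for debugging scientific code.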

As a last thought which came first the framework of entropy or the framework of randomness? As well as is everything truly random? i would love to hear your thought in the comments!

Until then,

#iwishyouwater <- click here on this one!

tctjr

References:

Python In A Nutshell by Alex Martelli

M. Matsumoto and T. Nishimura, “Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator”, ACM Transactions on Modeling and Computer Simulation Vol. 8, No. 1, January pp.3–30 1998

Muzak To Blog By: Black Sabbath – The End: Live In Birmingham

Computing The Human Condition – Project Noumena (Part 2)

In the evolution of a society, continued investment in complexity as a problem-solving strategy yields a declining marginal return.

Joseph A. Tainter

Someone asked me if from now on my blog will only be about Project_Noumena – on the contrary.

I will be interspersing subject matter within Parts 1 to (N) of Project_Noumena. To be transparent at this juncture i am not sure where it will end or if there is even a logical MVP 1.0.  As with open-source systems and frameworks technically one never achieves V1.0 as the systems evolve. i tend to believe this will be the case with Project Noumena.  i  recently provided a book review on CaTB and have a blog on Recurrent Neural Networks with respect to Multiple Time Scale Prediction in the works so stuff is proceeding. 

To that end, i would love comments and suggestions as to anything you would like my opinion on or for me to write about in the comments section.  Also feel free to call me out on typos or anything else you see in error.

Further within Project Noumena there are snippets that could be shorter blogs as well.  Look at Project Noumena as a fractal-based system.

Now on to the matter at hand.

In the previous blog Computing The Human_Condition – Project Noumena (Part 1) i discussed the initial overview of the model from the book World Dynamics. i will take a part of that model, which is what i call the main Human_Do_Loop(), and the main attributes of the model: Birth and Death of Humans. One must ask: if we didn’t have humans, would we have to be concerned with such matters as societal collapse? i don’t believe animals are concerned with such existential crisis concerns, so my answer is a resounding – NO. We will not be discussing such existential issues in this blog, although i will address such items in future writings.

Over the years i have been asking myself is this a biological model by definition?  Meaning do we have cellular components involved only?  Is this biological modeling at the very essence?  If we took the cell-based organisms out of the equation what do we still have as far as models on Earth? 

While i told myself i wouldn’t get too existential here, and i do want to focus on the models and then codebases, i continually check the initial conditions of these systems as they, for most systems, dictate the response for the rest of the future operations of said systems. Thus for biological systems, are there physical parameters that govern the initial exponential growth rate? Can we model with power laws and logistic curves for coarse-grained behavior? Is Bayesian reasoning biologically plausible at a behavioral level or at a neuronal level? Given that, what are the atomic units that govern these models?

These are just a sampling of initial condition questions i ask myself as i evolve through this process. 

So with that long-winded introduction, and i trust i didn’t lose you oh reader, let’s hop into some specifics.

Birth and Death Rates

The picture from the book depicts basic birth and death loops in the population sector. In the case of these loops, they are generating positive feedback which causes growth. Thus an increase in population P causes an increase in birth rate BR. This, in turn, causes population P to further increase. The positive feedback loop, if left to its own devices, would create an exponentially growing situation. As i said in the first blog and will continue to say, we seem to have started treating exponential growth as a net positive over the years in the technology industry. In the case of basic population dynamics with no constraints, exponential growth is not a net positive outcome.

Once again, why start with simple models? The human mind is phenomenal at perceiving pressures, fears, greed, homeostasis, and other human aspects and characteristics, and at attempting a structure that gives, say, the best fit to a situation, categorizing these as attributes thereof. However, the human mind is rather poor at predicting dynamical systems behaviors, which is where the models come into play, especially with social interactions and what i am attempting to define from a self-organizing theory standpoint.

The next sets of loops that have the most effect on behavior are a Pollution loop and a Crowding loop. As pollution POL increases, one hopes, up to a point, that nature absorbs and fixes the pollution; otherwise it is a completely positive feedback loop and this, in turn, creates over-pollution, which we are already seeing the effects of around the world. One can then couple this with the amount of crowding humans can tolerate.

Population, Birth Rate, Pollution

We see this behavior in urban sprawl areas when we have extreme heat or extreme cold or, let’s say, extreme pandemics. If the population rises, the crowding ratio increases, the birth rate multiplier declines, and birth rates reduce. The increasing death rate and reducing birth rate are powerful system dynamics stabilizers, coupled with pollution. This in turn obviously has an effect on food supplies. One can easily deduce that these seemingly simple coefficients, if you will, within the relative feedback loops create oscillations, exponential growth, or exponential decay. The systems, while they seem large and rather stable, are very sensitive to slight variations. If you are familiar with NetLogo, it is a great agent-based modeling language. I picked a simple pollution model where we can select the number of people, birthrate, and tree planting rate.

population dynamics with pollution

As you can see, without delving into the specifics, after 77 years it doesn’t look too promising. i’ll either be using Python or NetLogo or a combination of both to extend these models as we add other references; a first taste is sketched below.
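Here is a minimal Python sketch of a birth/death loop with a crude crowding multiplier damping the birth rate; every coefficient is made up for illustration and is not taken from World Dynamics or the NetLogo model:

# toy population loop: positive feedback damped by a crowding multiplier
P = 1000.0              # initial population level
BRN, DRN = 0.04, 0.02   # made-up normal birth and death rates
CAPACITY = 50_000       # made-up crowding limit

for year in range(200):
    crowding = P / CAPACITY
    br = BRN * (1.0 - crowding)   # birth-rate multiplier falls as crowding rises
    P += (br - DRN) * P           # the level P accumulates the net rate
    if year % 40 == 0:
        print(year, round(P))

Left alone the loop grows exponentially; the crowding multiplier bends it into the S-shaped equilibrium the book describes.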

Ok enough for now.

Until Then,

#iwishyouwater

@tctjr

Computing The Human Condition – Project Noumena (Part 1)

“I am putting myself to the fullest possible use, which is all I think any conscious entity can ever hope to do.” ~ HAL 9000

“If you want to make the world a better place take a look at yourself and then make a change.” ~ MJ.

First and foremost with this blog i trust everyone is safe.  The world is in an interesting place, space, and time both physically and dare i say collectively – mentally.

A Laundry List

Introduction

This past week we celebrated Earth Day. i believe i heard it was the 50th year of Earth Day. While I applaud the efforts and longevity, rather than a single day we should have Earth Day every day. Further, just “thoughting” about or tweeting about Earth Day – while it may wake up your posterior lobe of the pituitary gland and secrete some oxytocin, creating the warm fuzzies for you – really doesn’t create an action for furthering Earth Day (much like typing /giphy YAY! in Slack).

As such, i decided to embark on a multipart blog about something i have been “thinking” about, what i call an Ecological Computing System. Then the more i thought about it, why stop at Ecology? We are able to model and connect essentially anything; we now have models for the brain that, while coarse-grained, can account for gross behaviors; we have tons of data on buying habits and advertisement data; and everything is highly mobile and distributed. Machine learning, which can optimize, classify and predict with extremely high dimensionality, is no longer an academic exercise.

Thus, i suppose taking it one step further from ecology, what would differentiate it from other efforts is that <IT> would actually attempt to provide a compute framework that would compute The Human Condition. I am going to call this effort Project Noumena. Kant, the eminent thinker of 18th-century Germany, defined Noumena as a thing as it is in itself, as distinct from a thing as it is knowable by the senses through phenomenal attributes, and proposed that the experience was a product of the mind.

My impetus for this is manifold:

  • i love the air, water, trees, and animals,
  • i am an active water person,
  • i want my children’s children’s children to know the wonder of staring at the azure skies, azure oceans and purple mountains,
  • Maybe technology will assist us in saving us from The Human Condition.

Timing

i have waited probably 15+ years to write about this ideation of such a system, mainly because the technological considerations were nowhere near where they needed to be and, to be extremely transparent, no one seemed to really think it was an issue until recently. The pandemic seems to have been a global wake-up call that, in fact, Humanity is fragile. There are shortages of resources in the most advanced societies. Further, pollution levels appear (reportedly) to be subsiding as a function of the reduction in humans’ daily involvement within the environment. To that point, over the past two years there appears to be an uptake of awareness in how plastics are destroying our oceans. This has a coupling effect: with the pandemic and other environmental concerns there could potentially be a food shortage due to these highly nonlinear effects. This uptake in awareness has mainly been due to the usage of the technology of mobile computing and social media, which in and of itself probably couldn’t have existed without plastics and massive natural resource consumption. So i trust the irony is not lost there.

From a technical perspective, Open Source and Open Source Systems have become the way that software is developed. For those that have not read The Cathedral and The Bazaar and In The Beginning Was The Command Line, i urge you to do so; they will change your perspective.

We are no longer hampered by the concept of scale in computing. We can also create a system that behaves at scale with only but a few human resources.  You can do a lot with few humans now which has been the promise of computing.

Distributed computing methods are now coming to fruition. We no longer think in terms of a monolithic operating system or in-place machine learning. Edge computing and fiber networks are accelerating this at an astonishing rate. Transactions now dictate trust. While we will revisit this during the design chapters of the blog, I’ll go out on a limb here and say these three features are cogent to distributed system processing (and possibly the future of computing at scale):

  • Incentive models
  • Consensus models
  • Protocol models

We will definitely be going into the deeper psychological, mathematical, and technical aspects of these items.

Some additional points of interest on timing: Microsoft recently released press about a Planetary Computer and announced the position of Chief Ecology Officer. While i do not consider Project Noumena to be of the same system type, there could be similarities on the ecological aspects which, just like in open source, create a more resilient base to work from.

The top market cap companies are all information theoretic-based corporations.  Humans that know the science, technology, mathematics and liberal arts are key to their success.  All of these companies are woven and interwoven into the very fabric of our physical and psychological lives.

Thus it is with the confluence of these items i believe the time is now to embark on this design journey.  We must address the Environment, Societal factors and the model of governance.

A mentor once told me one time in a land far away: “Timing is everything as long as you can execute.”  Ergo Timing and Execution Is Everything.

Goals

It is my goal that i can create a design and, hopefully, an implementation that utilizes computational means to truly assist in building models and sampling the world, where we can adhere to goals in making small but meaningful changes that can be used within what i am calling the 3R’s: recycle, redact, reuse. Further, i hope that with the proper incentive models in place that are dynamic, it has a positive feedback effect on mentality. Just as in complexity theory, a small change – a butterfly’s wings – can create hurricanes; in this case a positive effect.

Here is my overall plan. i’m not big on process or Gantt charts. I’ll be putting all of this in a README.md as well. I may ensconce the feature sets etc. into a Trello or some other tracking mechanism to keep me focused – WebSphere? Feel free to make recommendations in the comments section:

Action Items:

  • Create Comparative Models
  • Create Coarse-Grained Attributes
  • Identify underlying technical attributes
  • Attempt to coalesce into an architecture
  • Start writing code for the above.

Preamble

Humanity has come to expect growth as a material extension of human behavior. We equate growth with progress. In fact, we use the term exponential growth as if it is indefinitely positive. In most cases, for a fixed time interval, this means a doubling of the relevant system variable or variables. We speak of growth as a function of gross national production. In most cases exponential growth is treacherous where there are no known or perceived limits. It appears that humanity has only recently become aware that we do not have infinite resources. Psychologically there is a clash between the exponential growth and the psychological or physical limit. The only significance is the relevant (usually local) limit. How does it affect me, us, and them? This can be seen throughout most game theory practices – dominant choice. The pattern of growth is not the surprise; the collision of the awareness of the limit with the ever-increasing growth function is the surprise.
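To put that doubling in symbols (a standard identity, not something from the book): a variable growing at constant rate \(r\) follows

    \[x(t) = x_{0}e^{rt}, \qquad t_{\text{double}} = \frac{\ln 2}{r}\]

so something growing at 7% a year doubles roughly every ten years (\(\ln 2 / 0.07 \approx 9.9\)), whether or not a limit is anywhere in sight.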

One must stop and ask: 

Q: Are progress (and capacity) and the ever-increasing function a positive, and how does it relate to the 2nd law of thermodynamics, aka Entropy? Must it always expand?

We are starting to see that our world can exert dormant forces that within our life can greatly affect our well-being. When we approach the actual or perceived limit, the forces, which are usually negative, begin to gain strength.

So given these aspects of why, i’ll turn now to start the discussion. If we do not understand history we cannot predict the future by inventing it, or in most cases re-inventing it, as it were.

I want to start off the history by referencing several books that i have been reading and re-reading on subjects of modeling the world, complexity, and models for collapse throughout this multipart blog.  We will be addressing issues concerning complex dynamics as are manifested with respect to attributes model types, economics, equality, and mental concerns.  

These core references are located at the end of the blog under references.  They are all hot-linked.  Please go scroll and check them out.  i’ll still be here.  i’ll wait.

Checked them out?  i know a long list. 

As you can see the core is rather extensive due to the nature of the subject matter.  The top three books are the main ones that have been the prime movers and guides of my thinking.  These three books i will refer to as The Core Trilogy:

World Dynamics

The Collapse of Complex Societies 

Six Sources of Collapse 

 As i mentioned i have been deeply thinking about all aspects of this system for quite some time. I will be mentioning several other texts and references along the continuum of creation of this design.

We will start by referencing the first book: World Dynamics by J.W. Forrester. World Dynamics came out of several meetings of the Club of Rome, a 75-person invite-only club founded by the President of Fiat. The club set forth the following attributes for a dynamic model that would attempt to predict the future of the world:

  • Population Growth
  • Capital Investment
  • Geographical Space
  • Natural Resources
  • Pollution
  • Food Production

The output of this design was codified in a computer program called World3. It has been running since the 1970s, what was then termed in many cases a golden age of society. All of these variables have been growing at an exponential rate. Here we see the model with the various attributes in action. There have been several criticisms of the models and also analyses, which i will go into in further blogs. However, in some cases the variants have been eerily accurate. The following plot is an output of the World3 model:

2060 does not look good

Issues Raised By World3 and World Dynamics

The issues raised by World3 and within the book World Dynamics are the following:

  • There is a strong undercurrent that technology might not be the savior of humankind
  • Industrialism (including medicine and public health) may be a more disturbing force than the population.  
  • We may face extreme psychological stress and pressures from a four-pronged dilemma via suppression of the modern industrial world.
  • We may be living in a “golden age” despite a widely acknowledged feeling of malaise.  
  • Exhortations and programs directed at population control may be self-defeating.  Population control, if it works, would yield excesses thereby allowing further procreation.
  • Pollution and Population seem to oscillate whereas the high standard of living increases the production of food and material goods which outrun the population.  Agriculture as it hits a space limit and as natural resources reach a pollution limit then the quality of life falls in equalizing population.
  • There may be no realistic hope of underdeveloped countries reaching the same standard and quality of life as developed countries.  However, with the decline in developed countries, the underdeveloped countries may be equalized by that decline.
  • A society with a high level of industrialization may be unsustainable.  
  • From a long term 100 years hence it may be unwise for underdeveloped countries to seek the same levels of industrialization.  The present underdeveloped nations may be in better conditions for surviving the forthcoming pressures.  These underdeveloped countries would suffer far less in a world collapse.  

Fuzzy Human – Fuzzy Model

The human mind is amazing at identifying structures of complex situations. However, our experiences train us poorly for estimating the dynamic consequences of said complexities.  Our mind is also not very accurate at estimating ad hoc parts of the complexities and the variational outcomes.  

One of the problems with models is, well, it is just a model. The subject-observer reference could shift and the context shifts thereof. This dynamic aspect needs to be built into the models.

Also while we would like to think that our mental model is accurate it is really quite fuzzy and even irrational in most cases.  Also attempting to generalize everything into a singular model parameter is exceedingly difficult.  It is very difficult to transfer one industry model onto another.  

In general parameterization of most of these systems is based on some perceptual model we have rationally or irrationally invented.  

When these models were created there was the consideration of modeling social mechanics of good-evil, greed – altruism, fears, goals, habits, prejudice, homeostasis, and other so-called human characteristics.  We are now at a level of science where we can actually model the synaptic impulse and other aspects that come with these perceptions and emotions.

There is a common cross-cutting construct in most complex models within this text that consists of, and is mainly concerned with, the concept of feedback and how the non-linear relationships of these modeled systems feed back into one another. System-wide thinking permeates the text itself. On a related note, from the 1940’s Dr. Norbert Wiener and others such as Claude Shannon worked on ballistic tracking systems and coupled feedback, both in a cybernetic and an information-theoretic fashion, of which he attributed the concept of feedback as one of the most fundamental operations in information theory. This led to the extremely famous Wiener Estimation Filters. Also, side note: Dr. Wiener was a self-styled pacifist, proving you can hold two very opposing views in the same instance whilst being successful at executing both ideals.

Given that basic function of feedback, let’s look at the principal structures. Essentially the model states there will be levels and rates. Rates are flows that cause levels to change. Levels accumulate the net flow, either additions or subtractions to that level. The various system levels can in aggregate describe the system state at any given time (t). Levels exist in all subsystems of existence. These subsystems, as you will see, include but are not limited to financial, psychological, biological, and economic. The reason i say not limited to is because i also believe there are some yet-to-be-identified subsystems at the quantum level. The differential or rate of flow is controlled by one or more systems. All systems that have some Spatio-temporal manifestation can be represented by using the two variables levels and rates. Thus with respect to the spatial or temporal variables, we can have a dynamic model.

The below picture is the model that grew out of interest from the initial meetings of the Club of Rome. The inaugural meeting, which was the impetus for the model, was held in Bern, Switzerland on June 29, 1970. Each of the levels represents a variable in the previously mentioned major structures. System levels appear as right triangles. Each level is increased or decreased by the respective flow. As previously mentioned on feedback, any closed path through the diagram is a feedback loop. Some of the closed loops, given certain information-theoretic attributes, will be positive feedback loops that generate growth, and others that seek equilibrium will be negative feedback loops. If you notice something about the diagram, it essentially is a birth and death loop: the population loop, if you will. For the benefit of modeling, there are really only two major variables that affect the population: Birth Rate (BR) and Death Rate (DR). They represent the total aggregate rate at which the population is being increased or decreased. The system has coefficients that can initialize them to normal rates. For example, in 1970 BRN is taken as 0.0885 (88.5 per thousand) which is then multiplied by population to determine BR. DRN by the same measure is the outflow or reduction; in 1970 it was 9.5% or 0.095. The difference is the net, and these are called normal rates. The normal rates correspond to a physically normal world: when there are normal levels of food, material standard of living, crowding, and pollution. The influencers are then multipliers that increase or decrease the normal rates.

Feedback and isomorphisms abound
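For the implementation-minded, here is a minimal sketch of that level-and-rate bookkeeping in Python, using the normal rates quoted above with all multipliers held at 1; the starting level is an assumed round number for the 1970 world population, and this is illustrative only, not the actual World3 code:

# level-and-rate integration: the level P accumulates the net flow BR - DR
P = 3.6e9                  # assumed 1970 world population, the level
BRN, DRN = 0.0885, 0.095   # normal birth/death rate coefficients from above
DT = 1.0                   # one-year time step

for year in range(1970, 1976):
    BR = BRN * P           # inflow: births per year (all multipliers = 1)
    DR = DRN * P           # outflow: deaths per year
    P += (BR - DR) * DT    # the level integrates the net rate over DT
    print(year, f"{P:.4e}")

Note that with these two coefficients alone the net flow is slightly negative; in the full model the food, crowding, and pollution multipliers swing BR and DR around these normals.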


As a caveat, there have been some detractors of this model. To be sure it is very coarse-grained; however, while i haven’t seen the latest runs or outputs, it is my understanding, as i said, that the current outputs are close. The criticisms come in the shape of “Well, it’s just modeling everything as \(y = x_{0}e^{rt}\).” I will be using this concept and map, if you will, as the basis for Noumena. The concepts and values as i evolve the system will vary greatly from the World3 model, but i believe starting with a minimum viable product is essential here; as i said, humans are not very good at predicting all of the various outcomes in high dimensional space. We can assess situations very quickly but predict outcomes not so much. Next up we will be delving into the loops deeper and getting loopier.

So this is the first draft if you will as everything nowadays can be considered an evolutionary draft.  

Then again isn’t all of this really just The_Infinite_Human_Do_Loop?

until then,

#iwishyouwater

tctjr

References:

(Note: They are all hotlinked)

World Dynamics

The Collapse of Complex Societies 

Six Sources of Collapse 

Beyond The Limits 

The Limits To Growth 

Thinking In Systems Donella Meadows

Designing Distributed Systems Brendan Burns

Introduction to Distributed Algorithms 

A Pragmatic Introduction to Secure Multi-Party Computation 

Reliable Secure Distributed Programming 

Distributed Algorithms 

Dynamic General Equilibrium Modeling 

Advanced Information Systems Engineering 

Introduction to Dynamic Systems Modeling 

Nonlinear Dynamics and Chaos 

Technological Revolutions and Financial Capital 

Marginalism and Discontinuity 

How Nature Works 

Complexity and The Economy 

Complexity a Guided Tour

Future Shock 

Agent_Zero 

Nudge Theory In Action

The Structure of Scientific Revolutions

Agent-Based Modelling In Economics

Cybernetics

Human Use Of Human Beings

The Technological Society

The Origins Of Order

The Lorax

Blog Muzak: Brian and Roger Eno: Mixing Colours