Expose yourself to as much randomness as possible.
~ Ben Casnocha
First, i trust everyone is safe.
Second, it is WEDNESDAY and that must mean a Snake_Byte – or you are working in a startup, because every day is WEDNESDAY in a startup!
i almost didn't get this one done because, well, life happens, but i want to remain true to the goals herewith to the best of my ability.
So in today’s Snake_Byte we are going to cover Random and PseudoRandom Numbers. i really liked this one because it was more in line with scientific computing and numerical optimization.
The random module in Python generates what are called pseudorandom numbers. It is, in the vernacular, a pseudorandom number generator (PRNG). The module supports different types of distributions for said numbers.
So what is a pseudorandom number?
“A pseudorandom number generator (PRNG), also known as a deterministic random bit generator, is an algorithm for generating a sequence of numbers whose properties approximate the properties of sequences of random numbers.” ~ Wikipedia
The important aspect here is: the properties approximate sequences of random numbers. So this means that the output is statistically random even though it was generated by a deterministic process.
While i have used the random module and have even written various random number algorithms, i learned something new for this blog. The pseudorandom number generator in Python uses an algorithm called the Mersenne Twister. The period of said algorithm is 2**19937-1 for the 32-bit version, and there is also a 64-bit version. The underlying implementation in C is both fast and thread-safe. The Mersenne Twister is one of the most extensively tested random number generators in existence. One issue, though, is that due to the deterministic nature of the algorithm it is not suitable for cryptographic purposes.
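To see that determinism in action, here is a minimal sketch (standard library only): seeding the generator with the same value reproduces the exact same "random" sequence, which is precisely why a PRNG on its own is unsuitable for cryptography.

import random

# seed the Mersenne Twister with a fixed value
random.seed(42)
first_run = [random.random() for _ in range(3)]

# re-seed with the same value - the "random" sequence repeats exactly
random.seed(42)
second_run = [random.random() for _ in range(3)]

print(first_run == second_run)  # True - deterministic under the hood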
Let us delve into some code for the various random module offerings, shall we?
i like using %system in Jupyter Lab to create an interactive session. First we import random. Let's look at random.random(), which returns a float drawn uniformly from [0.0, 1.0); multiplying by an integer scales it into that range:
%system
import random

# five uniform floats scaled into [0, 100)
for i in range(5):
    x = random.random() * 100
    print(x)
Next let us look at random.randrange(start, stop[, step]), which returns a randomly selected element from range(start, stop, step). This is equivalent to choice(range(start, stop, step)) but doesn't actually build a range object.
Parameter   Description
start       Optional. An integer specifying at which position to start. Default 0.
stop        Required. An integer specifying at which position to end.
step        Optional. An integer specifying the incrementation. Default 1.

random.randrange() parameters
for i in range(5):
    print(random.randrange(10, 100, 1))  # a random integer from range(10, 100)

84
21
94
91
87
Now let us move on to some calls that you would use in signal processing, statistics, or machine learning. The first one is gauss(). gauss(mu, sigma) returns a random float drawn from a Gaussian distribution, whose probability density function is:

f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
Gaussian distribution (also known as the normal distribution) is a bell-shaped curve (aka the bell curve); it is assumed that during any measurement, values will follow a normal distribution with an equal number of measurements above and below the mean value.
Parameter   Description
mu          the mean
sigma       the standard deviation
returns     a random Gaussian-distributed floating-point number

gauss() parameters
# import the required libraries
import random
import matplotlib.pyplot as plt

# set the inline magic
%matplotlib inline

# store the random numbers in a list
nums = []
mu = 100
sigma = 50

for i in range(100000):
    temp = random.gauss(mu, sigma)
    nums.append(temp)

# plot the distribution
plt.hist(nums, bins=500, ec="red")
plt.show()
There are several more functions in the random module – seeding and state-setting functions and more complex statistical distributions. Hit Stack Overflow and give it a try! Also it doesn't hurt if you dust off that probability and statistics textbook!
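As a quick taste before you go, here is a hedged sketch of a few of those offerings – seed() for reproducibility, getstate()/setstate() for checkpointing the generator, and a couple of the other distributions (all standard library calls):

import random

random.seed(2020)                 # reproducible runs
print(random.uniform(5, 10))      # uniform float in [5, 10]
print(random.expovariate(1/50))   # exponential distribution with mean ~50

state = random.getstate()         # checkpoint the generator state
a = random.random()
random.setstate(state)            # rewind to the checkpoint
b = random.random()
print(a == b)                     # True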
As a last thought: which came first, the framework of entropy or the framework of randomness? And is everything truly random? i would love to hear your thoughts in the comments!
M. Matsumoto and T. Nishimura, "Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator", ACM Transactions on Modeling and Computer Simulation, Vol. 8, No. 1, January 1998, pp. 3–30.
Muzak To Blog By: Black Sabbath – The End: Live In Birmingham
Someone asked me if from now on my blog would only be about Project_Noumena – on the contrary.
I will be interspersing subject matter within Parts 1 to (N) of Project_Noumena. To be transparent, at this juncture i am not sure where it will end or if there is even a logical MVP 1.0. As with open-source systems and frameworks, technically one never achieves V1.0 as the systems evolve. i tend to believe this will be the case with Project Noumena. i recently provided a book review on CaTB and have a blog on Recurrent Neural Networks with respect to Multiple Time Scale Prediction in the works, so stuff is proceeding.
To that end, i would love comments and suggestions as to anything you would like my opinion on or for me to write about in the comments section. Also feel free to call me out on typos or anything else you see in error.
Further within Project Noumena there are snippets that could be shorter blogs as well. Look at Project Noumena as a fractal-based system.
Now on to the matter at hand.
In the previous blog, Computing The Human_Condition – Project Noumena (Part 1), i discussed the initial overview of the model from the book World Dynamics. i will take a part of that model, which is what i call the main Human_Do_Loop(), and the main attributes of the model: Birth and Death of Humans. One must ask: if we didn't have humans, would we have to be concerned with such matters as societal collapse? i don't believe animals are concerned with such existential crises, so my answer is a resounding – NO. We won't dwell on such existential issues in this blog, although i will address them in future writings.
Over the years i have been asking myself: is this a biological model by definition? Meaning, do we have cellular components involved only? Is this biological modeling at its very essence? If we took the cell-based organisms out of the equation, what do we still have as far as models on Earth?
While i told myself i wouldn't get too existential here, and i do want to focus on the models and then the codebases, i continually check the initial conditions of these systems, as for most systems they dictate the response for the rest of the future operations of said systems. Thus for biological systems: are there physical parameters that govern the initial exponential growth rate? Can we model coarse-grained behavior with power laws and logistic curves? Is Bayesian reasoning biologically plausible at a behavioral level or at a neuronal level? Given all that, what are the atomic units that govern these models?
These are just a sampling of initial condition questions i ask myself as i evolve through this process.
So with that long-winded introduction, and i trust i didn't lose you, oh reader, let's hop into some specifics.
The picture from the book depicts basic birth and death loops in the population sector. In the case of these loops, they are generating positive feedback, which causes growth. Thus an increase in population P causes an increase in birth rate BR. This, in turn, causes population P to further increase. The positive feedback loop, if left to its own devices, would create an exponentially growing situation. As i said in the first blog and will continue to say, we seem to have started treating exponential growth as a net positive over the years in the technology industry. In the case of basic population dynamics with no constraints, exponential growth is not a net positive outcome. A minimal sketch of that loop follows below.
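To make the positive feedback concrete, here is a toy Python sketch (illustrative rates of my own choosing, not Forrester's coefficients): the population grows by a fixed birth rate and shrinks by a fixed death rate each year, and with BR greater than DR the loop runs away exponentially.

# toy birth/death loop - illustrative rates, not the World3 coefficients
population = 1_000
birth_rate = 0.03   # births per person per year
death_rate = 0.01   # deaths per person per year

for year in range(101):
    if year % 10 == 0:
        print(f"year {year:3d}: population ~ {population:,.0f}")
    births = birth_rate * population   # positive feedback: more people, more births
    deaths = death_rate * population
    population += births - deaths      # net 2% growth per year - exponential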
Once again, why start with simple models? The human mind is phenomenal at perceiving pressures, fears, greed, homeostasis, and other human aspects and characteristics, and at attempting a structure that gives, say, the best fit to a situation and categorizing these as attributes thereof. However, the human mind is rather poor at predicting dynamical systems behaviors, which is where the models come into play, especially with social interactions and what i am attempting to define from a self-organizing theory standpoint.
The next sets of loops that have the most effect on behavior are a Pollution loop and a Crowding loop. If we note that pollution POL increases, one can assume up to a point that nature absorbs and fixes the pollution; otherwise it is a completely positive feedback loop, and this, in turn, creates over-pollution, the effects of which we are already seeing around the world. One can then couple this with the amount of crowding humans can tolerate.
We see this behavior in urban sprawl areas when we have extreme heat or extreme cold or, let's say, extreme pandemics. If the population rises, the crowding ratio increases, the birth rate multiplier declines, and birth rates fall. The increasing death rate and the declining birth rate are powerful system-dynamics stabilizers, coupled with pollution. This in turn obviously has an effect on food supplies. One can easily deduce that these seemingly simple coefficients, if you will, within the relative feedback loops create oscillations, exponential growth, or exponential decay. The systems, while they seem large and rather stable, are very sensitive to slight variations. If you are familiar with NetLogo, it is a great agent-based modeling language. I picked a simple pollution model where we can select the number of people, birthrate, and tree planting rate.
As you can see, without delving into the specifics, after 77 years it doesn't look too promising. i'll either be using Python or NetLogo or a combination of both to extend these models as we add other references; a rough sketch of how such coupled loops behave follows below.
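In the meantime, here is a hedged Python sketch of the stabilizing side of the story (my own illustrative multipliers, not the NetLogo or World3 coefficients): a crowding ratio suppresses the birth-rate multiplier as population approaches an assumed capacity, so the loop settles toward equilibrium instead of running away.

# toy crowding feedback - illustrative values only
population = 1_000
capacity = 50_000            # assumed crowding limit
birth_rate_normal = 0.04
death_rate_normal = 0.01

for decade in range(10):
    for _ in range(10):      # ten one-year steps per printed decade
        crowding_ratio = population / capacity
        birth_multiplier = max(0.0, 1.0 - crowding_ratio)  # crowding suppresses births
        births = birth_rate_normal * birth_multiplier * population
        deaths = death_rate_normal * population
        population += births - deaths
    print(f"decade {decade}: population ~ {population:,.0f}")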
“I am putting myself to the fullest possible use, which is all I think any conscious entity can ever hope to do.” ~ HAL 9000
“If you want to make the world a better place take a look at yourself and then make a change.” ~ MJ.
First and foremost with this blog, i trust everyone is safe. The world is in an interesting place, space, and time, both physically and, dare i say, collectively – mentally.
Introduction
This past week we celebrated Earth Day. i believe i heard it was the 50th year of Earth Day. While i applaud the efforts and longevity of such a day, we should have Earth Day every day. Further, just "thoughting" about or tweeting about Earth Day – while it may wake up your posterior lobe of the pituitary gland and secrete some oxytocin, creating the warm fuzzies for you – really doesn't create an action for furthering Earth Day (much like typing /giphy YAY! in Slack).
As such, i decided to embark on a multipart blog that i have been "thinking" about, on what i call an Ecological Computing System. Then the more i thought about it: why stop at Ecology? We are able to model and connect essentially anything; we now have models for the brain that, while coarse-grained, can account for gross behaviors; we have tons of data on buying habits and advertisement data; and everything is highly mobile and distributed. Machine learning, which can optimize, classify, and predict with extremely high dimensionality, is no longer an academic exercise.
Thus, i suppose, taking it one step further from ecology, what would differentiate it from other efforts is that <IT> would actually attempt to provide a compute framework that would compute The Human Condition. I am going to call this effort Project Noumena. Kant, the eminent thinker of 18th-century Germany, defined Noumena as things as they are in themselves, as distinct from things as they are knowable by the senses through phenomenal attributes, and proposed that experience was a product of the mind.
My impetus for this is manifold:
i love the air, water, trees, and animals,
i am an active water person,
i want my children’s children’s children to know the wonder of staring at the azure skies, azure oceans and purple mountains,
Maybe technology will assist us in saving us from The Human Condition.
Timing
i have waited probably 15+ years to write about this ideation of such a system, mainly because the technological considerations were nowhere near where they needed to be, and, to be extremely transparent, no one seemed to really think it was an issue until recently. The pandemic seems to have been a global wakeup call that, in fact, Humanity is fragile. There are shortages of resources in the most advanced societies. Further, pollution levels appear (as reported) to be subsiding as a function of the reduction in humans' daily involvement within the environment. To that point, over the past two years there appears to have been an uptick of awareness in how plastics are destroying our oceans. This has a coupling effect: with the pandemic and other environmental concerns there could potentially be a food shortage due to these highly nonlinear effects. This uptick in awareness has mainly been due to the usage of mobile computing and social media, which in and of themselves probably couldn't have existed without plastics and massive natural resource consumption. So i trust the irony is not lost there.
From a technical perspective, Open Source and Open Source Systems have become the way that software is developed. For those that have not read The Cathedral and The Bazaar and In The Beginning Was The Command Line, i urge you to do so; they will change your perspective.
We are no longer hampered by the concept of scale in computing. We can also create a system that behaves at scale with but a few human resources. You can do a lot with few humans now, which has been the promise of computing.
Distributed computing methods are now coming to fruition. We no longer think in terms of a monolithic operating system or in-place machine learning. Edge computing and fiber networks are accelerating this at an astonishing rate. Transactions now dictate trust. While we will revisit this during the design chapters of the blog, I'll go out on a limb here and say these three features are cogent to distributed system processing (and possibly the future of computing at scale):
Incentive models
Consensus models
Protocol models
We will definitely be going into the deeper psychological, mathematical, and technical aspects of these items.
Some additional points of interest on timing. Microsoft recently released press about a Planetary Computer and announced the position of Chief Ecology Officer. While i do not consider Project Noumena to be of the same system type, there could be similarities on the ecological aspects, which, just like in open source, creates a more resilient base to work from.
The top market cap companies are all information theoretic-based corporations. Humans that know the science, technology, mathematics and liberal arts are key to their success. All of these companies are woven and interwoven into the very fabric of our physical and psychological lives.
Thus it is with the confluence of these items that i believe the time is now to embark on this design journey. We must address the Environment, Societal factors, and the model of governance.
A mentor told me one time in a land far away: "Timing is everything as long as you can execute." Ergo, Timing and Execution Is Everything.
Goals
It is my goal to create a design and, hopefully, an implementation that utilizes computational means to truly assist in building models and sampling the world, where we can adhere to goals in making small but meaningful changes that can be used within what i am calling the 3R's: recycle, redact, reuse. Further, i hope that with the proper incentive models in place, ones that are dynamic, it has a positive feedback effect on mentality. Just as in complexity theory, a small change – a butterfly's wings – can create hurricanes – in this case, a positive effect.
Here is my overall plan. i'm not big on process or Gantt charts. I'll be putting all of this in a README.md as well. I may ensconce the feature sets etc. into a Trello board or some other tracking mechanism to keep me focused – feel free to make recommendations in the comments section:
Action Items:
Create Comparative Models
Create Coarse-Grained Attributes
Identify underlying technical attributes
Attempt to coalesce into an architecture
Start writing code for the above.
Preamble
Humanity has come to expect growth as a material extension of human behavior. We equate growth with progress. In fact, we use the term exponential growth as if it were indefinitely positive. In most cases, for a fixed time interval, this means a doubling of the relevant system variable or variables. We speak of growth as a function of gross national production. In most cases, exponential growth is treacherous where there are no known or perceived limits. It appears that humanity has only recently become aware that we do not have infinite resources. Psychologically there is a clash between the exponential growth and the psychological or physical limit. The only significance is the relevant (usually local) limit: how does it affect me, us, and them? This can be seen throughout most game theory practices – dominant choice. The pattern of growth is not the surprise; the surprise is the collision of the ever-increasing growth function with the awareness of the limit.
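To put one number on that intuition: exponential growth has a fixed doubling time. If a quantity grows at a constant rate r, then

x(t) = x_0 e^{rt}, \qquad t_{\text{double}} = \frac{\ln 2}{r}

so, for example, 2% yearly growth doubles the quantity roughly every ln(2)/0.02 ≈ 35 years, no matter how large it already is – which is exactly why the collision with the limit feels so sudden.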
One must stop and ask:
Q: Are progress (and capacity) and the ever-increasing function a positive, and how do they relate to the 2nd law of thermodynamics, aka entropy? Must it always expand?
We are starting to see that our world can exert dormant forces that within our lifetime can greatly affect our well-being. When we approach the actual or perceived limit, the forces, which are usually negative, begin to gain strength.
So given these aspects of the why, i'll turn now to start the discussion. If we do not understand history, we cannot predict the future by inventing it, or in most cases re-inventing it, as it were.
I want to start off the history by referencing several books that i have been reading and re-reading on subjects of modeling the world, complexity, and models for collapse throughout this multipart blog. We will be addressing issues concerning complex dynamics as they are manifested with respect to model types and attributes, economics, equality, and mental concerns.
These core references are located at the end of the blog under references. They are all hot-linked. Please go scroll and check them out. i’ll still be here. i’ll wait.
Checked them out? i know, a long list.
As you can see the core is rather extensive due to the nature of the subject matter. The top three books are the main ones that have been the prime movers and guides of my thinking. These three books i will refer to as The Core Trilogy:
As i mentioned i have been deeply thinking about all aspects of this system for quite some time. I will be mentioning several other texts and references along the continuum of creation of this design.
We will start by referencing the first book: World Dynamics by J.W. Forrester. World Dynamics came out of several meetings of the Club of Rome, a 75-person invite-only club founded by the President of Fiat. The club set forth the following attributes for a dynamic model that would attempt to predict the future of the world:
Population Growth
Capital Investment
Geographical Space
Natural Resources
Pollution
Food Production
The output of this design was codified in a computer program called World3. It has been running since the 1970s, what was then termed in many cases a golden age of society. All of these variables have been growing at an exponential rate. Here we see the model with the various attributes in action. There have been several criticisms of the models, and also analyses, which i will go into in further blogs. However, in some cases the variants have been eerily accurate. The following plot is an output of the World3 model:
Issues Raised By World3 and World Dynamics
The issues raised by World3 and within the book World Dynamics are the following:
There is a strong undercurrent that technology might not be the savior of humankind
Industrialism (including medicine and public health) may be a more disturbing force than the population.
We may face extreme psychological stress and pressures from a four-pronged dilemma via suppression of the modern industrial world.
We may be living in a “golden age” despite a widely acknowledged feeling of malaise.
Exhortations and programs directed at population control may be self-defeating. Population control, if it works, would yield excesses, thereby allowing further procreation.
Pollution and population seem to oscillate, whereas the high standard of living increases the production of food and material goods, which outrun the population. As agriculture hits a space limit and natural resources reach a pollution limit, the quality of life falls, equalizing the population.
There may be no realistic hope of underdeveloped countries reaching the same standard and quality of life as developed countries. However, with the decline in developed countries, the underdeveloped countries may be equalized by that decline.
A society with a high level of industrialization may be unsustainable.
From a long-term view, 100 years hence, it may be unwise for underdeveloped countries to seek the same levels of industrialization. The present underdeveloped nations may be in better condition for surviving the forthcoming pressures. These underdeveloped countries would suffer far less in a world collapse.
Fuzzy Human – Fuzzy Model
The human mind is amazing at identifying structures in complex situations. However, our experiences train us poorly for estimating the dynamic consequences of said complexities. Our mind is also not very accurate at estimating ad hoc parts of the complexities and the variational outcomes.
One of the problems with models is, well, it is just a model. The subject-observer reference could shift, and the context shifts thereof. This dynamic aspect needs to be built into the models.
Also, while we would like to think that our mental model is accurate, it is really quite fuzzy and even irrational in most cases. Attempting to generalize everything into a singular model parameter is also exceedingly difficult. It is very difficult to transfer one industry's model onto another.
In general, parameterization of most of these systems is based on some perceptual model we have rationally or irrationally invented.
When these models were created, there was the consideration of modeling the social mechanics of good-evil, greed-altruism, fears, goals, habits, prejudice, homeostasis, and other so-called human characteristics. We are now at a level of science where we can actually model the synaptic impulse and other aspects that come with these perceptions and emotions.
There is a common cross-cutting construct in most complex models within this text: the concept of feedback, and how the non-linear relationships of these modeled systems feed back into one another. System-wide thinking permeates the text itself. On a related note, from the 1940s Dr. Norbert Wiener and others such as Claude Shannon worked on ballistic tracking systems and coupled feedback, in both a cybernetic and an information-theoretic fashion, and Wiener regarded feedback as one of the most fundamental operations in information theory. This led to the extremely famous Wiener estimation filters. Also, side note: Dr. Wiener was a self-styled pacifist, proving you can hold two very opposing views in the same instance whilst being successful at executing both ideals.
Given that basic function of feedback, let's look at the principal structures. Essentially the model states there will be levels and rates. Rates are flows that cause levels to change. Levels accumulate the net flow, either additions to or subtractions from that level. The various system levels can in aggregate describe the system state at any given time. Levels exist in all subsystems of existence. These subsystems, as you will see, include but are not limited to financial, psychological, biological, and economic. The reason i say not limited to is that i also believe there are some yet-to-be-identified subsystems at the quantum level. The differential, or rate of flow, is controlled by one or more parts of the system. All systems that have some spatio-temporal manifestation can be represented by using the two variables, levels and rates. Thus with respect to the spatial or temporal variables, we can have a dynamic model.
The below picture is the model that grew out of interest from the initial meetings of the Club of Rome. The inaugural meeting, which was the impetus for the model, was held in Bern, Switzerland on June 29, 1970. Each of the levels represents a variable in the previously mentioned major structures. System levels appear as right triangles. Each level is increased or decreased by its respective flow. As previously mentioned regarding feedback, any closed path through the diagram is a feedback loop. Some of the closed loops, given certain information-theoretic attributes, will be positive feedback loops that generate growth, and others that seek equilibrium will be negative feedback loops. If you notice something about the diagram, it essentially is a birth and death loop – the population loop, if you will. For the benefit of modeling, there are really only two major variables that affect the population: Birth Rate (BR) and Death Rate (DR). They represent the total aggregate rate at which the population is being increased or decreased. The system has coefficients that can initialize them to normal rates. For example, in 1970 BRN is taken as 0.0885 (88.5 per thousand), which is then multiplied by population to determine BR. DRN by the same measure is the outflow or reduction; in 1970 it was 9.5% or 0.095. The difference is the net, and these are called the normal rates. The normal rates correspond to a physically normal world: when there are normal levels of food, material standard of living, crowding, and pollution. The influencers are then multipliers that increase or decrease the normal rates. A toy code rendering of this loop follows below.
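To ground that description, here is a minimal Python sketch of just this population loop, using the 1970 normal rates quoted above and placeholder multipliers pinned at 1.0 standing in for the food, crowding, material, and pollution influences – a toy rendering of the structure, not the actual World3 code. (Note that with the multipliers at 1.0 these particular normal rates give a slight net decline; in the full model it is the multipliers that move the balance.)

# toy rendering of the World Dynamics population loop structure
BRN = 0.0885   # birth rate normal (88.5 per thousand), as quoted above
DRN = 0.095    # death rate normal, as quoted above

def step(population, br_mult=1.0, dr_mult=1.0):
    """One time step: the rates (flows) change the level (population)."""
    br = BRN * br_mult * population   # inflow: birth rate BR
    dr = DRN * dr_mult * population   # outflow: death rate DR
    return population + br - dr       # the level accumulates the net flow

p = 3.6e9   # rough 1970 world population
for year in range(1970, 1980):
    print(year, f"{p:.3e}")
    p = step(p)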
As a caveat, there have been some detractors of this model. To be sure, it is very coarse-grained; however, while i haven't seen the latest runs or outputs, it is my understanding, as i said, that the current outputs are close. The criticisms come in the shape of "Well, it's just modeling everything as a …". I will be using this concept and map, if you will, as the basis for Noumena. The concepts and values, as i evolve the system, will vary greatly from the World3 model, but i believe starting with a minimum viable product is essential here; as i said, humans are not very good at predicting all of the various outcomes in high dimensional space. We can assess situations very quickly, but probable outcomes not so much. Next up we will be delving into the loops deeper and getting loopier.
So this is the first draft, if you will, as everything nowadays can be considered an evolutionary draft.