SnakeByte[19] – Software As A Religion

Religion is regarded by the common people as true, by the wise as false, and by the rulers as useful.

Lucius Annaeus Seneca

Dalle’s Idea of Religion

First, as always, i hope everyone is safe, Oh Dear Readers. Secondly, i am going to write about something i have been pondering for quite some time, probably close to two decades.

What i call Religion Of Warez (ROWZ).

This involves someone who i hold in the highest regard YOU the esteemed developer.

Marc Andreessen famously said, “Software Is Eating The World”. Here is the original blog:

“Why Software Is Eating The World” by Marc Andreessen

There is a war going on for your attention. There is a war going on for your thumb typing. There is a war going on for your viewership. There is a war going on for your selfies. There is a war going on for your emoticons. There is a war going on for github pull requests.

There is a war going on for the output of your prompting.

We have entered The Great Cognitive Artificial Intelligence Arms Race (TGCAIAR) via competing camps of Large Language Model (foundation model) creators.

The ability to deploy the warez needed to wage war on YOU, Oh Dear Reader, is much more complex from an ideological perspective. i speculate that Software, if i may use that term as an entity, is a non-theistic religion. Even within the Main Tabernacle of Software (MTOS), there are various fissures of said religion, whether it be languages, architectures, or processes.


A great blog can be found here -> Software Development: It’s a Religion.

Let us head over to the LazyWebTM and do a cursory search and see what we find[1] concerning some comparison numbers for religions and software languages.

Going to Wikipedia, we find:

According to some estimates, there are roughly 4,200 religions, churches, denominations, religious bodies, faith groups, tribes, cultures, movements, ultimate concerns, which at some point in the future will be countless.

Wikipedia

Worldwide, more than eight-in-ten people identify with a religious group. i suppose even though we don’t like to be categorized, we like to be categorized as belonging to a particular sect. Here is a telling graphic:

Let us map this to just computer languages. Just how many computer languages are there? i guessed 6,000 in aggregate. There are about 700 main programming languages, including esoteric coding languages. From what i can ascertain, lists that include only notable languages add up to 245. Another list called HOPL, which claims to include every programming language ever to exist, puts the total number of programming languages at 8,945.

So i wasn’t that far off.

Why so much kerfuffle over languages? For those who have ever had a language discussion, did it feel like you were discussing religion? Hmmmm?

Hey, my language does automatic heap management. Why are you messing with memory allocation via these dumb things called pointers?

The Art of Computer Programming is mapping an idea into a binary computational translation (classical computing rates apply). This process is highly inefficient compared to having binary-to-binary discussions[2]. Note we are not even considering architectures or methods in this mapping. Let us keep it at English to binary representation. What is the dimensionality reduction for that mapping? What is lost in translation?

For reference, i found a very precise and well-written blog here -> How Much Code Has Ever Been Written?

The calculation involves the number of lines of code ever written up to that point sans the exponential rate from the past two years:

2,781,000,000,000

Roughly 2.8 Trillion Lines of Code have been written in the past 20 years.

Sage McEnery 2020

As i always like to do, i refer to the Merriam-Webster Dictionary. It holds a soft spot in my heartstrings, as i used to read it in grade school. (Yes, i read the dictionary…)

Religion: Noun

re·​li·​gion (ruh·li·jen)

: a cause, principle, or system of beliefs held to with ardor and faith

Well, now, Dear Reader, the proverbial plot thickens. A System of Beliefs held to with ardor and faith. Nowadays, religion is utilized as a concept applied to a genus of social formations that includes several members, a type of which there are many tokens or facets.

If this is, in fact, the case, I will venture to say that Software could be considered a Religion.

One must then ask: Is there “a model” to the madness? Do we go the route of the core religions? Would we dare say the Belief System Of The Warez[3] should be included as a prominent religion?

Symbols Of The World Religions

I have said several times and will continue to say that Software is one of the greatest human endeavors of all time. It is at the essence of ideas incarnate.

It has been said that if you adopt the science, you adopt the ideology. Such hatred or fear of science has always been justified in the name of some ideology or other.

If we take this as the undertone for many new aspects of software, we see that the continuum of mind varies within the perception of the universe by which we are affected by said software. It is extremely canonical and first order.

Most often, we anthropomorphize things, and our software is no exception. It is as though it were an entity or even a thing in the most straightforward cases. It is, in fact, neither. It is just information imputed upon our minds via probabilistic models trained with non-convex optimization methods. It is as if it were a Rorschach test that allowed many people to project their own meaning onto it (sound familiar?).

Let me say this a different way. With the advent of ChatGPT, we seem to desire IT to be alive, or to reason somehow, someway, yet we don’t want it to turn into The Terminator.

Stock market predictions – YES

Terminator – NO.

The Thou Shalts Will Kill You

~ Joseph Campbell

We are now very quickly entering a time where we have “agentic” large language models that can be scripted for specific tasks and then chained together to perform multiple tasks.

Now we have large language models distilling information gleaned from other LLMs. Whose peanut butter is in the chocolate? Is there a limit to growth here for information? Asymptotic token computation, if you will?

We are nowhere near the end of writing the Religion Of Warez (ROWZ) sacred texts compared to the Bible, the Sutras, the Vedas, the Upanishads, the Bhagavad Gita, the Quran, the Agamas, the Torah, the Tao Te Ching, or the Avesta, even the Satanic Bible. My apologies if i left your special tome out; it wasn’t on purpose. i could have listed thousands. BTW, for reference, there is even a religion called the Partridge Family Temple. The cult’s members believe the characters are archetypal gods and goddesses.

In fact, we have just begun to author the Religion Of Warez (ROWZ) sacred text. The next chapters are going to be accelerated and written via generative adversarial networks, stable diffusion, and reinforcement-learning transformer technologies.

Which, then, one must ask: which Deity are YOU going to choose?

i wrote a stupid little python script to show the relationships between the main coding languages based on their release dates. Simple key-value stuff. All hail the gods K&R for creating C.

import networkx as nx
import matplotlib.pyplot as plt

def create_language_graph():
    G = nx.DiGraph()
    
    # Nodes (Programming languages with their release years)
    languages = {
        "Fortran": 1957, "Lisp": 1958, "COBOL": 1959, "ALGOL": 1960,
        "C": 1972, "Smalltalk": 1972, "Prolog": 1972, "ML": 1973,
        "Pascal": 1970, "Scheme": 1975, "Ada": 1980, "C++": 1983,
        "Objective-C": 1984, "Perl": 1987, "Haskell": 1990, "Python": 1991,
        "Ruby": 1995, "Java": 1995, "JavaScript": 1995, "PHP": 1995,
        "C#": 2000, "Scala": 2003, "Go": 2009, "Rust": 2010,
        "Common Lisp": 1984
    }
    
    # Adding nodes
    for lang, year in languages.items():
        G.add_node(lang, year=year)
    
    # Directed edges (influences between languages)
    edges = [
        ("Fortran", "C"), ("Lisp", "Scheme"), ("Lisp", "Common Lisp"),
        ("ALGOL", "Pascal"), ("ALGOL", "C"), ("C", "C++"), ("C", "Objective-C"),
        ("C", "Go"), ("C", "Rust"), ("Smalltalk", "Objective-C"),
        ("C++", "Java"), ("C++", "C#"), ("ML", "Haskell"), ("ML", "Scala"),
        ("Scheme", "JavaScript"), ("Perl", "PHP"), ("Python", "Ruby"),
        ("Python", "Go"), ("Java", "Scala"), ("Java", "C#"), ("JavaScript", "Rust")
    ]
    
    # Adding edges
    G.add_edges_from(edges)
    
    return G

def visualize_graph(G):
    plt.figure(figsize=(12, 8))
    pos = nx.spring_layout(G, seed=42)
    years = nx.get_node_attributes(G, 'year')
    
    # Color nodes based on their release year
    node_colors = [plt.cm.viridis((years[node] - 1950) / 70) for node in G.nodes]
    
    nx.draw(G, pos, with_labels=True, node_color=node_colors, edge_color='gray', 
            node_size=3000, font_size=10, font_weight='bold', arrows=True)
    
    plt.title("Programming Language Influence Graph")
    plt.show()

if __name__ == "__main__":
    G = create_language_graph()
    visualize_graph(G)

Programming Relationship Diagram

So, folks, let me know what you think. I am considering authoring a much longer paper comparing behaviors, taxonomies and the relationship between religions and software.

i would like to know if you think this would be a worthwhile piece.

Until Then,

#iwishyouwater <- Banzai Pipeline January 2023. Amazing.

@tctjr

MUZAK TO BLOG BY: Baroque Ensemble Of Vienna – “Classical Legends of Baroque”. i truly believe i was born in the wrong century when i listen to this level of music. Candidly, J.S. Bach is by far my favorite composer, going back to when i was in 3rd grade. BRAVO! Stupendum Perficientur!

[1] Ever notice that searching is not finding? i prefer finding. Someone needs to trademark “Finding Not Searching”, in the same vein as catching ain’t fishing.

[2] Great paper from OpenAI on just this subject: two agents having a discussion (via reinforcement learning) : https://openai.com/blog/learning-to-communicate/ (more technical paper click HERE)

[3] For a great read, i refer you to The Ware Tetralogy by Rudy Rucker: Software (1982), Wetware (1988), Freeware (1997), Realware (2000)

[4] When the words “software” and “engineering” were first put together [Naur and Randell 1968], it was not clear exactly what the marriage of the two into the newly minted term really meant. Some people understood that the term would probably come to be defined by what our community did and what the world made of it. Since those days in the late 1960s, a spectrum of research and practice has been collected under the term.

What Is Your Eulogy? (Memento Mori – Memento Vivere)

Dalle’s Idea of a Crypt Monument

One life on this earth is all that we get, whether it is enough or not enough, and the obvious conclusion would seem to be that at the very least we are fools if we do not live it as fully and bravely and beautifully as we can.

Frederick Buechner

First, as always, i trust everyone is safe. Second, i trust everyone had an amazing holiday with family and friends and hopefully did something “screen-free”. It is the start of a new year.

i am changing gears just a little and writing on a subject that, at first blush, might appear morose, yet it is not. In fact, quite the opposite.

What Is Your Eulogy?

Yep i went THERE. (Ever notice that once you arrive, you are there and think about somewhere else?)

If you go to my About page, you will see that I set this site up mainly to be a memory machine for me and a digital reference for My Family and Friends. In addition, if, along the way, i entertain someone on the WorldWideWait(tm), all the better. A reference for a future memory, if you will.

I am taking complete editorial advantage of paying the AWS bill every month, and there is a “.org” at the end of the site name denoting a not-for-profit site supposedly like a religion. i can say what i want, i suppose—well, still within reason nowadays. Free Speech, They Said… Yet, I digress.

I will persist until I succeed.

I was not delivered unto this world in defeat, nor does failure course in my veins. I am not a sheep waiting to be prodded by my shepherd. I am a lion and I refuse to talk, to walk, to sleep with the sheep. I will hear not those who weep and complain, for their disease is contagious. Let them join the sheep. The slaughterhouse of failure is not my destiny.

I will persist until I succeed.

~ Og Mandino

For context, this subject matter was initiated by the confluence of several disparate events:

  1. i introduced one of my progeny to Mozart’s Requiem in D minor, K. 626, aka Lacrimosa. We discussed the word Requiem, and then she immediately informed me that Lacrimosa means Sorrow in Latin and that the piece is in the key of D minor. Wow, thank you, i said. (Maybe something is sticking…)
  2. An old friend whom I hadn’t seen in years passed away the day after I emailed him. I had contacted him to discuss some audio subject material that I enjoyed speaking with him about in detail. Alas, another cancer victim.
  3. i took a class put on by Matthew McConaughey and Tony Robbins called “The Art of Living”, and the book The Greatest Salesman in the World by Og Mandino was featured in class.
  4. i took yet another class from the amazing Flow Research Collective Group. You can read a review here.
  5. Since I started this piece, even more humans who are dear to me have passed or received extremely dire news.
  6. i just wanted to scribe these thoughts in order to “remind me to remember”.

Life Should be One Great Adventure or Nothing.

Helen Keller

So here we go… it is tl;dr fo’ sho’.

In one of the aforementioned classes, the subject matter was the title of this blog. I originally had planned to call this blog “Do Not Be Awed Into Submission,” where most people nowadays are “awed” by TikTok, Instagram, or YouTube videos of people doing stuff and keep themselves from truly creating and DOING stuff in their own lives. They just sit and watch sports, listen to podcasts, and “consume” without using that information to create. It seems to me, at least, that most people nowadays spectate instead of create or participate.

Yet i started reflecting on the subject matter as this blog has been in draft form for over a year. Another year passed, another amazing birthday (afaic the most important holiday), and here we are, a New Year into 2025.

So given all that context and background:

What do i want to be known for when Ye Ole #EndOTimes is forthcoming? (Note: for those word freaks out there, it is called Eschatology, from the Greek (who else?) ἔσχατος (éskhatos).)

This is the CENTRAL SCRUTINIZER
Joe has just worked himself into an imaginary frenzy during the fade-out of his imaginary song,
He begins to feel depressed now. He knows the end is near. He has realized
at last that imaginary guitar notes and imaginary vocals exist only in the mind
of the imaginer.
And ultimately, who gives a f**k anyway? HAHAHAHA!…Excuse me…so who gives a f**k anyway? So he goes back to his ugly little room and quietly dreams his last imaginary guitar solo…

~ Frank Zappa, from “Watermelon In Easter Hay”

i believe, at this point, that the following are the attributes i want to be known for at the end of this thing called life, as best as i possibly can:

  • Honor and Integrity
  • Brutal Honesty
  • Living Life Loud
  • Improving Oneself Daily (mentally, physically, emotionally)
  • Loving (and Hating)
  • Quality Over Quantity
  • Maintaining a sheer sense of wonder and awe for Life

If you note, most of these items are things i can control or affect. You say, well, what about being a good friend, spouse, parent? Well, you can try to be the best at those to the best of your ability, but ultimately, someone else is judging YOU. We are always judged; most people judge, consciously or subconsciously, ergo, Judge as Ye Be Judged.

As well, and i hope duly noted, some of those items are controversial. Oh Dear Reader, this won’t be the first time i have been associated with the controversial.

You have enemies? Why, it is the story of every man who has done a great deed or created a new idea. It is the cloud which thunders around everything that shines. Fame must have enemies, as light must have gnats. Do not bother yourself about it; disdain. Keep your mind serene as you keep your life clear.

~ Victor Hugo

To the best of my ability, I will attempt to provide definitions and context for the above attributes. One additional context is that these are couched in “individualistic” references, not societal norms, overlays or programming.

  1. Honor and Integrity

Honor and integrity are ethical concepts that are often intertwined but have distinct meanings:

Honor

Honor refers to high respect and esteem, often tied to one’s actions, character, and adherence to a code of conduct. It is about upholding a personal set of values considered virtuous and deserving of respect and maintaining one’s reputation and dignity through ethical behavior and moral decision-making.

Integrity

Integrity is the quality of being honest and having strong moral principles. It involves consistently adhering to ethical standards and being truthful, fair, and just in all situations. Key aspects of integrity include being truthful and transparent in one’s actions and communications and acting according to one’s values and principles even when it is challenging, inconvenient, or, in many cases, seemingly impossible.

Essentially, it is standing up for “what is right” (as one views it in and unto oneself), even in the face of, and to the point of, adversity or personal loss.

What is good? – All that heightens the feelings of power, the will to power, power itself in man. What is bad? – All that proceeds from weakness. What is happiness? – The feeling that power increases – that a resistance is overcome.

~ Friedrich Nietzsche

Honor and integrity form the foundation of a trustworthy and respected character. Honor emphasizes the external recognition of one’s ethical behavior, while integrity focuses on the internal adherence to moral principles. Your moral compass is extremely individualistic. In full transparency, given that i believe there is no original sin, some have questioned how in the world i can have such moral character. Literally, someone said to me: “Given how you view things, how do you have such high morals compared to everyone else?” (NOTE: This question came from a very religious, devout, wonderful person i love.)

It is better to be hated for what you are than to be loved for what you are not.

~ Andre Gide

Brutal Honesty

Brutal honesty refers to being extremely direct and unfiltered in communication, often to the point of being blunt or harsh. This form of honesty prioritizes telling the truth without considering the potential impact on the feelings or reactions of others. It sorta kinda exactly goes hand in hand with Integrity, which in turn connects to Honor.

Key aspects of brutal honesty include:

Directness: Providing straightforward and unvarnished truth without sugarcoating or softening the message.

Bluntness: Being frank (or Ted) and candid, even if the truth may be uncomfortable or hurtful.

There isn’t a coffee table book entitled “Mediocre Humans In History”

~ C.T.T.

So why try to toe the Brutal Honesty Line?

Clarity: It can eliminate misunderstandings and provide a clear and unambiguous message. Also, it lets people know where you stand.

Trust: Some people appreciate brutal honesty because it demonstrates a commitment to truthfulness and transparency. I’ve had folks come back to me later and thank me, which is really rad of them.

Efficiency: It can get to the heart of an issue without dancing around the subject. Once again, note the time savings component. It saves a ton of time. HUUUUUUOOOOOGGGEEE time saver.

Potential Drawbacks

If you are delivering negative information to someone, this approach can have drawbacks. (If you are delivering positive news, do it with gusto!) The following situations can occur.

Hurt Feelings: It can cause emotional harm or strain relationships due to the harsh delivery. Deliver honest negative information with proper propriety and courtesy. They will hopefully get over it if they have any self-reflection.

Perception of Rudeness: It may be perceived as insensitive, disrespectful, lack of empathy, or unnecessarily harsh. However, if you are running a company or in a particularly toxic relationship, great results take drastic measures.

Conflict: It can lead to conflicts or defensive reactions from those who receive the message. Some say life is all conflict. Once again, don’t go looking for trouble, but you cannot shy away from interactions.

The harder the conflict, the more glorious the triumph.

 ~ Thomas Paine 

Caveat Emptor: As implicit in the above commentary, Brutal Honesty should be balanced with surgical and thoughtful empathy and, shall we say, nuance to ensure that the truth is communicated effectively and respectfully. For instance, it is okay to lie and say someone’s baby is cute. In the same fashion, eating everything on your plate when a neighbor has asked you over for supper is also good manners, even though you probably do not like well-done pot roast and peas. Say thank you, and that it was delicious. In Everything, practice propriety and courtesy.

When you have lived your individual life in YOUR OWN adventurous way and then look back upon its course, you will find that you have lived a model human life, after all.

Professor Joseph Campbell

2. “Living Life Loud” is a phrase that conveys embracing life with enthusiasm, boldness, and authenticity. It suggests living in a way that is vibrant, expressive, and true to oneself: embracing your passions and unique perspectives. It can also mean living intentionally and unapologetically, pursuing your dreams with enthusiasm, and stepping outside of your comfort zone.

Here are some aspects of what it means to Live Life Loud:

Authenticity: Being true to yourself and not being afraid to show your true colors, even if they differ from societal norms or expectations.

Boldness: Taking risks, stepping out of your comfort zone, and confidently pursuing your passions and dreams.

Enthusiasm: Approaching life with energy and excitement, making the most out of every moment.

Courage: Facing challenges head-on and standing up for what you believe in, even when it’s difficult.

I wonder, I wonder what you would do if you had the power to dream any dream you wanted to dream?

~ Alan Watts

This seems rather nebulous in some cases, so let us get a little more specific with some examples.

Pursuing Dreams: Actively chasing your goals and aspirations, regardless of how daunting they may seem. Most dreams are impossible; otherwise, they wouldn’t be dreams.

Taking Risks: Being willing to try new things, even if there’s a chance of failure. It goes hand in hand with Pursuing Your Dreams. Someone once said, “I need to surf big waves with two oxygen tanks.” i said, well, you can’t surf them then. In the same vein, when discussing my view on creating companies, someone told me: “I can’t take that risk.” i asked, well, you drive a car? Trust me, that is a much larger risk every day.

In the next five seconds what are you going to do to make your life spectacular?

~ Tim O’Reilly

Being Outspoken: Sharing your opinions and ideas confidently, without fear of judgment. Not bragging. Being forthright in your views and taking responsibility for those views. Owning them and being prepared to defend them.

Celebrating Uniqueness: Embracing what makes you different and showcasing it proudly (not loudly). However, not to the point of narcissism. Of course, I hear Tyler Durden saying, “You are not a unique snowflake,” whilst also saying, “You are not your f-ing khakis!”

So why live life loud? Well, I’m glad you asked. Here are just some reasons that I wrote down: Being open and expressive can help build deeper, more meaningful relationships. Brutal Honesty with Oneself and the Universe.

This, by definition, chooses a life of surprise, living outside the realm of societal norms in most cases.

Potential Challenges

Judgment: Judge So Ye Be Judged! Others may not always understand or accept your loud approach to life, which can lead to criticism or judgment. THEY are going to judge anyway. In fact, THEY have judged you even before you started living life loud. Why? Because most who judge follow The Herd mentality of Social Norms.

Risk of Failure: Taking bold steps can sometimes lead to setbacks or failures, which require resilience to overcome. However, my “hot-take” (isn’t that the lingo?) is that once you have stepped out on the edge and attempted to create, or do, or launch yourself into the air over ice or over the ledge of a heaving wave – YOU WON! Analysis paralysis is death. Hesitation kills, folks. Remember, if you fail, you have nowhere to go but up, and if it is a big enough failure, you have a great story!

Vulnerability: Being authentic and expressive means being vulnerable, which will in most cases be uncomfortable. I’d rather crawl through glass attempting to obtain My Personal Legend than sit back and think about what i could have done or what might have been. In fact, most people are more frightened of living the extreme dream than of failing. They would rather fail, or even say they failed, and quit.

All we hear is radio ga ga

Radio goo goo

Radio ga ga

All we hear is radio ga ga

Radio blah, blah

~ Radio GA GA, Queen

Living Life Loud is about making the most of your existence, embracing who you are, and not being afraid to live boldly and authentically. Go to the extreme of that dream, as extreme as you can obtain, because, Dear Reader, there are no circumstances, and once you move toward Living Life Loud, there are, as i once believed, even no Consequences.

Caveat Emptor: There is no free lunch here at all. The path you choose for your bliss is expensive. The collateral damage is multi-modal. It has been said Humans love a winner, but they love a loser more because it makes them feel better about themselves. This also gets into our subconscious programming from society and our families. Not too long ago, when i was discussing a subject concerning “taking care of them,” My Mother responded: You go live your life and make no decisions based on others. Others should be so lucky, but they aren’t. The hardest path is YOUR true path. Choose it. Hold It. Protect IT.

“Respice post te. Hominem te esse memento. Memento mori.” (“Look behind you. Remember you are a man. Remember you will die.”)

~ The 2nd-century Christian writer Tertullian reports this was said to a general during a triumphal procession

3. Improving Oneself Daily

Improving oneself mentally, physically, and “spiritually” daily involves a commitment to continuous personal development in both the mind and body. This holistic approach to self-improvement includes activities and habits that promote mental clarity, emotional well-being, and physical health. Here’s a breakdown of what it means:

Mentally

Learning: Engaging in activities that stimulate your mind, such as reading, studying, or learning new skills.

Mindfulness: Practicing mindfulness or meditation to enhance self-awareness, reduce stress, and improve mental clarity.

Positive Thinking: Cultivating a positive mindset by focusing on gratitude, affirmations, and reframing negative thoughts. Stay away from pessimistic people and naysayers.

Problem-Solving: Challenging yourself with puzzles, games, or new experiences that require critical thinking and creativity. Study the subject of neuroplasticity. Brush your teeth with the opposite hand for a week. Drive a new path without Apple/Google/Waze Maps. Or do what i like to do: Freedive. Click and read.

Emotional Health: Managing emotions effectively through journaling, therapy, or talking to trusted friends or family members. Take martial arts for defense and emotional health. Punch a bag. Lift heavy weights. Love animals.

Reading: Read, Read, and Read More. Not trash novels, but deep nonfiction and fiction. Write; take notes when you read.

Physically

Exercise: Engaging in regular physical activity, whether it’s strength training, cardio, yoga, or any other form of exercise that keeps your body active and strong. Get up and MOVE!

Nutrition: Eating a balanced and nutritious diet that fuels your body and supports overall health. i happen to trend towards carnivore. It’s difficult, but it changed my life. Again: eat meat, lift heavy things.

Sleep: Ensuring you get adequate and quality sleep to allow your body and mind to recover and function optimally. i can sleep standing up in an airport. Learn how to take power naps.

Daily Habits

Consistency: Make these activities a part of your daily routine to ensure continuous improvement. Discipline above all. Not grit or determination, but Discipline. Have a morning routine, or any routine that allows you the mental freedom to go to other places mentally and physically. It takes cognitive load off you and reduces friction. Eat the same things, dress the same way.

Goal Setting: Setting small, achievable goals that contribute to your long-term personal development. Make your bed every day. Set goals in the AM, then reflect in the PM. How could you do better tomorrow? Take time each day to reflect on your progress, identify areas for improvement, and celebrate your achievements.

Adaptability: Being open to change and willing to adjust your habits and routines as you learn what works best for you. Try things you wouldn’t normally do – listen to smooth jazz, try Hot Yoga. Do stuff; then you can optimize to your liking. You might try it and like it.

Improving oneself mentally and physically daily is a lifelong commitment to becoming the best version of yourself. It involves dedication, consistency, and a willingness to learn and adapt continually. It is all based on discipline. Full stop. Not motivation, not grit, not anything but getting up and MOVING. Go do the thing that scares you the most or the thing that you deplore the most – D I S C I P L I N E. i lift every day and read something every day.

Without contraries is no progression. Attraction and repulsion, reason and energy, love and hate are necessary to human existence.

~ William Blake

4. Loving (and Hating)

The idea of experiencing both love and hate at their fullest potential emphasizes the importance of embracing the full spectrum of human emotions to lead a richer, more authentic life.

Emotional Authenticity

Full Range of Experience: Experiencing the full range of emotions allows for a deeper understanding of oneself and others. It means accepting and acknowledging all feelings rather than suppressing them. i call this the dynamic range of life. Western society suppresses everything except sadness. It is OK to be sad. Be enraged. Be Full Of Lust and Desire. Know where your limits are, if there are any, and learn to regulate them as needed.

Self-Awareness: Fully engaging with both love and hate can lead to greater self-awareness and insight into what matters to you and why. If i have been guilty of something, it is not being aware enough. If there is original sin, afaic it is stupidity and non-awareness. Funny how they go hand in hand and relate to loving and hating.

Learning Opportunities: Intense emotions, whether positive or negative, can be powerful teachers. They provide opportunities to learn about your triggers, strengths, weaknesses, and values. Putting yourself out there beyond the pale teaches you quickly and well. Strong emotions can inspire creativity, leading to profound art, writing, music, and other forms of expression.

Resilience: Navigating through both love and hate can build emotional resilience, helping you manage future challenges more effectively. Experiencing hate or intense dislike can make you appreciate love and positive emotions more deeply, providing a balanced perspective on life. Salt and Pepper anyone?

Remember when you were young, you shone like the sun. Shine On, you Crazy Diamond!

~ Pink Floyd “Shine On You Crazy Diamond”

Loving and Hating will lead to Authentic Relationships.

Deeper Connections: Loving deeply fosters strong, meaningful relationships. Being open about negative emotions can also lead to more honest and authentic interactions. Confronting and understanding negative emotions can lead to healthier conflict resolution and stronger relationships in the long term.

Caveats and Considerations when Loving and Hating

Caveat Emptor: It’s important to express both love and hate in healthy, constructive ways. While deep emotions are natural, how you act on them matters significantly. Ensure that the expression of intense emotions does not harm yourself or others. Finding healthy outlets for negative emotions is crucial. While experiencing emotions entirely is valuable, maintaining a balance is important. Overwhelming negativity or unchecked hatred can be destructive, so it’s essential to seek ways to manage and balance these emotions. Also, sometimes we must practice complete indifference. Embracing both love and hate fully can lead to a richer, more nuanced understanding of life, fostering personal growth, deeper relationships, and a more authentic existence.

And the Germans killed the Jews
And the Jews killed the Arabs
And Arabs killed the hostages
And that is the news
And is it any wonder
That the monkey’s confused

~ Perfect Sense Part 1, Roger Waters

5. Quality Over Quantity

The phrase “quality over quantity” as a human value emphasizes prioritizing the excellence, depth, or meaningfulness of something over merely having more of it. It’s a mindset that values richness, purpose, and intentionality over excess or superficial accumulation. i have a saying: “Best, Fewest.” Get the best humans who know how to do something together, and they can create anything.

Relationships: Valuing meaningful, deep connections with a few people rather than having a large network of acquaintances. i have a very small network, which i can count on one hand, that i completely trust. Once you get over 30, you find out who really cares about you. See the quote at the end of the blog. Really, those who matter just want you truly happy.

Work: Focusing on producing exceptional work or projects instead of completing many tasks without significant impact or value. That 9 AM standup: is it really needed? Can’t we automate this Excel spreadsheet? Think much? Work yourself out of a job and into your passion.

Material Possessions: Preferring fewer high-quality, durable items rather than many cheap, disposable ones. Buy a high-quality custom suit or dress – three of them. Prada, Sene, etc. Black, navy, or dark blue, with custom shirts. i happen to prefer french cuffs with cufflinks. They never go out of style and will last forever.

There are many who would take my time, I shun them. There are some who share my time, I am entertained by them. There are precious few who contribute to my time, I cherish them.

~ A.S.L.

Time Management: Spending your time on activities that matter and bring fulfillment rather than filling your schedule with things that feel busy but are unimportant or things that people put on you. The above quote is my favorite quote in my life, and if i do have a tombstone, i want it on it. EMBLAZONED!

Essentially, it’s a principle that asks, “What truly matters?” and reminds us to focus on what brings genuine value and satisfaction rather than chasing quantity for the sake of just having more of something.

6. Maintaining a sheer sense of wonder and awe for life

Maintaining a sheer sense of wonder and awe for life means approaching the world with curiosity, gratitude, and an openness to its beauty and mysteries. BE AMAZED AT THE THRALL OF IT ALL! It’s about deeply appreciating the small and large marvels around you, whether it’s the intricacies of nature, the complexities of human connections, or the endless potential for discovery and growth. YOU ARE READING <THIS>. Check out my blog Look Up and Down and All Around – it has some cool pictures as well.

It involves letting go of jadedness or routine and instead choosing to see the extraordinary in the ordinary. This mindset keeps you engaged, inspired, and connected to the richness of life, no matter the circumstances. It’s like seeing the world through the eyes of a child, where everything holds the potential for fascination and joy. Turn up the back channel like when you were a child. Be Aware! Be Amazed! Wonder what it is like to be a tree or a rock!

i can say unequivocally that, while i have many more mistakes than “performing tasks in a correct fashion,” i have lived a loud and truly individuated life. Would i do things differently? Sure, some. i probably would have “sent” it even harder, past eleven on pretty much everything. i can truly say that i left everything out in the ocean, nothing in the bag, and gave it my all. Remember: take care of those you call your own and keep good company; storms never last, and the forecast calls for Blue Skies!

Enough for now.

For those that truly know me, you know, and I cherish you. 🤘🏻💜.

Until Then,

@tctjr

#iwishyouwater <- if i could do it again, i would live this life. He got the memo.

Music To Blog By: All of the versions of “Watermelon in Easter Hay” (full name “Playing a Guitar Solo With This Band is Like Trying To Grow a Watermelon in Easter Hay”) by Frank Zappa, covers and all, that i could find, just looped. In their review of the album, Down Beat magazine criticized the song (i despise critics), but subsequent reviewers championed it as Zappa’s masterpiece. Kelly Fisher Lowe called it the “crowning achievement of the album” and “one of the most gorgeous pieces of music ever produced.” I must agree. Supposedly, Zappa told Neil Slaven that he thought it was “the best song on the album.” “Watermelon in Easter Hay” is in 9/4 time. The song’s hypnotic arpeggiated pattern is played throughout its nine minutes. The 9/4 time signature carries the song’s two-chord harmonic structure, which, until you really listen, you don’t realize is a two-chord structure. For me, it is one of the most sonically amazing pieces of music ever written and produced. Sonically, the reverb is amazing. Sonically, the marimbas are astounding. Sonically, the orchestral percussion is mesmerizing. The song after Watermelon on Joe’s Garage, “Little Green Rosetta,” is completely hilarious, and I am putting that on the going-away-party playlist; I hope people dance in a conga or kick line and sing it. The grass bone to the ankle bone (listen to the song…).

Think about it: a very mediocre guy imagining how he could play, if he could play anything that he wanted to play. Get the reference to the entire blog? À la Alan Watts: if you could dream any dream you wanted to dream, then what?

The song is, in effect, a dream of freedom.

Here are some other details about “Watermelon in Easter Hay”:

  • The song’s two alternating harmonies are A and B / E, linked by a G#. 
  • The song is introduced by Zappa as the Central Scrutinizer, which then gives way to a guitar solo. 
  • The song’s snare accents have a lot of reverb and delay, creating a swooosh sound that sometimes sounds like wind. 
  • The song’s guitar solo is the only guitar solo specifically recorded for the album. All the others come from a technique known as xenochrony.
  • Rumor has it Dweezil Zappa is the only person allowed to play it.
  • Someone called the song intoxicating in one of my other blogs on the Zappa Documentary. Kind of like a really good baklava.

And a couple more items for your thoughts:

It’s so hard to forget pain, but it’s even harder to remember happiness. We have no scar to show for happiness. We learn so little from peace.

~ Chuck Palahniuk (author of Fight Club, Choke, etc.)

Those who mind don’t matter and those who matter don’t mind.

~ Dr. Seuss

i listen to this every morning. Rest In Power Maestro with the amazing Susanna Rigacci:

SnakeByte[18] Function Optimization with OpenMDAO

DALLE’s Rendering of Non-Convex Optimization

In Life We Are Always Optimizing.

~ Professor Bernard Widrow (inventor of the LMS algorithm)

Hello Folks! As always, i hope everyone is safe. i also hope everyone had a wonderful holiday break with food, family, and friends.

The first SnakeByte of the new year involves a subject near and dear to my heart: Optimization.

The quote above was from a class in adaptive signal processing that i took at Stanford from Professor Bernard Widrow, where he talked about how almost everything is a gradient type of optimization and “In Life We Are Always Optimizing.” Incredibly profound, if One ponders the underlying meaning thereof.

So why optimization?

Well, glad you asked, Dear Reader. There are essentially two large buckets of optimization: convex and non-convex optimization.

Convex optimization addresses problems that have a single optimal solution, which is also the global optimal solution. Convex optimization problems can be solved efficiently, even at huge scale. Examples of convex optimization include maximizing stock market portfolio returns, estimating machine learning model parameters, and minimizing power consumption in electronic circuits.

Non-convex optimization addresses problems that can have multiple locally optimal points, where it can be challenging to determine whether the problem has no solution or whether a solution is global. Non-convex optimization problems can be more difficult to deal with than convex problems and can take a long time to solve. Optimization algorithms like gradient descent with random initialization and annealing can help find reasonable solutions for non-convex optimization problems.

You can determine whether a function is convex by taking its second derivative. If the second derivative is greater than or equal to zero for all values of x in an interval, then the function is convex on that interval. Ah, calculus 101 to the rescue.
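
Just to let the machine do the calculus 101 for us, here is a minimal sketch (assuming you have sympy installed; the test function is my own arbitrary example, nothing canonical):

import sympy as sp

x = sp.symbols('x', real=True)
f = x**4 + 2*x**2 + 1            # an arbitrary test function
f_pp = sp.diff(f, x, 2)          # second derivative: 12*x**2 + 4
print(f_pp)

# f''(x) >= 0 for every real x, so f is convex on the whole real line;
# this should report the entire real line
print(sp.solve_univariate_inequality(f_pp >= 0, x, relational=False))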

Caveat Emptor, these are very broad mathematically defined brush strokes.

So why do you care?

Once again, Oh Dear Reader, glad you asked.

Non-convex optimization is fundamentally linked to how neural networks work, particularly in the training process, where the network learns from data by minimizing a loss function. Here’s how non-convex optimization connects to neural networks:

In convex optimization, the loss function has one global minimum. A “loss landscape” in a neural network, by contrast, refers to the representation of the loss across the entire parameter space, essentially depicting how the loss value changes as the network’s weights are adjusted, creating a multidimensional surface where low points represent areas with minimal loss and high points represent areas with high loss; it allows researchers to analyze the geometry of the loss function to understand the training process and potential challenges like local minima. Note that the weights can number in the millions, billions, or trillions. It’s the basis for the cognitive AI arms race, if you will.

The loss function in neural networks, which measures the difference between predicted and true outputs, is often a highly complex, non-convex function. This is due to:

The multi-layered structure of neural networks, where each layer introduces non-linear transformations, and the high dimensionality of the parameter space, as networks can have millions, billions, or trillions of parameters (weight and bias vectors).

As a result, the optimization process involves navigating a rugged loss landscape with multiple local minima, saddle points, and plateaus.

Optimization Algorithms in Non-Convex Settings

Training a neural network involves finding a set of parameters that minimize the loss function. This is typically done using optimization algorithms like gradient descent and its variants. While these algorithms are not guaranteed to find the global minimum in a non-convex landscape, they aim to reach a point where the loss is sufficiently low for practical purposes.

This leads to the latest SnakeByte[18]. The process of optimizing these parameters is often called hyperparameter optimization. Relatedly, designing things like aircraft wings, warehouses, and the like is called Multi-Objective Optimization, where you have multiple objectives to optimize simultaneously.

As always, there are test cases. In this case, you can test your optimization algorithm on Himmelblau’s function, introduced by David Himmelblau in 1972 as a mathematical benchmark used to test the performance and robustness of optimization algorithms. It is defined as:

    \[f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2\]

Using Wolfram Mathematica to visualize this function (as i didn’t know what it looked like…) relative to solving for f(x,y):

Wolfram Plot Of The Himmelblau Function

This function is particularly significant in optimization and machine learning due to its unique landscape, which includes four global minima located at distinct points. These minima create a challenging environment for optimization algorithms, especially when dealing with non-linear, non-convex search spaces. Get the connection to large-scale neural networks? (aka Deep Learnin…)

Himmelblau’s function is continuous and differentiable, making it suitable for gradient-based methods while still being complex enough to test heuristic approaches like genetic algorithms, particle swarm optimization, and simulated annealing. The function’s four minima demand that algorithms effectively explore and exploit the search space, ensuring that solutions are not prematurely trapped in local optima.

Researchers use it to evaluate how well an algorithm navigates a multi-modal surface, balancing exploration (global search) with exploitation (local refinement). Its widespread adoption has made it a standard in algorithm development and performance assessment.
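
Before bringing in any library, here is a minimal sketch of that multi-modal behavior in plain NumPy: vanilla gradient descent on Himmelblau’s function (the step size, iteration count, and starting points are arbitrary choices of mine for illustration). Depending on where it starts, the very same algorithm settles into a different one of the four minima:

import numpy as np

def himmelblau(x, y):
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

def himmelblau_grad(x, y):
    # Analytic gradient of the function above
    dfdx = 4 * x * (x**2 + y - 11) + 2 * (x + y**2 - 7)
    dfdy = 2 * (x**2 + y - 11) + 4 * y * (x + y**2 - 7)
    return np.array([dfdx, dfdy])

# One starting point roughly in each quadrant
for start in [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]:
    p = np.array(start)
    for _ in range(5000):
        p -= 0.005 * himmelblau_grad(*p)   # small fixed step size
    print(f"start {start} -> minimum near ({p[0]:+.4f}, {p[1]:+.4f}), "
          f"f = {himmelblau(*p):.2e}")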

Several types of libraries exist to perform Multi-Objective or Parameter Optimization. This blog concerns one that is extremely flexible, called OpenMDAO.

What Does OpenMDAO Accomplish, and Why Is It Important?

OpenMDAO (Open-source Multidisciplinary Design Analysis and Optimization) is an open-source framework developed by NASA to facilitate multidisciplinary design, analysis, and optimization (MDAO). It provides tools for integrating various disciplines into a cohesive computational framework, enabling the design and optimization of complex engineering systems.

Key Features of OpenMDAO Integration:

OpenMDAO allows engineers and researchers to couple different models into a unified computational graph, such as aerodynamics, structures, propulsion, thermal systems, and hyperparameter machine learning. This integration is crucial for studying interactions and trade-offs between disciplines.

Automatic Differentiation:

A standout feature of OpenMDAO is its support for automatic differentiation, which provides accurate gradients for optimization. These gradients are essential for efficient gradient-based optimization techniques, particularly in high-dimensional design spaces. Ah that calculus 101 stuff again.

It supports various optimization methods, including gradient-based and heuristic approaches, allowing it to handle linear and non-linear problems effectively.

By making advanced optimization techniques accessible, OpenMDAO facilitates cutting-edge research in system design and pushes the boundaries of what is achievable in engineering.

Lo and Behold! OpenMDAO itself is a Python library! It is written in Python and designed for use within the Python programming environment. This allows users to leverage Python’s extensive ecosystem of libraries while building and solving multidisciplinary optimization problems.

So i had the idea to test OpenMDAO on the Himmelblau function. You might as well test an industry-standard library on an industry-standard function!

First things first, pip install or anaconda:

>> pip install 'openmdao[all]'

Next, since we are going to be plotting stuff within JupyterLab, i always forget to enable it with the majik command:

## main code
%matplotlib inline 

OK, let's get to the good stuff: the code.

# add your imports here:
import numpy as np
import matplotlib.pyplot as plt
from openmdao.api import Problem, IndepVarComp, ExecComp, ScipyOptimizeDriver
# NOTE: the scipy import 

# Define the OpenMDAO optimization problem - almost like self.self
prob = Problem()

# Add independent variables x and y and make a guess of X and Y:
indeps = prob.model.add_subsystem('indeps', IndepVarComp(), promotes_outputs=['*'])
indeps.add_output('x', val=0.0)  # Initial guess for x
indeps.add_output('y', val=0.0)  # Initial guess for y

# Add the Himmelblau objective function. See the equation from the Wolfram Plot?
prob.model.add_subsystem('obj_comp', ExecComp('f = (x**2 + y - 11)**2 + (x + y**2 - 7)**2'), promotes_inputs=['x', 'y'], promotes_outputs=['f'])

# Specify the optimization driver and epsilon error bounds. ScipyOptimizeDriver wraps the optimizers in *scipy.optimize.minimize*. In this example, we use the SLSQP optimizer to find the minimum of this "Paraboloid"-type optimization:
prob.driver = ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.driver.options['tol'] = 1e-6

# Set design variables and bounds
prob.model.add_design_var('x', lower=-10, upper=10)
prob.model.add_design_var('y', lower=-10, upper=10)

# Add the objective function Himmelblau via promotes_outputs=['f']:
prob.model.add_objective('f')

# Setup and run the problem and cross your fingers:
prob.setup()
prob.run_driver()

Dear Reader, You should see something like this:

Optimization terminated successfully (Exit mode 0)
Current function value: 9.495162792777827e-11
Iterations: 10
Function evaluations: 14
Gradient evaluations: 10
Optimization Complete
———————————–
Optimal x: [3.0000008]
Optimal y: [1.99999743]
Optimal f(x, y): [9.49516279e-11]

So this optimized a minimum of the function relative to the bounds on x and y and the tolerance \epsilon. Note that SLSQP converged to (3, 2), the minimum nearest our initial guess of (0, 0); since Himmelblau's function has four global minima, a different starting point can land on a different one.
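
As a quick experiment on top of the code above (a sketch using OpenMDAO's standard set_val API; the alternate starting point is an arbitrary choice of mine), move the initial guess into another basin and re-run the driver; SLSQP should then settle into a different one of the four minima:

# Re-run the same problem from a different initial guess
prob.set_val('x', -3.0)
prob.set_val('y', 3.0)
prob.run_driver()

print(f"Optimal x: {prob['x']}")
print(f"Optimal y: {prob['y']}")
print(f"Optimal f(x, y): {prob['f']}")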

Now, let's look at the cool eye candy in several ways:

# Retrieve the optimized values
x_opt = prob['x']
y_opt = prob['y']
f_opt = prob['f']

print(f"Optimal x: {x_opt}")
print(f"Optimal y: {y_opt}")
print(f"Optimal f(x, y): {f_opt}")

# Plot the function and optimal point
x = np.linspace(-6, 6, 400)
y = np.linspace(-6, 6, 400)
X, Y = np.meshgrid(x, y)
Z = (X**2 + Y - 11)**2 + (X + Y**2 - 7)**2

plt.figure(figsize=(8, 6))
contour = plt.contour(X, Y, Z, levels=50, cmap='viridis')
plt.clabel(contour, inline=True, fontsize=8)
plt.scatter(x_opt, y_opt, color='red', label='Optimal Point')
plt.title("Contour Plot of f(x, y) with Optimal Point")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.colorbar(contour)
plt.show()

Now, let's try something that looks a little more exciting:

import numpy as np
import matplotlib.pyplot as plt

# Define the function
def f(x, y):
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# Generate a grid of x and y values
x = np.linspace(-6, 6, 500)
y = np.linspace(-6, 6, 500)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)

# Plot the function
plt.figure(figsize=(8, 6))
plt.contourf(X, Y, Z, levels=100, cmap='magma')  # Gradient color
plt.colorbar(label='f(x, y)')
plt.title("Plot of f(x, y) = (x² + y - 11)² + (x + y² - 7)²")
plt.xlabel("x")
plt.ylabel("y")
plt.show()

That is cool looking.

OK, let's take this even further:

We can compare it to the Wolfram Function 3D plot:

from mpl_toolkits.mplot3d import Axes3D

# Create a 3D plot
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(111, projection='3d')

# Plot the surface
ax.plot_surface(X, Y, Z, cmap='magma', edgecolor='none', alpha=0.9)

# Labels and title
ax.set_title("3D Plot of f(x, y) = (x² + y - 11)² + (x + y² - 7)²")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("f(x, y)")

plt.show()

Which gives you a 3D plot of the function:

3D Plot of f(x, y) = (x² + y – 11)² + (x + y² – 7)²

While this was a toy example, OpenMDAO is a critical tool for advancing multidisciplinary optimization in engineering. Its robust capabilities, open-source nature, and focus on efficient computation of derivatives make it invaluable for researchers and practitioners seeking to tackle the complexities of modern system design.

i hope you find it useful.

Until Then,

#iwishyouwater <- The EDDIE – the most famous big wave contest – ran this year. i saw it on the beach in 2004 and got washed across the rivermouth on a 60ft clean-up set that washed out the river.

@tctjr

Music To Blog By: Godspeed You! Black Emperor, “No Title As of 13 February 2024” – great band if you enjoy atmospheric compositional music.

How One Of The G.O.A.T.(s) Changed My Life

A mentor is someone who sees more talent and ability within you, than you see in yourself, and helps bring it out of you.

Bob Proctor
The Religious Tomes Of Digital Audio by Professor Ken Pohlmann

First, i trust this finds everyone well. All kinds of craziness abound in the world; for those affected by recent events, my condolences. Second, I was compelled to write a blog after some commentary on LinkedIn concerning mentors and people who changed some of our lives.

You can find the discussion here. <- Click

Dear Reader, this is a very personal blog, so bear with me; i have told few, if any, this story. Oftentimes, the Universe speaks, and when it does, listen.

i had the extreme luxury and luck to attend graduate school at The University Of Miami Frost School Of Music, specializing in Music Engineering. Here is a little history copypasta’d from the website:

“The Graduate Music Engineering Technology degree (GMUE) was introduced in 1986 and has consistently placed graduates into high-tech engineering fields that emphasize audio technology, usually in audio software and hardware design engineering and product engineering or development. Our graduates have enjoyed employment at companies specifically aimed at high-tech audio such as Sonos, Amazon Lab126, Avid, Universal Audio, Soundtoys, iZotope, Waves LLC, Smule, Apple, Facebook Reality Labs, Microsoft, Eventide, Bose, Shure, Dolby Laboratories, Roland, Beats by Dr. Dre, Spotify, Harman International, JBL, Analog Devices, Biamp, QSC, Motorola, Texas Instruments, Cirrus Logic, Audio Precision, and many more.

In most cases, applicants to the M.S. in Music Engineering Technology typically hold a bachelor of science degree in electrical engineering, computer engineering, computer science, math, physics, or other hard sciences and are passionate about combining their love of music and engineering. A few hold dual degrees in music and other engineering/technology areas. The Music Engineering Technology program enjoys being part of a world-class, top-ranked School of Music, and students may become licensed to use the new $1.2 million state-of-the-art recording studio if they wish.”

I would rather be blind than deaf.

Handel from “Listening”

In 1987, Oh Dear Reader, i had a “really good job” with GE Medical Systems working in the Magnetic Resonance Imaging and CAT scan field service organization. Yet i longed to truly understand the science and perception of how we as humans process sound physically, neuro-scientifically, and mentally, and then how we design products to reproduce the creation of sound to its fullest extent. I loved mixing sound and thought it would be the end-all to work at a “mixing desk” manufacturer such as MCI in Fort Lauderdale, whose consoles were used at Criteria Studios by groups such as The Allman Brothers; to me, that was the pinnacle of audio engineering. i was also particularly fascinated with the perception of reverberation and accurate modeling of acoustics. In undergraduate school, circa 1985, i did an extracurricular paper on digital audio, where I analyzed analog-to-digital and digital-to-analog recording techniques. The paper discussed the Shannon sampling theorem and the science of sampling a sound to reconstruct it in full digital form. i also discussed how, in the future, most sound (or so i surmised) would eventually be played on a chip or transmitted with no medium. i also created a fiber optic transmission network to transmit and modify my voice. However, the “riff” of the paper compelled me.

Said pedantic paper figure 1.1

One day in 1989, i was sitting in Little Havana, Miami, FL (where i resided, not far from Crescent Moon Studios), listening to Al Di Meola’s Elegant Gypsy album and reading an article by a human named Professor Ken Pohlmann. The magazine was Mix Magazine, as i “used to be” a recording engineer, having graduated from Full Sail Of The Recording Arts and then gone on to obtain a BSEET at DeVry Institute of Technology. i still kept up on recording and live sound, and every once in a while i would mix for someone.

As they say, I am a recovering sound engineer now.

Mentoring is a brain to pick, an ear to listen, and a push in the right direction.

John Crosby

At the end of the article, it said something to the effect:

“Professor Ken Pohlmann is the founder of the prestigious program for the Graduate School Of Music Engineering at the University Of Miami, where he teaches Propeller Heads to create world class digital effects.” Apologies, folks, i’m going off memory here, but i specifically remember reading the article and thinking, “ok, i am going to drive down to Coral Gables, all two miles, and walk in and ask for Professor Pohlmann to accept me into the program.”

i walked in and asked for Professor Pohlmann. The nice woman at the desk said, “Let me see if he is here.” She came back and said yes, he is here, and he will see you now.

Awe hell game on.

He sat down with me and asked what he could do for me. i still remember i was “dressed” in a tie with braces (suspenders) and a full button-down shirt with tassel dress shoes (full corporate mode). Yes, tassel loafers.

i said “i want you to accept me into your program and when i get out i am going to work for (this) company and build reverberation algorithms.” i showed him the Mix Magazine where he was mentioned and in the back of Mix Magazine was an advertisement for a “startup” audio company called digidesign. i also showed him my paper on Digital Audio Recording and Editing circa 1985.

(NOTE: If you never ask for the biggest piece of cake, you never get it. The worst thing he could say was no.)

He was really cool in his response. He said, well, i appreciate the passion, but you need to go through all of the process, and he gave me all the paperwork: take the GRE, etc.

i was also acutely aware that i was a mutt compared to the other students, as he only accepted two per year out of several high-pedigree applicants. Most of the students were from real engineering schools.

i’ll never forget when i called to see if i was accepted. i called, and the woman said: “Theodore Tanner Jr., right? Oh yes, you can start fall of 1990.”

I RESIGNED from GE right after the phone call.

Fast forward to the year 1992. My friend Toby Dunn and i were sitting in MTC 667, the graduate thesis class for Professor Ken Pohlmann.

Toby and i had done all kinds of awesome projects over the two years at UMiami, but now we were sitting in the classroom, breeze coming in, watching the palm trees and chatting about who knows what, waiting for the GOAT.

Professor Pohlmann walks in with a stack of books and sits down and says:

“What do you guys want to talk about? This class is about thinking up brilliant ideas and taking them into execution and also publishing your thesis at a conference.”

“Which conference?” i asked.

He said: “The Audio Engineering Society Conference this coming Fall.”

We both laughed. i specifically remember thinking back to high school, when i didn’t even understand most of Stereo Review Magazine, and now it reads like Cat In The Hat, BUT the AES Conference is THE SUPER BOWL OF AUDIO ENGINEERING?!

He said: “What are you laughing at? If you don’t get the paper accepted and given at the conference, you can’t graduate as it’s most of the grade along with your thesis and discussion here in class.”

“We haven’t even gotten started on our thesis or even selected a subject,” i said.

He then said: “I asked what you want to talk about, and you didn’t say anything.”

He sat there in silence for a while, then picked up his books and said: “i don’t have time for this.”

He got up and left.

Toby and I just sat there (this was before the acronym WTF), but that was the look on our faces. WTF?

We sat there for a while and then i got the courage up to go into his office.

i felt like Charlie walking up to Willy Wonka.

“Professor Pohlmann?” i said tentatively, “i think we are ready to talk ideas.”

He came back in, sat on the desk, and said (and i will never ever forget this….):

“You two are the people that will change this industry, and as such you are expected to come up with the ideas that can be executed upon. That is what i expect from you now, as that is what will be expected of you in industry.”

Thus, Spake The GOAT. Amen.

We then had an amazing conversation of thesis topics.

Toby presented his paper on noise reduction, which was amazing. I presented my paper on Subband audio coding methods at the AES in New York in 1992, complete with an AES scholarship stipend. I also got to hang out with Jeff Beck and Les Paul at a Toys R Us BASF party, but that is another story.

We then went on to work for digidesign circa 1992. Toby is one of the most amazing signal-processing audio engineers in the industry. He was at Digidesign for 20 years and is now at Universal Audio. He wrote the original noise reduction plugin for Digidesign on Sound Designer and worked on the digital audio engine as well as several stock plugins (dynamics, chorus/flange, etc.).

Excerpt from 1985 Neophyte paper 1.2 and 1.3

Side Note: One cool thing is that i got to personally tell Al Dimeola and Steve Vai that i assisted in creating some of the original ProTools and Sound Designer plugins and APIs while listening to Elegant Gypsy and Passion and Warfare. One of those is the same album i mentioned at the beginning of this blog. Also, if you are not familiar, both are GOATs of guitar.

Oh, and one more thing: I worked at Criteria Studios for a while and got to mix on the MCI console in Studio C, the same console used to record several famous albums, which was a full-circle moment for me professionally.

Then, later on, in 1993, another mentor, Phil Ramone, called me (yes, that Phil; he called me his 8th child…) while I was working on the Protron Plugin at the amazing company called Crystal River Engineering, founded by Scott Foster. Scott Foster originated interpolated Head Related Transfer Function six-degrees-of-freedom spatial audio for Jaron Lanier’s VPL Research and Dr. Beth Wenzel at NASA Ames Research Lab, and essentially started fully localized spatial audio. Phil called me to come down to Crescent Moon Studios (Gloria Estefan and The Miami Sound Machine) and listen to the Duets album he was mixing. He wanted me to analyze the reverb tails going through the defunct ATT Disq system versus a Neve IV console. He used three EMT reverbs (left, center, right) fed back into each other. i knew of this technique previously and had used it in the original D-Verb.

To anyone reading this, find your passion and execute those brilliant ideas. Find the right mentor who will push you beyond anything you ever thought possible.

i am lucky enough to have had several mentors in my life. However, it all started with someone taking a chance on me.

Toby, if you are out there, i hope you and Sue and the family are well.

To the GOAT, Professor Ken Pohlmann: Thank you for that day. Without it, i would not be where i am, and i cannot thank you enough for taking a chance on me when i knew damn good and well i didn’t have the resume or pedigree to ever compete at the scholastic level. However, I do hope I have made up for the deficiencies since that time.

Be safe.

Until Then,

#iwishyouwater (thunders in the Mentawais with a yacht)

@tctjr

Muzak To Blog By: Bach: Goldberg Variations, BWV 988 (The 1955 & 1981 Recordings). Dear Reader, tread lightly within these aural halls; there are several caves you can go into here with his interpretations. Enjoy. For those that know, you know.

SnakeByte[17] The Metropolis Algorithm

Frame Grab From the movie Metropolis 1927

Who told you to attack the machines, you fools? Without them you’ll all die!!

~ Grot, the Guardian of the Heart Machine

First, as always, Oh Dear Reader, i hope you are safe. There are many unsafe places in and around the world in this current time. Second, this blog is a SnakeByte[] based on something that i knew about but had no idea it was called by this name.

Third, relative to this, i must confess, Oh, Dear Reader, i have a disease of the bibliomaniac kind. i have an obsession with books and reading. “They” say that belief comes first, followed by admission. There is a Japanese word that translates to having so many books you cannot possibly read them all. This word is tsundoku. From the website (if you click on the word):

“Tsundoku dates from the Meiji era, and derives from a combination of tsunde-oku (to let things pile up) and dokusho (to read books). It can also refer to the stacks themselves. Crucially, it doesn’t carry a pejorative connotation, being more akin to bookworm than an irredeemable slob.”

Thus, while perusing a math-related book site, i came across a monograph entitled “The Metropolis Algorithm: Theory and Examples” by C Douglas Howard [1].

i was intrigued, and because it was 5 bucks (Side note: i always try to buy used and loved books), i decided to throw it into the virtual shopping buggy.

Upon receiving said monograph, i sat down to read it, and i was amazed to find it was closely related to something I was very familiar with from decades ago. This finally brings us to the current SnakeByte[].

The Metropolis Algorithm is a method in computational statistics used to sample from complex probability distributions. It is a type of Markov Chain Monte Carlo (MCMC) algorithm (i had no idea), which relies on Markov Chains to generate a sequence of samples that can approximate a desired distribution, even when direct sampling is complex. Yes, let me say that again – i had no idea. Go ahead LazyWebTM laugh!

So let us start with the Metropolis Algorithm and how it relates to Markov Chains. (Caveat Emptor: You will need to dig out those statistics books and a little linear algebra.)

Markov Chains Basics

A Markov Chain is a mathematical system that transitions from one state to another in a state space. It has the property that the next state depends only on the current state, not the sequence of states preceding it. This is called the Markov property. The algorithm was introduced by Metropolis et al. (1953) in a Statistical Physics context and was generalized by Hastings (1970). It was considered in the context of image analysis (Geman and Geman, 1984) and data augmentation (Tanner (I’m not related that i know of…) and Wong, 1987). However, its routine use in statistics (especially for Bayesian inference) did not take place until Gelfand and Smith (1990) popularised it. For modern discussions of MCMC, see e.g. Tierney (1994), Smith and Roberts (1993), Gilks et al. (1996), and Roberts and Rosenthal (1998b).

Ergo, the name Metropolis-Hastings algorithm. Once again, i had no idea.

Anyhow,

A Markov Chain can be described by a set of states S and a transition matrix P , where each element P_{ij} represents the probability of transitioning from state i to state j .
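As a quick toy illustration (a sketch of my own, not from the monograph), simulating a two-state chain makes the Markov property concrete: the next state is drawn using only the current state’s row of P .

import numpy as np

rng = np.random.default_rng(7)

# A toy two-state Markov Chain: P[i, j] is the probability of
# transitioning from state i to state j.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

state = 0
visits = np.zeros(2)
for _ in range(100_000):
    # The Markov property: the next state depends only on the current state.
    state = rng.choice(2, p=P[state])
    visits[state] += 1

print(visits / visits.sum())  # long-run fractions, approximately [0.833, 0.167]

Note how the long-run fractions settle down regardless of the starting state; that long-run behavior is exactly where we are headed next.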

The Goal: Sampling from a Probability Distribution \pi(x)

In many applications (e.g., statistical mechanics, Bayesian inference, as mentioned), we are interested in sampling from a complex probability distribution \pi(x). This distribution might be difficult to sample from directly, but we can use a Markov Chain to create a sequence of samples that, after a certain period (called the burn-in period), will approximate \pi(x) .

Ok Now: The Metropolis Algorithm

The Metropolis Algorithm is one of the simplest MCMC algorithms to generate samples from \pi(x). It works by constructing a Markov Chain whose stationary distribution is the desired probability distribution \pi(x) . A stationary distribution is a probability distribution that remains the same over time in a Markov chain. Thus it can describe the long-term behavior of a chain, where the probabilities of being in each state do not change as time passes. (Whatever time is, i digress.)
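For completeness, the standard way to guarantee that \pi(x) is the stationary distribution is to enforce the detailed balance condition (this is the usual textbook formulation; the monograph may phrase it differently):

    \[\pi(x_i) \, P_{ij} = \pi(x_j) \, P_{ji}\]

Summing both sides over i shows that if the chain is distributed according to \pi(x) at one step, it remains so at the next; the acceptance probability below is constructed precisely so that this condition holds.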

The key steps of the algorithm are:

Initialization

Start with an initial guess x_0 , a point in the state space. This point can be chosen randomly or based on prior knowledge.

Proposal Step

From the current state x_t , propose a new state x^* using a proposal distribution q(x^*|x_t) , which suggests a candidate for the next state. This proposal distribution can be symmetric (e.g., a normal distribution centered at x_t ) or asymmetric.

Acceptance Probability

Calculate the acceptance probability \alpha for moving from the current state x_t to the proposed state x^* :

    \[\alpha = \min \left(1, \frac{\pi(x^*) \, q(x_t | x^*)}{\pi(x_t) \, q(x^* | x_t)} \right)\]

In the case where the proposal distribution is symmetric (i.e., q(x^*|x_t) = q(x_t|x^*) ), the formula simplifies to:

    \[\alpha = \min \left(1, \frac{\pi(x^*)}{\pi(x_t)} \right)\]

Acceptance or Rejection

  • Generate a random number u from a uniform distribution U(0, 1) .
  • If u \leq \alpha , accept the proposed state x^* , i.e., set x_{t+1} = x^* .
  • If u > \alpha , reject the proposed state and remain at the current state, i.e., set x_{t+1} = x_t .

Repeat

Repeat the proposal, acceptance, and rejection steps to generate a Markov Chain of samples.

Convergence and Stationary Distribution:

Over time, as more samples are generated, the Markov Chain converges to a stationary distribution. The stationary distribution is the target distribution \pi(x) , meaning the samples generated by the algorithm will approximate \pi(x) more closely as the number of iterations increases.

Applications:

The Metropolis Algorithm is widely used in various fields such as Bayesian statistics, physics (e.g., in the simulation of physical systems), machine learning, and finance. It is especially useful for high-dimensional problems where direct sampling is computationally expensive or impossible.

Key Features of the Metropolis Algorithm:

  • Simplicity: It’s easy to implement and doesn’t require knowledge of the normalization constant of \pi(x) , which can be difficult to compute.
  • Flexibility: It works with a wide range of proposal distributions, allowing the algorithm to be adapted to different problem contexts.
  • Efficiency: While it can be computationally demanding, the algorithm can provide high-quality approximations to complex distributions with well-chosen proposals and sufficient iterations.

The Metropolis-Hastings Algorithm is a more general version that allows for non-symmetric proposal distributions, expanding the range of problems the algorithm can handle.

Now let us code it up:

i am going to assume the underlying distribution is Gaussian with a time-dependent mean \mu_t, which changes slowly over time. We’ll use a simple time-series analytics setup to sample this distribution using the Metropolis Algorithm and plot the results. Note: When the target distribution is Gaussian (or close to Gaussian), the algorithm can converge more quickly to the true distribution because of the symmetric smooth nature of the normal distribution.

import numpy as np
import matplotlib.pyplot as plt

# Time-dependent mean function (example: sinusoidal pattern)
def mu_t(t):
    return 10 * np.sin(0.1 * t)

# Target distribution: Gaussian with time-varying mean mu_t and fixed variance
def target_distribution(x, t):
    mu = mu_t(t)
    sigma = 1.0  # Assume fixed variance for simplicity
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Metropolis Algorithm for time-series sampling
def metropolis_sampling(num_samples, initial_x, proposal_std, time_steps):
    samples = np.zeros(num_samples)
    samples[0] = initial_x

    # Iterate over the time steps
    for t in range(1, num_samples):
        # Propose a new state based on the current state
        x_current = samples[t - 1]
        x_proposed = np.random.normal(x_current, proposal_std)

        # Acceptance probability (Metropolis-Hastings step)
        acceptance_ratio = target_distribution(x_proposed, time_steps[t]) / target_distribution(x_current, time_steps[t])
        acceptance_probability = min(1, acceptance_ratio)

        # Accept or reject the proposed sample
        if np.random.rand() < acceptance_probability:
            samples[t] = x_proposed
        else:
            samples[t] = x_current

    return samples

# Parameters
num_samples = 10000  # Total number of samples to generate
initial_x = 0.0      # Initial state
proposal_std = 0.5   # Standard deviation for proposal distribution
time_steps = np.linspace(0, 1000, num_samples)  # Time steps for temporal evolution

# Run the Metropolis Algorithm
samples = metropolis_sampling(num_samples, initial_x, proposal_std, time_steps)

# Plot the time series of samples and the underlying mean function
plt.figure(figsize=(12, 6))

# Plot the samples over time
plt.plot(time_steps, samples, label='Metropolis Samples', alpha=0.7)

# Plot the underlying time-varying mean (true function)
plt.plot(time_steps, mu_t(time_steps), label=r'True Mean $\mu_t$', color='red', linewidth=2)

plt.title("Metropolis Algorithm Sampling with Time-Varying Gaussian Distribution")
plt.xlabel("Time")
plt.ylabel("Sample Value")
plt.legend()
plt.grid(True)
plt.show()

Output of Python Script Figure 1.0

Ok, What’s going on here?

For the Target Distribution:

The function mu_t(t) defines a time-varying mean for the distribution. In this example, it follows a sinusoidal pattern.
The function target_distribution(x, t) models a Gaussian distribution with mean \mu_t and a fixed variance (set to 1.0).


Metropolis Algorithm:

The metropolis_sampling function implements the Metropolis algorithm. It iterates over time, generating samples from the time-varying distribution. The acceptance probability is calculated using the target distribution at each time step.


Proposal Distribution:

A normal distribution centered around the current state with standard deviation proposal_std is used to propose new states.


Temporal Evolution:

The time steps are generated using np.linspace to simulate temporal evolution, which can be used in time-series analytics.


Plot The Results:

The results are plotted, showing the samples generated by the Metropolis algorithm as well as the true underlying mean function \mu_t (in red).

The plot shows the Metropolis samples over time, which should cluster around the time-varying mean \mu_t of the distribution. As time progresses, the samples follow the red curve (the true mean), like an arrow in this case.

Now you are probably asking, “Hey, is there a more pythonic library way to do this?”. Oh Dear Reader, i am glad you asked! Yes There Is A Python Library! AFAIC, PyMC started it all. Most probably know it as PyMC3 (formerly known as…). There is a great writeup here: History of PyMC.

We are in a golden age of probabilistic programming.

~ Chris Fonnesbeck (creator of PyMC) 

Let’s convert it using PyMC. Steps to Conversion:

  1. Define the probabilistic model using PyMC’s modeling syntax.
  2. Specify the Gaussian likelihood with the time-varying mean \mu_t .
  3. Use PyMC’s built-in Metropolis sampler.
  4. Visualize the results similarly to how we did earlier.
import pymc as pm
import numpy as np
import matplotlib.pyplot as plt

# Time-dependent mean function (example: sinusoidal pattern)
def mu_t(t):
    return 10 * np.sin(0.1 * t)

# Set random seed for reproducibility
np.random.seed(42)

# Number of time points and samples
num_samples = 10000
time_steps = np.linspace(0, 1000, num_samples)

# PyMC model definition
with pm.Model() as model:
    # Prior for the time-varying parameter (mean of Gaussian)
    mu_t_values = mu_t(time_steps)

    # Observational model: Normally distributed samples with time-varying mean and fixed variance
    sigma = 1.0  # Fixed variance
    x = pm.Normal('x', mu=mu_t_values, sigma=sigma, shape=num_samples)

    # Use the Metropolis sampler explicitly
    step = pm.Metropolis()

    # Run MCMC sampling with the Metropolis step
    samples_all = pm.sample(num_samples, tune=1000, step=step, chains=5, return_inferencedata=False)

# Extract one chain's worth of samples for plotting
samples = samples_all['x'][0]  # Taking the first posterior draw (a full vector, one value per time step)

# Plot the time series of samples and the underlying mean function
plt.figure(figsize=(12, 6))

# Plot the samples over time
plt.plot(time_steps, samples, label='PyMC Metropolis Samples', alpha=0.7)

# Plot the underlying time-varying mean (true function)
plt.plot(time_steps, mu_t(time_steps), label=r'True Mean $\mu_t$', color='red', linewidth=2)

plt.title("PyMC Metropolis Sampling with Time-Varying Gaussian Distribution")
plt.xlabel("Time")
plt.ylabel("Sample Value")
plt.legend()
plt.grid(True)
plt.show()

When you execute this code you will see the following status bar:

It will be a while. Go grab your favorite beverage and take a walk…..

Output of Python Script Figure 1.1

Key Differences from the Previous Code:

PyMC Model Definition:
In PyMC, the model is defined using the pm.Model() context. The x variable is defined as a Normal distribution with the time-varying mean \mu_t . Instead of manually implementing the acceptance probability, PyMC handles this automatically with the specified sampler.

Metropolis Sampler:
PyMC allows us to specify the sampling method. Here, we explicitly use the Metropolis algorithm with pm.Metropolis().

Samples Parameter:
We specify shape=num_samples in the pm.Normal() distribution to indicate that x is a vector of random variables, one for each time step.

Plotting:
The resulting plot will show the sampled values using the PyMC Metropolis algorithm compared with the true underlying mean, similar to the earlier approach. Now, samples has the same shape as time_steps (in this case, both with 10,000 elements), allowing you to plot the sample values correctly against the time points; otherwise, the x and y axes would not align.

NOTE: We used this library at one of our previous health startups with great success.

Several optimizations are available herewith. The default sampler setting in PyMC is called NUTS (the No-U-Turn Sampler).
With NUTS there is no need to manually set the number of leapfrog steps; it automatically determines the optimal number of steps for each iteration, preventing inefficient or divergent sampling. NUTS automatically stops the trajectory when it detects that the particle is about to turn back on itself (i.e., when the trajectory “U-turns”). A U-turn means that continuing to move in the same direction would result in redundant exploration of the space and inefficient sampling. When NUTS detects this, it terminates the trajectory early, preventing unnecessary steps. Also, the acceptance rates on convergence are higher.
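As a minimal sketch of what that looks like (assuming the same sinusoidal mu_t setup as the earlier listing), simply omit the step argument and PyMC auto-assigns NUTS to continuous variables:

import numpy as np
import pymc as pm

# Same time-varying mean setup as before (the sinusoidal form is reused here).
num_points = 1000
time_steps = np.linspace(0, 1000, num_points)
mu_t_values = 10 * np.sin(0.1 * time_steps)

with pm.Model() as nuts_model:
    x = pm.Normal('x', mu=mu_t_values, sigma=1.0, shape=num_points)
    # No step argument: PyMC picks NUTS for continuous variables by default.
    idata = pm.sample(1000, tune=1000, chains=4)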

There are several references to this set of algorithms. It is truly a case of both mathematical and computational elegance.

Of course you have to know what the name means. They say words have meanings. Then again one cannot know everything.

Until Then,

#iwishyouwater <- Of all places Alabama getting the memo From Helene 2024

𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr) / X

Music To Blog By: View From The Magicians Window, The Psychic Circle

References:

[1] The Metropolis Algorithm: Theory and Examples by C Douglas Howard

[2] The Metropolis-Hastings Algorithm: A note by Danielle Navarro

[3] Github code for Sample Based Inference by bashhwu

Entire Metropolis Movie For Your Viewing Pleasure. (AFAIC The most amazing Sci-Fi movie besides BladeRunner)

What Would Nash,Shannon,Turing, Wiener and von Neumann Think?

An image of the folks as mentioned above via the GAN du jour

First, as usual, i trust everyone is safe. Second, I’ve been “thoughting” a good deal about how the world is being eaten by software and, recently, machine learning. i personally have a tough time with using the words artificial intelligence.

What Would Nash, Shannon, Turing, Wiener, and von Neumann Think of Today’s World?

The modern world is a product of the mathematical and scientific brilliance of a handful of intellectual pioneers whom i call the Horsemen of The Digital Future. i consider these humans to be my heroes and persons i aspire to emulate, whereas most of us have not accomplished one-quarter of the work product these humans created for humanity. Among these giants are Dr. John Nash, Dr. Claude Shannon, Dr. Alan Turing, Dr. Norbert Wiener, and Dr. John von Neumann. Each of them, in their own way, laid the groundwork for concepts that now define our digital and technological age: game theory, information theory, artificial intelligence, cybernetics, and computing. But what would they think if they could see how their ideas, theories, and creations have shaped the 21st century?

A little context.

John Nash: The Game Theorist

John Nash revolutionized economics, mathematics, and strategic decision-making through his groundbreaking work in game theory. His Nash Equilibrium describes how parties, whether they be countries, companies, or individuals, can find optimal strategies in competitive situations. Today, his work influences fields as diverse as economics, politics, and evolutionary biology. NOTE: Computational Consensus Not So Hard; Carbon (Human) Consensus Nigh Impossible.

The Nash equilibrium is the set of degradation strategies

    \[(E_i^*, E_j^*)\]

such that, if both players adopt it, neither player can achieve a higher payoff by changing strategies. Therefore, two rational agents should be expected to pick the Nash equilibrium as their strategy.

If Nash were alive today, he would be amazed at how game theory has permeated decision-making in technology, particularly in algorithms used for machine learning, cryptocurrency trading, and even optimizing social networks. His equilibrium models are at the heart of competitive strategies used by businesses and governments alike. With the rise of AI systems, Nash might ponder the implications of intelligent agents learning to “outplay” human actors and question what ethical boundaries should be set when AI is used in geopolitical or financial arenas.
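To make the equilibrium concrete, here is a minimal sketch (the payoff numbers are hypothetical, a Prisoner’s Dilemma style game of my own choosing) that brute-forces the pure-strategy Nash equilibria of a two-player game:

import numpy as np

# Hypothetical payoff matrices: A[i, j] is player 1's payoff and
# B[i, j] is player 2's payoff when player 1 plays i and player 2 plays j.
# Strategy 0 = cooperate, 1 = defect.
A = np.array([[3, 0],
              [5, 1]])
B = np.array([[3, 5],
              [0, 1]])

# (i, j) is a pure-strategy Nash equilibrium if neither player can do
# better by unilaterally deviating.
equilibria = []
for i in range(2):
    for j in range(2):
        if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
            equilibria.append((i, j))

print(equilibria)  # [(1, 1)] -> mutual defection, the classic result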

Claude Shannon: The Father of Information Theory

Claude Shannon’s work on information theory is perhaps the most essential building block of the digital age. His concept of representing and transmitting data efficiently set the stage for everything from telecommunications to the Internet as we know it. Shannon predicted the rise of digital communication and laid the foundations for the compression and encryption algorithms protecting our data. He also is the father of my favorite equation mapping the original entropy equation from thermodynamics to channel capacity:

    \[H = -\sum_{i=1}^{N} P_i \, \log_2 P_i\]

The sheer elegance and magnitude are unprecedented. If he were here, Shannon would witness the unprecedented explosion of data, quantities, and speeds far beyond what was conceivable in his era. The Internet of Things (IoT), big data analytics, 5G/6G networks, and quantum computing are evolutions directly related to his early ideas. He might also be interested in cybersecurity challenges, where information theory is critical in protecting global communications. Shannon would likely marvel at the sheer volume of information we produce yet be cautious of the potential misuse and the ethical quandaries regarding privacy, surveillance, and data ownership.
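As a tiny worked example (mine, for illustration only), computing H for a few sources shows why a fair coin carries exactly one bit:

import numpy as np

def shannon_entropy(p):
    # Entropy in bits of a discrete distribution p (assumed to sum to 1).
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # by convention 0 * log2(0) contributes nothing
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(shannon_entropy([0.9, 0.1]))   # ~0.469 bits: a biased coin tells you less
print(shannon_entropy([0.25] * 4))   # 2.0 bits: a fair four-sided die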

Alan Turing: The Architect of Artificial Intelligence

Alan Turing’s vision of machines capable of performing any conceivable task laid the foundation for modern computing and artificial intelligence. His Turing Machine is still a core concept in the theory of computation, and his famous Turing Test continues to be a benchmark in determining machine intelligence.

In today’s world, Turing would see his dream of intelligent machines realized, and then some. From self-driving cars to voice assistants like Siri and Alexa, AI systems increasingly mimic human capabilities in specific tasks like data analysis, pattern recognition, and simple problem-solving. While Turing would likely be excited by this progress, he might also wrestle with the ethical dilemmas arising from AI, such as autonomy, job displacement, and the dangers of creating highly autonomous AI systems, as well as calling bluff on claims that LLM systems reason in the same manner as human cognition, given their results are based on probabilistic convex optimizations. His work on breaking the Enigma code might inspire him to delve into modern cryptography and cybersecurity challenges as well. His reaction-diffusion model, from his morphogenesis equations, is foundational in explaining biological pattern formation:

Turing’s reaction-diffusion system is typically written as a system of partial differential equations (PDEs):

    \[\frac{\partial u}{\partial t} = D_u \nabla^2 u + f(u, v),\]

    \[\frac{\partial v}{\partial t} = D_v \nabla^2 v + g(u, v),\]

where:

  • u and v are concentrations of two chemical substances (morphogens),
  • D_u and D_v are diffusion coefficients for u and v ,
  • \nabla^2 is the Laplacian operator, representing spatial diffusion,
  • f(u, v) and g(u, v) are reaction terms representing the interaction between u and v .
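To see the mechanism in action, here is a minimal finite-difference sketch of one classic instance of this system (the Gray-Scott reaction terms and parameter values below are my assumed choices for illustration, not anything from Turing’s paper):

import numpy as np

# Gray-Scott instance: f(u, v) = -u*v^2 + F*(1 - u), g(u, v) = u*v^2 - (F + k)*v
n, steps = 128, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065

u = np.ones((n, n))
v = np.zeros((n, n))
# Seed a small square of v in the center to break symmetry.
u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def laplacian(z):
    # Five-point stencil with periodic boundaries (the \nabla^2 term above).
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# u and v now hold spots and stripes grown purely from diffusion-driven
# instability, the phenomenon Turing predicted on paper in 1952.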

In addition to this, his contributions to cryptography and game theory alone are unfathomable.
In his famous paper, “Computing Machinery and Intelligence,” Turing posed the question, “Can machines think?” He proposed the Turing Test as a way to assess whether a machine can exhibit intelligent behavior indistinguishable from a human. This test has been a benchmark in AI for evaluating a machine’s ability to imitate human intelligence.

Given the recent advances made with large language models, I believe he would find them amusing, though not evidence that they think or reason.

Norbert Wiener: The Father of Cybernetics

Norbert Wiener’s theory of cybernetics explored the interplay between humans, machines, and systems, particularly how systems could regulate themselves through feedback loops. His ideas greatly influenced robotics, automation, and artificial intelligence. He wrote the books “Cybernetics” and “The Human Use of Human Beings”. During World War II, his work on the automatic aiming and firing of anti-aircraft guns caused Wiener to investigate information theory independently of Claude Shannon and to invent the Wiener filter. (The now-standard practice of modeling an information source as a random process, in other words as a variety of noise, is due to Wiener.) Initially, his anti-aircraft work led him to write, with Arturo Rosenblueth and Julian Bigelow, the 1943 article “Behavior, Purpose and Teleology”. He was also a complete pacifist. What was said about those who can hold two opposing views?

If Wiener were alive today, he would be fascinated by the rise of autonomous systems, from drones to self-regulated automated software, and the increasing role of cybernetic organisms (cyborgs) through advancements in bioengineering and robotic prosthetics. He, I would think, would also be amazed that we could do real-time frequency domain filtering based on his theories. However, Wiener’s warnings about unchecked automation and the need for human control over machines would likely be louder today. He might be deeply concerned about the potential for AI-driven systems to exacerbate inequalities or even spiral out of control without sufficient ethical oversight. The interaction between humans and machines in fields like healthcare, where cybernetics merges with biotechnology, would also be a keen point of interest for him.

John von Neumann: The Architect of Modern Computing

John von Neumann’s contributions span so many disciplines that it’s difficult to pinpoint just one. He’s perhaps most famous for his von Neumann architecture, the foundation of most modern computer systems, and his contributions to quantum mechanics and game theory. His visionary thinking on self-replicating machines even predated discussions of nanotechnology.

Von Neumann would likely be astounded by the ubiquity and power of modern computers. His architectural design is the backbone of nearly every device we use today, from smartphones to supercomputers. He would also find significant developments in quantum computing, aligning with his quantum mechanics work. As someone who worked on the Manhattan Project (alongside Oppenheimer), von Neumann might also reflect on the dual-use nature of technology: the incredible potential of AI, nuclear power, and autonomous weapons to both benefit and harm humanity. His early concerns about the potential for mutual destruction could be echoed in today’s discussions on AI governance and existential risks.

What Would They Think Overall?

Together, these visionaries would undoubtedly marvel at how their individual contributions have woven into the very fabric of today’s society. The rapid advancements in AI, data transmission, computing power, and autonomous systems would be thrilling, but they might also feel a collective sense of responsibility to ask:

Where do we go from here?

Once again Oh Dear Reader You pre-empt me….

A colleague sent me this paper, which was the impetus for this blog:

My synopsis of said paper:


“The Tensor as an Informational Resource” discusses the mathematical and computational importance of tensors as resources, particularly in quantum mechanics, AI, and computational complexity. The authors propose new preorders for comparing tensors and explore the notion of tensor rank and transformations, which generalize key problems in these fields. This paper is vital for understanding how the foundational work of Nash, Shannon, Turing, Wiener, and von Neumann has evolved into modern AI and quantum computing. Tensors offer a new frontier in scientific discovery, building on their theories and pushing the boundaries of computational efficiency, information processing, and artificial intelligence. It is an extension of their legacy, providing a mathematical framework that could revolutionize our interaction with quantum information and complex systems, and it is fundamental to systems that appear to learn, where information-theoretic transforms are the very Rosetta Stone of how we perceive the world through perceptual filters of reality.

This shows the continuing relevance of ALL their ideas in today’s rapidly advancing AI and fluid computing technological landscape.

They might question whether today’s technology has outpaced ethical considerations and whether the systems they helped build are being used for the betterment of all humanity. Surveillance, privacy, inequality, and autonomous warfare would likely weigh heavily on their minds. Yet, their boundless curiosity and intellectual rigor would inspire them to continue pushing the boundaries of what’s possible, always seeking new answers to the timeless question of how to create the future we want and live better, more enlightened lives through science and technology.

Their legacy lives on, but so does their challenge to us: to use the tools they gave us wisely for the greater good of all.

Or would they be dismayed that we use all of this technology to make a PowerPoint to save time so we can watch TikTok all day?

Until Then,

#iwishyouwater <- click and see folks who got the memo

𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr) / X

Music To Blog By: Bach: Mass in B Minor, BWV 232. By far my favorite composer. The John Eliot Gardiner and Monteverdi Choir version circa 1985 is astounding.

SnakeByte[16]: Enhancing Your Code Analysis with pyastgrep

Dalle 3’s idea of an Abstract Syntax Tree in R^3 space

If you would know strength and patience, welcome the company of trees.

~ Hal Borland

First, I hope everyone is safe. Second, I am changing my usual SnakeByte[] process. I am pulling this from a website I ran across. I saw the library mentioned, so I decided to pull from the LazyWebTM instead of the usual snake-based tomes I have in my library.

As a Python developer, understanding and navigating your codebase efficiently is crucial, especially as it grows in size and complexity. Trust me, it will, as does Entropy. Traditional search tools like grep or IDE-based search functionalities can be helpful, but they often cannot “understand” the structure of Python code, sans some of the Co-Pilot developments. (I’m using “understand” here *very* loosely, Oh Dear Reader).

This is where pyastgrep comes into play, offering a powerful way to search and analyze your Python codebase using Abstract Syntax Trees (ASTs). While going into the theory of ASTs is tl;dr for a SnakeByte[], and there appears to be some ambiguity on the history and definition of who actually invented ASTs, i have placed some references at the end of the blog for your reading pleasure, Oh Dear Reader. In parlance, if you have ever worked on compilers or core embedded systems, you will know Abstract Syntax Trees as data structures widely used in compilers and the like to represent the structure of program code. An AST is usually the result of the syntax analysis phase of a compiler. It often serves as an intermediate representation of the program through several stages that the compiler requires and has a strong impact on the final output of the compiler.
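Before reaching for the tool itself, it is worth seeing what Python already exposes. A quick sketch of mine using the standard library ast module shows the tree that pyastgrep queries against:

import ast

source = """
def add(a, b):
    return a + b
"""

tree = ast.parse(source)
print(ast.dump(tree, indent=2))
# Prints a Module containing a FunctionDef named 'add', its arguments,
# and a Return node wrapping a BinOp -- the node names you will query below.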

So what is the Python Library that you speak of? i’m Glad you asked.

What is pyastgrep?

pyastgrep is a command-line tool designed to search Python codebases with an understanding of Python’s syntax and structure. Unlike traditional text-based search tools, pyastgrep leverages the AST, allowing you to search for specific syntactic constructs rather than just raw text. This makes it an invaluable tool for code refactoring, auditing, and general code analysis.

Why Use pyastgrep?

Here are a few scenarios where pyastgrep excels:

  1. Refactoring: Identify all instances of a particular pattern, such as function definitions, class instantiations, or specific argument names.
  2. Code Auditing: Find usages of deprecated functions, unsafe code patterns, or adherence to coding standards.
  3. Learning: Explore and understand unfamiliar codebases by searching for specific constructs.

I have a mantra: Reduce, Refactor, and Reuse. Please raise your hand if y’all need to refactor your code. (C’mon now, no one is watching… tell the truth…). See if it is possible to reduce the code footprint, refactor the code into more optimized transforms, and then let others reuse it across the enterprise.

Getting Started with pyastgrep

Let’s explore some practical examples of using pyastgrep to enhance your code analysis workflow.

Installing pyastgrep

Before we dive into how to use pyastgrep, let’s get it installed. You can install pyastgrep via pip:

(base)tcjr% pip install pyastgrep  # don't actually type the (base)tcjr% part, that is my prompt showing my virtualenv

Example 1: Finding Function Definitions

Suppose you want to find all function definitions in your codebase. pyastgrep queries are XPath expressions evaluated against an XML rendering of each file’s AST, so this is straightforward:

pyastgrep './/FunctionDef'

This command searches for all function definitions (FunctionDef nodes) in your codebase, providing a list of files and line numbers where these definitions occur. Ok, a pretty basic structural search.

Example 2: Searching for Specific Argument Names

Imagine you need to find all functions that take an argument named config. This is how you can do it:

pyastgrep './/arg[@arg="config"]'

This query searches for function arguments named config, helping you quickly locate where configuration arguments are being used.

Example 3: Finding Class Instantiations

To find all instances where a particular class, say MyClass, is instantiated, you can use:

pyastgrep './/Call/func/Name[@id="MyClass"]'

This command searches for instantiations of MyClass, making it easier to track how and where specific classes are utilized in your project.

Advanced Usage of pyastgrep

For more complex queries, you can combine multiple AST nodes. For instance, to find all print statements in your code, you might use:

pyastgrep './/Call/func/Name[@id="print"]'

This command finds all calls to the print function. You can also use more detailed queries to find nested structures or specific code patterns.

Integrating pyastgrep into Your Workflow

Integrating pyastgrep into your development workflow can greatly enhance your ability to analyze and maintain your code. Here are a few tips:

  1. Pre-commit Hooks: Use pyastgrep in pre-commit hooks to enforce coding standards or check for deprecated patterns.
  2. Code Reviews: Employ pyastgrep during code reviews to quickly identify and discuss specific code constructs.
  3. Documentation: Generate documentation or code summaries by extracting specific patterns or structures from your codebase.

Example Script

To get you started, here’s a simple Python script using pyastgrep to search for all function definitions in a directory:

from subprocess import run

def search_function_definitions(directory):
    # Shell out to pyastgrep and capture its stdout.
    result = run(['pyastgrep', './/FunctionDef', directory],
                 capture_output=True, text=True)
    print(result.stdout)

if __name__ == "__main__":
    directory = "path/to/your/codebase"  # yes this is not optimal folks just an example.
    search_function_definitions(directory)

Replace "path/to/your/codebase" with the actual path to your Python codebase, and run the script to see pyastgrep in action.

Conclusion

pyastgrep is a powerful tool that brings the capabilities of AST-based searching to your fingertips. By understanding and leveraging the syntax and structure of your Python code, pyastgrep allows for more precise and meaningful code searches. Whether you’re refactoring, auditing, or simply exploring code, pyastgrep can significantly enhance your productivity and code quality. This is a great direct addition to your arsenal. Hope it helps, and i hope you found this interesting.

Until Then,

#iwishyouwater <- The best of the best at Day1 Tahiti Pro presented by Outerknown 2024

𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr) / X

MUZAK to Blog By: SweetLeaf: A Stoner Rock Salute to Black Sabbath. While i do not really like bands that do covers, this is very well done. For other references to the Best Band In Existence (Black Sabbath) i also refer you to Nativity in Black Volumes 1 & 2.

References:

[1] Basics Of AST

[2] The person who made pyastgrep

[3] Wikipedia page on AST

SnakeByte [15]: Debugging With pdb In The Trenches

Dalle3 idea of debugging code from the view of the code itself.

If debugging is the process of removing software bugs, then programming must be the process of putting them in.

~ Edsger W. Dijkstra

Image Explanation: Above is the image showing the perspective of debugging code from the viewpoint of the code itself. The scene features lines of code on a large monitor, with sections highlighted to indicate errors. In the foreground, anthropomorphic code characters are working to fix these highlighted sections, set against a digital landscape of code lines forming a cityscape.

So meta and canonical.

In any event, Dear Readers, it is Wednesday! That means, as usual (every day is Wednesday in a startup, or you actually work at a company where you enjoy the work), it is SnakeByte[] time!

i haven’t written a SnakeByte in quite some time. Also, recently, in a previous blog, I mentioned that I had a little oversight on my part, and my blog went down. i didn’t have alerting turned on in ye ole AWS, and those gosh darn binlogs were not deleting in ye ole WordPress, as such we took the proverbial digger into downtime land. i re-provisioned with an additional sizing of the volume, changed the disk from magnetic to SSD, turned on alerts, and we are back in business.

So, per my usual routine, i grab one of the Python books from the bookshelf at random, open it, read about a command or commands, and attempt to write a blog as fast as possible.

Today’s random command from Python In A Nutshell is pdb, the Python Debugger. i’ll walk you through the basics of using pdb to debug your Python code, which, as it turns out, is better than a bunch of print() statements.

Getting Started with pdb

To leverage pdb, import it in your Python script. You can then set breakpoints manually with pdb.set_trace(). When the execution hits this line, the script pauses, allowing you to engage in an interactive debugging session.

Consider this straightforward example:

# example.py
import pdb

def add(a, b):
    result = a + b
    return result

def main():
    x = 10
    y = 20
    pdb.set_trace()  # Breakpoint set here
    total = add(x, y)
    print(f'Total: {total}')

if __name__ == '__main__':
    main()

Here, we have a simple add function and a main function that invokes add. The pdb.set_trace() in the main function sets a breakpoint where the debugger will halt execution.

Running the Code with pdb

Execute the script from the command line to initiate the debugger:

python example.py

When the execution reaches pdb.set_trace(), you will see the debugger prompt ((Pdb)):

> /path/to/example.py(11)main()
-> total = add(x, y)
(Pdb)

Essential pdb Commands

Once at the debugger prompt, several commands can help you navigate and inspect your code. Key commands include:

  • l (list): Displays the surrounding source code.
  • n (next): Moves to the next line within the same function.
  • s (step): Steps into the next function call.
  • c (continue): Resumes execution until the next breakpoint.
  • p (print): Evaluates and prints an expression.
  • q (quit): Exits the debugger and terminates the program.

Let’s walk through using these commands in our example:

List the surrounding code:

(Pdb) l
  6         def main():
  7             x = 10
  8             y = 20
  9             pdb.set_trace()  # Breakpoint here
 10  ->         total = add(x, y)
 11             print(f'Total: {total}')
 12     
 13         if __name__ == '__main__':
 14             main()

Print variable values:

(Pdb) p x
10
(Pdb) p y
20

Step into the add function:

(Pdb) s
> /path/to/example.py(3)add()
-> result = a + b
(Pdb)

Inspect parameters a and b:

(Pdb) p a
10
(Pdb) p b
20

Continue execution:

(Pdb) c
Total: 30

Advanced pdb Features

For more nuanced debugging needs, pdb offers advanced capabilities:

Conditional Breakpoints: Trigger breakpoints based on specific conditions:

if x == 10:
    pdb.set_trace()

Post-Mortem Debugging: Analyze code after an exception occurs:

import pdb

def faulty_function():
    return 1 / 0

try:
    faulty_function()
except ZeroDivisionError:
    pdb.post_mortem() #they need to add a pre-mortem what can go wrong will...

Command Line Invocation: Run a script under pdb control directly from the command line like the first simple example:

python -m pdb example.py
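One small modern convenience worth noting (a standard-library fact since Python 3.7, not from the tome i pulled today’s command from): the built-in breakpoint() drops you into pdb with no import at all:

def add(a, b):
    breakpoint()  # equivalent to: import pdb; pdb.set_trace()
    return a + b

add(10, 20)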

Effective debugging is crucial for building robust and maintainable software. pdb provides a powerful, interactive way to understand and fix your Python code. By mastering pdb, you can streamline your debugging process, saving time and enhancing your productivity.

pdb, the Python Debugger, is an indispensable tool for any developer. It allows us to set breakpoints, step through code, inspect variables, and evaluate expressions on the fly. This level of control is crucial for diagnosing and resolving issues efficiently. While i used the CLI in the examples, it also works with Jupyter Notebooks.

We’ve covered the basics and advanced features of pdb, equipping you with the knowledge to tackle even the most challenging bugs. As you integrate these techniques into your workflow, you’ll find that debugging becomes less of a chore and more of a strategic advantage unless you create a perfect design which is no code at all!

Until then,

#iwishyouwater <- La Vaca Gigante Wipeouts 2024

𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr)

Muzak to Blog By: Apostrophe (’) by Frank Zappa. i was part of something called the Zappa Ensemble in graduate school as one of the engineers. The musicians were amazing. Apostrophe (’) is an amazing album. The solo on Uncle Remus is just astounding, as are most of his solos.

Review: Flow Research Collective

Bonafide in The Art of The Flow

I will persist until I succeed.

I was not delivered unto this world in defeat, nor does failure course in my veins. I am not a sheep waiting to be prodded by my shepherd. I am a lion and I refuse to talk, to walk, to sleep with the sheep. I will hear not those who weep and complain, for their disease is contagious. Let them join the sheep. The slaughterhouse of failure is not my destiny.

I will persist until I succeed.

OG Mandino

First, as always, Dear Readers, i trust everyone is safe. Second, whilst i have not written in a while, that does not mean i have not been “thoughting” of things to write about for You, Oh Dear Reader. Third, software is hard, and there was a glitch in the matrix, and my site was down for a bit.

Starting last year on April 24th, 2023, with Matthew McConaughey’s “Art Of Living” worldwide class that was, in fact, a precursor to a class with him and Tony Robbins dedicated to looking into yourself and figuring out exactly what you want – sound familiar? However, this was not for me to use for others but for me – period. I knew that this was a stepping stone to the class that I was going to write about, a class given by the Flow Research Collective. After i took “The Art of Living” class, i knew a Flow Research Collective class was starting over “The Holidays” in December 2023. Knowing full well that i would be in the throes of work at my new gig and also “The Holidays”, i told myself just like i tell others: “The best time to plant a tree was yesterday. The next best time is Now.” So i registered for the 9-week class. At the time, i was very familiar with Steven Kotler, the founder of FRC, given i had read many of his books:

  1. “Abundance: The Future Is Better Than You Think” (2012) – Co-authored with Peter H. Diamandis
  2. “Bold: How to Go Big, Create Wealth and Impact the World” (2015) – Co-authored with Peter H. Diamandis
  3. “The Rise of Superman: Decoding the Science of Ultimate Human Performance” (2014)
  4. “Tomorrowland: Our Journey from Science Fiction to Science Fact” (2015) – Co-authored with Peter H. Diamandis
  5. “Stealing Fire: How Silicon Valley, the Navy SEALs, and Maverick Scientists Are Revolutionizing the Way We Live and Work” (2017) – Co-authored with Jamie Wheal
  6. “The Future is Faster Than You Think: How Converging Technologies Are Transforming Business, Industries, and Our Lives” (2020) – Co-authored with Peter H. Diamandis
  7. “The Art of The Impossible: A Peak Performance Primer” (2021)
  8. “Gnar Country: Growing Old and Staying Rad” (2023)

I have read all of the ones concerning human performance.

Why did I push this off till now? Well, denial is an amazing psychological force.

In the realm of human performance, few concepts hold as much promise and intrigue as the state of flow. Coined by psychologist Mihaly Csikszentmihalyi, flow refers to a mental state of complete immersion and energized focus in an activity, where individuals experience profound enjoyment and peak performance. Flow is not just a fleeting moment of productivity; it’s a state where time seems to warp, self-vanishes, and optimal performance becomes effortless. Harnessing the power of flow can unlock human potential in remarkable ways.

The Flow Research Collective (FRC) is an organization that has made it its mission to understand, master, and utilize the principles of flow to help individuals and organizations achieve peak performance consistently. Founded by Steven Kotler, a prolific author and leading expert on the subject, and Rian Doris, the CEO, the FRC has journeyed from humble beginnings to becoming a powerhouse in the field of human performance enhancement.

Origins: The Spark of Inspiration

The story of the Flow Research Collective begins with Steven Kotler’s own personal journey. Struggling with Lyme disease, Kotler found himself facing physical and cognitive limitations that profoundly impacted his life and work. Determined to overcome these challenges, he delved deep into the science of human performance, stumbling upon the concept of flow.

Kotler’s fascination with flow led him to explore its intricacies, drawing from neuroscience, psychology, and various research fields. As he began understanding flow mechanics and its transformative potential, he realized the need to share this knowledge with the world. Thus, the seeds of the Flow Research Collective were planted.

Building Momentum: From Vision to Reality

Armed with a vision to unlock human potential through flow, Kotler embarked on a journey to build the Flow Research Collective from the ground up. Collaborating with like-minded individuals and experts in various domains, he set out to create a platform that would serve as a hub for research, education, and practical flow applications.

The early days were marked by relentless dedication and a commitment to excellence. Kotler and his team immersed themselves in the latest scientific literature, conducted experiments, and engaged with practitioners from diverse fields to gain insights into the nature of flow. Through trial and error, they refined their methodologies, developing frameworks and tools to help individuals cultivate flow and achieve peak performance.

Cultivating a Community: The Power of Connection

Central to the Flow Research Collective’s success is its ability to foster a vibrant and engaged community of flow enthusiasts. Through workshops, seminars, online courses, and collaborative projects, the FRC has brought together individuals from all walks of life who share a common passion for unlocking human potential.

The community’s collective nature has been instrumental in accelerating learning and innovation. By sharing experiences, exchanging ideas, and supporting one another, members of the FRC have been able to tap into the group’s collective wisdom, amplifying their individual efforts and achievements.

From Zero to Dangerous: Mastering the Art of Flow

The term “zero to dangerous” (ZTD) encapsulates the ultimate goal of the Flow Research Collective: to empower individuals to transition from a state of inexperience or mediocrity to one of mastery and excellence. Drawing inspiration from the language of fighter pilots who aim to go from zero to dangerous in their skill level, the FRC seeks to help individuals reach a level of proficiency where they can navigate life’s challenges with confidence and grace.

Achieving this level of mastery requires more than just theoretical knowledge; it demands practice, discipline, and a willingness to push beyond one’s comfort zone. Through a combination of cutting-edge research, immersive training experiences, and personalized coaching, the FRC equips individuals with the tools and techniques they need to harness the power of flow and unleash their full potential.

Looking Ahead: A Future of Possibilities

As the Flow Research Collective grows and evolves, the possibilities are endless. From helping athletes and artists achieve peak performance to revolutionizing the way businesses operate, the principles of flow have the potential to transform every aspect of human endeavor.

With advances in technology, neuroscience, and our understanding of human psychology, the FRC is poised to unlock new frontiers in human performance enhancement. By staying true to its mission of understanding, mastering, and leveraging the power of flow, the Flow Research Collective is paving the way for a future where individuals and organizations can thrive like never before.

What is FLOW?

Specifically, “Flow” occurs when individuals are fully immersed in a task, experiencing deep focus, high levels of enjoyment, and a sense of timelessness. In this state, individuals often report feeling in control, highly motivated, and completely absorbed in the activity at hand. Flow typically occurs when the challenge of a task matches an individual’s skill level, leading to a harmonious balance that encourages peak performance and creativity. Achieving flow can enhance productivity, increase well-being, and provide a sense of fulfillment. The class mentioned herewith trains you to optimize and balance the release of neurochemicals.

In the state of flow, several neurotransmitters and neurochemicals are released, contributing to the heightened sense of focus, motivation, and well-being experienced by individuals. Some of the key neurochemicals involved include:

  1. Dopamine: Often referred to as the “feel-good” neurotransmitter, dopamine is associated with motivation, reward, and pleasure. During flow, dopamine levels increase, reinforcing the behavior and enhancing the feeling of satisfaction associated with being in the zone.
  2. Endorphins: Endorphins are natural painkillers produced by the body and contribute to feelings of euphoria and well-being. In flow, endorphin levels rise, potentially reducing the perception of discomfort or fatigue and promoting a sense of exhilaration.
  3. Serotonin: Serotonin affects mood regulation, emotional balance, and overall well-being. Increased serotonin levels during flow can contribute to a sense of calmness, contentment, and happiness.
  4. Anandamide: Anandamide is a neurotransmitter associated with bliss, joy, and relaxation. Elevated levels of anandamide during flow may enhance individuals’ overall sense of well-being and pleasure.
  5. Norepinephrine: Norepinephrine plays a role in attention, focus, and arousal. In flow, norepinephrine levels increase, heightening alertness, enhancing concentration, and promoting a state of intense focus on the task at hand.

So this class was much more than just a recipe for flow. It was mapping what is called your Maximally Transformative Process.

The Maximally Transformative Process (MTP) refers to a structured approach or methodology designed to help individuals achieve peak performance states such as flow more consistently and experience significant personal and professional growth.

This process typically involves a combination of research-based strategies, tools, and techniques derived from fields such as neuroscience, psychology, and peak performance coaching. It aims to help individuals identify and leverage their strengths, optimize their environment for flow, and cultivate the necessary mindset and skills to enter flow states more reliably.

The maximally transformative process often includes elements such as:

  1. Flow Triggers: Identifying specific triggers or conditions that reliably induce flow states for an individual, such as clear goals, immediate feedback, and a balance between challenge and skill.
  2. Flow Cycles: Understanding the stages of the flow cycle (struggle, release, flow, and recovery) and learning to navigate through them effectively to maximize performance and growth.
  3. Psychological Skills Training: Developing mental skills such as focus, resilience, and mindfulness to enhance the ability to enter and sustain flow states under varying conditions.
  4. Environmental Optimization: Structuring one’s physical and social environment to minimize distractions, maximize motivation, and promote optimal conditions for flow.
  5. Feedback and Reflection: Cultivating a practice of self-awareness, reflection, and continuous learning to refine performance and maintain momentum over time.

The actual class was related to achieving this process. As i mentioned earlier, there was a registration process. Upon registration, one is contacted by a representative from FRC. The person who contacted me for a qualifying interview was Maleke Fuentes. He was amazing during the qualification process. He discussed his background and how he became involved with FRC. He was very forthcoming, and i directly asked if FRC accepted all applicants. He flatly stated – NO.

Once you are accepted, you are dropped into both virtual and live classes. Operationally, the class consists of a pod that meets twice weekly, plus 1:1 time with your respective coach.

My coach was the amazing Marcus Lefton. He was very forthcoming and extremely insightful. He openly shared his amazing background and was very candid in both pod and 1:1 classes. Given his background, he led by example and proverbially “ate his own dog food,” as they say in the software space. He could go vertically deep and horizontally broad in his recommendations, whether operational, physical, or psychological. The FRC experience is extremely life-changing.

The class is broken into steps that take you progressively deeper into the rabbit hole. As one would expect, this can become extremely self-referential, which is the goal of the class.

For instance, there is an exercise where we were given 90 seconds to write down at least 15 things YOU do well. i, in full transparency, fully failed; i got to about two, maybe three. In one of Stephen Kotler’s books, he states to write down 25 things you do well. It is difficult. Further, the suggestions, and they are brutal in many cases, are counter-intuitive, and they work.

Near the end of the class, we had a 1:1 during which we really drilled down into “my” Maximally Transformative Process. He was extremely candid and stated, “Ted, you are usually the shaman and/or the genie that grants everyone else’s wishes. Now the genie is standing before you, asking you what you truly want.” i was very taken aback, as i don’t think in these terms. i just amplify folks at best.

i have not been the same since. Thank you Marcus.

In short, go look into the class. While it is not cheap, how much is your mental and physical health really worth?

As a wise man once said, “People who don’t need self-help books read them, and people who need them don’t read them.” That was certainly the case here, as all the folks in the pod i was in were very high performers.

The journey of the Flow Research Collective from Zero To Dangerous is a testament to the transformative power of flow. By unlocking the secrets of peak performance and sharing them with the world, the FRC is helping individuals tap into their innate potential and achieve extraordinary feats. One thing is clear as we look to the future: the flow revolution is just beginning, and the possibilities are limitless.

Personally, i can’t say enough about the class and the people. Here is the link to the class -> Flow Research Class.

Go invest in yourself.

Until Then,

#iwishyouwater <- Cloudbreak from the surfing, waves and soundtrack.

Muzak To Blarg by: “Bach Synthesis: 15 Inventions”. Amazing.

Snake_Byte[15] Fourier, Discrete and Fast Transformers

The frequency domain of mind (a mind, it must be stressed, is an unextended, massless, immaterial singularity) can produce an extended, spacetime domain of matter via ontological Fourier mathematics, and the two domains interact via inverse and forward Fourier transforms.

~ Dr. Cody Newman, The Ontological Self: The Ontological Mathematics of Consciousness

I am Optimus Transformer Ruler Of The AutoCorrelation Bots

First i trust everyone is safe. i haven’t written a technical blog in a while, so i figured i would write a Snake_Byte on one of my favorite equations, The Fourier Transform:

    \[\hat{f} (\xi)=\int_{-\infty}^{\infty}f(x)e^{-2\pi ix\xi}dx\]

More specifically, we will be dealing with the Fast Fourier Transform (FFT), which is an implementation of the Discrete Fourier Transform (DFT). The Fourier Transform operates on continuous signals, and while i do believe we will have analog computing devices (again) in the future, we have to operate on 0’s and 1’s at this juncture; thus we have a discrete version thereof. The discrete version:

    \[f[k] = \sum_{j=0}^{N-1} x[j]\left(e^{-2\pi i k/N}\right)^j, \qquad 0 \leq k < N\]

which the FFT evaluates recursively via the butterfly relations, where f_e[k] and f_o[k] are the DFTs of the even- and odd-indexed samples:

    \[f[k] = f_e[k] + e^{-2\pi i k/N} f_o[k], \qquad f[k+N/2] = f_e[k] - e^{-2\pi i k/N} f_o[k]\]
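To make the butterfly concrete, here is a minimal recursive radix-2 Cooley-Tukey sketch in Python. It assumes the input length is a power of two and is for illustration only; the library routines used later in this post are what you would reach for in practice.

import numpy as np

def fft_radix2(x):
    # Minimal recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    f_e = fft_radix2(x[0::2])  # DFT of the even-indexed samples
    f_o = fft_radix2(x[1::2])  # DFT of the odd-indexed samples
    k = np.arange(N // 2)
    twiddle = np.exp(-2j * np.pi * k / N)
    # The butterfly: f[k] = f_e[k] + W f_o[k], f[k + N/2] = f_e[k] - W f_o[k]
    return np.concatenate([f_e + twiddle * f_o,
                           f_e - twiddle * f_o])

# Sanity check against numpy's reference implementation:
x = np.random.rand(8)
assert np.allclose(fft_radix2(x), np.fft.fft(x))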

The Discrete Fourier Transform (DFT) is a mathematical operation. The Fast Fourier Transform (FFT) is an efficient algorithm for the evaluation of that operation (actually, a family of such algorithms). However, it is easy to get these two confused. Often, one may see a phrase like “take the FFT of this sequence”, which really means to take the DFT of that sequence using the FFT algorithm to do it efficiently.

The Fourier sequence is a kernel operation for any number of transforms, where the kernel is matched to the signal if possible. The Fourier Transform decomposes a signal into a series of sin(\theta) and cos(\theta) terms, which makes it really useful for audio and radar analysis.

The FFT takes only O(n\log{}n) operations for the sequence computation, versus O(n^2) for the direct DFT, and as one would imagine this is a substantial gain. The most commonly used FFT algorithm is the Cooley-Tukey algorithm, named after J. W. Cooley and John Tukey. It is a divide-and-conquer algorithm for the machine calculation of complex Fourier series: it breaks the DFT into smaller DFTs. Other FFT algorithms include Rader’s algorithm, the Winograd Fourier transform algorithm, the chirp Z-transform algorithm, etc. The only rub comes as a function of the latency-versus-throughput trade-off.
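As a rough illustration of that gain, here is a minimal sketch that times a naive O(n^2) DFT, built as a dense matrix-vector product, against scipy’s FFT. Exact numbers will vary by machine, but the gap grows quickly with n.

import time
import numpy as np
from scipy import fftpack

def naive_dft(x):
    # Direct O(n^2) evaluation of the DFT as a matrix-vector product
    n = len(x)
    j = np.arange(n)
    W = np.exp(-2j * np.pi * np.outer(j, j) / n)  # the full DFT matrix
    return W @ x

x = np.random.rand(2048)

t0 = time.perf_counter()
naive = naive_dft(x)
t1 = time.perf_counter()
fast = fftpack.fft(x)
t2 = time.perf_counter()

assert np.allclose(naive, fast)  # same operation, different algorithm
print(f"naive DFT: {t1 - t0:.4f}s  FFT: {t2 - t1:.6f}s")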

There have been amazing textbooks written on this subject, and i will list them at the end of the blarg [1,2,3].

So let’s get on with some code. First, we do the usual housekeeping of importing libraries, as well as some majik for inline display if you are using Jupyter Notebooks. Of note is fftpack, which wraps FFTPACK, a package of Fortran subroutines for the fast Fourier transform. It includes complex, real, sine, cosine, and quarter-wave transforms. It was developed by Paul Swarztrauber of the National Center for Atmospheric Research and is included in the general-purpose mathematical library SLATEC.

# Housekeeping: library imports and inline plots
import numpy as np
from scipy import fftpack
%matplotlib inline
import matplotlib.pyplot as plt

We now set up a signal by creating a sinusoid with a given sample rate. We use linspace to set up the time base over the signal length.

# Frequency of the sinusoid in cycles per second (Hertz)
Frequency = 20

# Sampling rate: the number of measurements per second
Sampling_Frequency = 100

# Set up the signal space: 2 seconds of evenly spaced samples
time = np.linspace(0, 2, 2 * Sampling_Frequency, endpoint=False)
signal = np.sin(Frequency * 2 * np.pi * time)

Next we plot the sinusoid under consideration:

# plot the signal in the time domain:
fig, ax = plt.subplots()
ax.plot(time, signal)
ax.set_xlabel('Time [seconds]')
ax.set_ylabel('Signal Amplitude')

Next we apply the Fast Fourier Transform and transform into the frequency domain:

# Transform the signal into the frequency domain:
X_Hat = fftpack.fft(signal)
# Map each FFT bin index to its frequency in Hertz:
Frequency_Component = fftpack.fftfreq(len(signal)) * Sampling_Frequency

We now plot the transformed sinusoid depicting the frequencies we generated:

# plot frequency components of the signal:
fig, ax = plt.subplots()
ax.stem(Frequency_Component, np.abs(X_Hat))  # absolute value of spectrum
ax.set_xlabel('Frequency in Hertz [Hz] Of Transformed Signal')
ax.set_ylabel('Frequency Domain (Spectrum) Magnitude')
ax.set_xlim(-Sampling_Frequency / 2, Sampling_Frequency / 2)
ax.set_ylim(-5, 110)

To note, you will see two frequency components, one at +20 Hz and one at -20 Hz. A real-valued signal has a conjugate-symmetric spectrum, so its energy shows up at both positive and negative frequencies, which is what we see using the stem plot, as expected. This is because the kernel, as mentioned before, contains both sin(\theta) and cos(\theta).
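You can check this symmetry directly. For a real-valued input, each bin at +f is the complex conjugate of the bin at -f; a quick sanity check reusing X_Hat from above:

# For a real signal, X[k] = conj(X[N-k]), so the bins pair off as conjugates
# and the magnitude spectrum is symmetric about 0 Hz:
assert np.allclose(X_Hat[1:], np.conj(X_Hat[1:][::-1]))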

So something really cool happens when using the FFT. It is called the convolution theorem, also referred to as dual-domain theory. Convolution in the time domain yields multiplication in the frequency domain. Mathematically, the convolution theorem states that, under suitable conditions, the Fourier transform of a convolution of two functions (or signals) is the point-wise (Hadamard) product of their Fourier transforms. More generally, convolution in one domain (e.g., the time domain) equals point-wise multiplication in the other domain (e.g., the frequency domain).

Where:

    \[x(t) * h(t) = y(t)\]

    \[X(f)\,H(f) = Y(f)\]
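Here is a minimal numerical sanity check of the theorem, separate from the signal example above. One caveat: point-wise multiplication of FFTs gives circular convolution, so we zero-pad both signals to length len(x) + len(h) - 1 to recover ordinary linear convolution and compare against np.convolve.

import numpy as np

x = np.random.rand(64)   # arbitrary test signal
h = np.random.rand(16)   # arbitrary test filter

# Zero-pad so point-wise multiplication in frequency equals linear convolution:
n = len(x) + len(h) - 1
Y = np.fft.fft(x, n) * np.fft.fft(h, n)   # X(f) H(f) = Y(f)
y_fft = np.real(np.fft.ifft(Y))           # back to the time domain

y_direct = np.convolve(x, h)              # x(t) * h(t) = y(t)
assert np.allclose(y_fft, y_direct)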

So there you have it. A little taster on the powerful Fourier Transform.

Until Then,

#iwishyouwater <- Cloudbreak this past year

Muzak To Blarg by: Voyager Essential Max Richter. Phenomenal. November is truly staggering.

References:

[1] The Fourier Transform and Its Applications by Dr. Ronald N. Bracewell. i had the honor of taking the actual class at Stanford University from Professor Bracewell.

[2] The Fast Fourier Transform and Its Applications by E. Oran Brigham. Great book on the butterfly and overlap-add derivations thereof.

[3] Adaptive Digital Signal Processing by Dr. Claude Lindquist. A phenomenal book on frequency domain signal processing and kernel analysis. A book ahead of its time. Professor Lindquist was a mentor and had a direct effect and affect on my career and the way i approach information theory.