SnakeByte[20] Fractals Fractals Everywhere

Grok's Idea of a Luminous 3D Fractal

Abandon the urge to simplify everything, to look for formulas and easy answers, and to begin to think multidimensionally, to glory in the mystery and paradoxes of life.

~ Scott Peck

First, i hope everyone is safe. Second, i'm writing a SnakeByte that is very near and dear to my heart: fractals, a subject wrapped up in the areas of complexity theory, Fibonacci sequences, and the Golden Ratio, ϕ. The world around Us and Our Universe is teeming with these self-similar geometries. Also, some time ago, i wrote a blog on the mathematics of Grief and how i thought it was fractal-based, never-ending, ever-evolving, entitled “It’s An Honor To Say Goodbye”.

For example:

  • Golden Ratio: Appears in fractals as a scaling factor for self-similar structures (e.g., golden spirals, golden triangles) and in the proportions of natural fractals.
  • Fibonacci Sequence: Manifests in the counting of fractal elements (e.g., spirals in sunflowers, branches in trees) and converges to ϕ, linking to fractal scaling.
  • Mandelbrot Set: Contains spiral patterns that can approximate ϕ-based logarithmic spirals, especially near the boundary.
  • Nature and Art: Both fractals and Fibonacci patterns appear in natural growth and aesthetic designs, reflecting universal mathematical principles as well as our own bodies.

Fractals are mesmerizing mathematical objects that exhibit self-similarity, meaning their patterns repeat at different scales. Their intricate beauty and complexity make them a fascinating subject for mathematicians, artists, and programmers alike. In this blog, i'll dive into the main types of fractals and then create an interactive visualization of the iconic Mandelbrot set using Python and Bokeh, complete with adjustable parameters. Get ready, Oh Dear Readers, i went long in the SnakeTooth.

Like What Are Fractals?

A fractal is a geometric shape that can be split into parts, each resembling a smaller copy of the whole. Unlike regular shapes like circles or squares, fractals have a fractional dimension and display complexity at every magnification level. They’re often generated through iterative processes or recursive algorithms, making them perfect for computational exploration.
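
For a concrete feel for "fractional dimension," here is a quick sketch (assuming the standard similarity-dimension formula D = log N / log s for a shape that splits into N copies, each scaled down by a factor of s; the helper name is mine):

import math

def similarity_dimension(n_copies, scale_factor):
    # D = log(N) / log(s) for exactly self-similar fractals
    return math.log(n_copies) / math.log(scale_factor)

print(similarity_dimension(3, 2))  # Sierpinski triangle (see below): ~1.585
print(similarity_dimension(4, 3))  # Koch curve (see below): ~1.262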

Fractals have practical applications too, from modeling natural phenomena (like tree branching or mountain ranges) to optimizing antenna designs and generating realistic graphics in movies and games.

Types of Fractals

Fractals come in various forms, each with unique properties and generation methods. Here are the main types:

Geometric Fractals

Geometric fractals are created through iterative geometric rules, where a simple shape is repeatedly transformed. The result is a self-similar structure that looks the same at different scales.

  • Example: Sierpinski Triangle
    • Start with a triangle, divide it into four smaller triangles, and remove the central one. Repeat this process for each remaining triangle.
    • The result is a triangle with an infinite number of holes, resembling a lace-like pattern.
    • Properties: Perfect self-similarity, simple recursive construction (see the chaos-game sketch after this list).
  • Example: Koch Snowflake
    • Begin with a straight line, divide it into three parts, and replace the middle part with two sides of an equilateral triangle. Repeat for each segment.
    • The curve becomes infinitely long while enclosing a finite area.
    • Properties: Continuous but non-differentiable, infinite perimeter.
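
You do not even need recursion to conjure the Sierpinski triangle mentioned above; here is a minimal "chaos game" sketch (my own toy example, assuming numpy and matplotlib are installed):

import numpy as np
import matplotlib.pyplot as plt

# Chaos game: repeatedly jump halfway toward a randomly chosen vertex.
# The visited points converge onto the Sierpinski triangle.
rng = np.random.default_rng(42)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
n_points = 50_000

points = np.empty((n_points, 2))
p = rng.random(2)  # arbitrary starting point
for i in range(n_points):
    p = (p + vertices[rng.integers(3)]) / 2.0
    points[i] = p

plt.scatter(points[:, 0], points[:, 1], s=0.1, color="black")
plt.title("Sierpinski Triangle via the Chaos Game")
plt.axis("equal")
plt.show()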

Algebraic Fractals

Algebraic fractals arise from iterating complex mathematical functions, often in the complex plane. They’re defined by equations and produce intricate, non-repeating patterns.

  • Example: Mandelbrot Set (the one you probably have seen)
    • Defined by iterating the function z_{n+1} = z_n^2 + c, where z and c are complex numbers.
    • Points that remain bounded under iteration form the Mandelbrot set, creating a black region with a colorful, infinitely complex boundary.
    • Properties: Self-similarity at different scales, chaotic boundary behavior.
  • Example: Julia Sets
    • Similar to the Mandelbrot set but defined for a fixed c value, with z varying across the plane.
    • Each c produces a unique fractal, ranging from connected “fat” sets to disconnected “dust” patterns.
    • Properties: Diverse shapes, sensitive to parameter changes.

Random Fractals

Random fractals incorporate randomness into their construction, mimicking natural phenomena like landscapes or clouds. They’re less predictable but still exhibit self-similarity.

  • Example: Brownian Motion
    • Models the random movement of particles, creating jagged, fractal-like paths.
    • Used in physics and financial modeling.
    • Properties: Statistically self-similar, irregular patterns. Also akin to the tail of a reverb; same type of fractal nature (see the random-walk sketch after this list).
  • Example: Fractal Landscapes
    • Generated using algorithms like the diamond-square algorithm, producing realistic terrain or cloud textures.
    • Common in computer graphics for games and simulations.
    • Properties: Approximate self-similarity, naturalistic appearance.
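
Since Brownian motion came up above, here is a minimal random-walk sketch (again, my own toy example) of that statistically self-similar jaggedness:

import numpy as np
import matplotlib.pyplot as plt

# A 1D Brownian-motion path: the cumulative sum of Gaussian steps.
# Zoom in on any chunk and it looks statistically like the whole path.
rng = np.random.default_rng(7)
path = np.cumsum(rng.normal(0.0, 1.0, 10_000))

plt.plot(path, linewidth=0.5)
plt.title("Brownian Motion (Gaussian Random Walk)")
plt.xlabel("Step")
plt.ylabel("Position")
plt.show()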

Strange Attractors

Strange attractors arise from chaotic dynamical systems, where iterative processes produce fractal patterns in phase space. They’re less about geometry and more about the behavior of systems over time.

  • Example: Lorenz Attractor
    • Derived from equations modeling atmospheric convection, producing a butterfly-shaped fractal.
    • Used to study chaos theory and weather prediction.
    • Properties: Non-repeating, fractal dimension, sensitive to initial conditions.
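
Here is a minimal Lorenz attractor sketch (simple Euler integration, my own toy example; fine for eyeballing the butterfly, not for serious numerics):

import numpy as np
import matplotlib.pyplot as plt

# Lorenz system with the classic parameters, integrated via naive Euler steps
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, n_steps = 0.005, 20_000
xyz = np.empty((n_steps, 3))
xyz[0] = (1.0, 1.0, 1.0)

for i in range(n_steps - 1):
    x, y, z = xyz[i]
    xyz[i + 1] = (x + sigma * (y - x) * dt,
                  y + (x * (rho - z) - y) * dt,
                  z + (x * y - beta * z) * dt)

ax = plt.figure().add_subplot(projection="3d")
ax.plot(xyz[:, 0], xyz[:, 1], xyz[:, 2], linewidth=0.4)
ax.set_title("Lorenz Attractor")
plt.show()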

Now Let’s Compute Some Complexity

Fractals are more than just pretty pictures. They help us understand complex systems, from the growth of galaxies to the structure of the internet. For programmers, fractals are a playground for exploring algorithms, visualization, and interactivity. Let’s now create an interactive Mandelbrot set visualization using Python and Bokeh, where you can tweak parameters like zoom and iterations to explore its infinite complexity.

The Mandelbrot set is a perfect fractal to visualize because of its striking patterns and computational simplicity. We’ll use Python with the Bokeh library to generate an interactive plot, allowing users to adjust the zoom level and the number of iterations to see how the fractal changes.

Prerequisites

  • Install required libraries: pip install numpy bokeh
  • Basic understanding of Python and complex numbers.

Code Overview

The code below does the following:

  1. Computes the Mandelbrot set by iterating the function z_{n+1} = z_n^2 + c for each point in a grid.
  2. Colors points based on how quickly they escape to infinity (for points outside the set) or marks them black (for points inside).
  3. Uses Bokeh to create a static plot with static parameters.
  4. Displays the fractal as an image with a color palette.
import numpy as np
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
from bokeh.models import LogColorMapper, LinearColorMapper
from bokeh.palettes import Viridis256
import warnings
warnings.filterwarnings('ignore')

# Enable Bokeh output in Jupyter Notebook
output_notebook()

# Parameters for the Mandelbrot set
width, height = 800, 600  # Image dimensions
x_min, x_max = -2.0, 1.0  # X-axis range
y_min, y_max = -1.5, 1.5  # Y-axis range
max_iter = 100  # Maximum iterations for divergence check

# Create coordinate arrays
x = np.linspace(x_min, x_max, width)
y = np.linspace(y_min, y_max, height)
X, Y = np.meshgrid(x, y)
C = X + 1j * Y  # Complex plane

# Initialize arrays for iteration counts and output
Z = np.zeros_like(C)
output = np.zeros_like(C, dtype=float)

# Compute Mandelbrot set
for i in range(max_iter):
    mask = np.abs(Z) <= 2
    Z[mask] = Z[mask] * Z[mask] + C[mask]
    output += mask

# Normalize output for coloring
output = np.log1p(output)

# Create Bokeh plot
p = figure(width=width, height=height, x_range=(x_min, x_max), y_range=(y_min, y_max),
           title="Mandelbrot Fractal", toolbar_location="above")

# Use a color mapper for visualization
color_mapper = LogColorMapper(palette=Viridis256, low=output.min(), high=output.max())
p.image(image=[output], x=x_min, y=y_min, dw=x_max - x_min, dh=y_max - y_min,
        color_mapper=color_mapper)

# Display the plot
show(p)

When you run it inline, you should see the following:

BokehJS 3.6.0 successfully loaded.

Mandelbrot Fractal with Static Parameters

Now let’s do something a little more interesting and add some adjustable parameters for pan and zoom to explore the complex plane space. i’ll break the sections down as we have to add a JavaScript callback routine due to some funkiness with Bokeh and Jupyter Notebooks.

Ok, so first of all, let's break down the imports:

  • numpy for numerical computations (e.g., creating arrays for the complex plane).
  • bokeh modules for plotting (figure, show), notebook output (output_notebook), color mapping (LogColorMapper), JavaScript callbacks (CustomJS), layouts (column), and sliders (Slider).
  • The above uses Viridis256 as the color palette for visualizing the fractal. FWIW, Bokeh provides us with the Matplotlib color palettes. There are five: Magma, Inferno, Plasma, Viridis, and Cividis. Magma is my favorite.

Warnings: Suppressed to avoid clutter from Bokeh or NumPy. (i know, i know: treat all warnings as errors.)

import numpy as np
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
from bokeh.models import LogColorMapper, ColumnDataSource, CustomJS
from bokeh.palettes import Viridis256
from bokeh.layouts import column
from bokeh.models.widgets import Slider
import warnings
warnings.filterwarnings('ignore')

Next, and this is crucial: the following line configures Bokeh to render plots inline in the Jupyter Notebook:

output_notebook()

Initialize the Mandelbrot set parameters, where:

  • width, height: Dimensions of the output image (800×600 pixels).
  • initial_x_min, initial_x_max: Range for the real part of complex numbers (-2.0 to 1.0).
  • initial_y_min, initial_y_max: Range for the imaginary part of complex numbers (-1.5 to 1.5).
  • max_iter: Maximum number of iterations to determine if a point is in the Mandelbrot set (controls detail).

These ranges define the initial view of the complex plane, centered around the Mandelbrot set’s most interesting region.

width, height = 800, 600
initial_x_min, initial_x_max = -2.0, 1.0
initial_y_min, initial_y_max = -1.5, 1.5
max_iter = 100

Okay, now we’re getting somewhere. Onto the meat of the code, as it were. The computation function:

Purpose: Computes the Mandelbrot set for a given region of the complex plane. Steps:

  • Create arrays x and y for real and imaginary parts using np.linspace.
  • Form a 2D grid with np.meshgrid and combine into a complex array C = X + 1j * Y.
  • Initialize Z (iteration values) and output (iteration counts) as zero arrays.
  • Iterate up to max_iter times:
    • mask = np.abs(Z) <= 2: Identifies points that haven’t diverged (magnitude ≤ 2).
    • Update Z for non-diverged points using the Mandelbrot formula: Z = Z^2 + C
    • Increment output for points still in the set (via mask).
  • Apply np.log1p(output) to smooth the color gradient for visualization. This took me a while to be honest.

Output: A 2D array where each value represents the (log-transformed) number of iterations before divergence, used for coloring.

def compute_mandelbrot(x_min, x_max, y_min, y_max, width, height, max_iter):
    x = np.linspace(x_min, x_max, width)
    y = np.linspace(y_min, y_max, height)
    X, Y = np.meshgrid(x, y)
    C = X + 1j * Y
    Z = np.zeros_like(C)
    output = np.zeros_like(C, dtype=float)
    for i in range(max_iter):
        mask = np.abs(Z) <= 2
        Z[mask] = Z[mask] * Z[mask] + C[mask]
        output += mask
    return np.log1p(output)

Let’s call the function where we compute the Mandelbrot set for the initial view (-2.0 to 1.0 real, -1.5 to 1.5 imaginary).

output = compute_mandelbrot(initial_x_min, initial_x_max, initial_y_min, initial_y_max, width, height, max_iter)

Now we are going to set up the eye candy with bokeh.

Figure: Creates a Bokeh plot with the specified dimensions and initial ranges.

  • x_range and y_range set the plot’s axes to match the complex plane’s real and imaginary parts.
  • toolbar_location="above" adds zoom and pan tools above the plot. (If anyone has any ideas, i couldn't get this to work correctly.)
  • Color Mapper: Maps iteration counts to colors using Viridis256 (a perceptually uniform palette). LogColorMapper: ensures smooth color transitions.
  • Image: Renders the output array as an image:
  • dw, dh: Width and height in data coordinates.
p = figure(width=width, height=height, x_range=(initial_x_min, initial_x_max), y_range=(initial_y_min, initial_y_max),
           title="Interactive Mandelbrot Fractal", toolbar_location="above")
color_mapper = LogColorMapper(palette=Viridis256, low=output.min(), high=output.max())
image = p.image(image=[output], x=initial_x_min, y=initial_y_min, dw=initial_x_max - initial_x_min, dh=initial_y_max - initial_y_min, color_mapper=color_mapper)

Now let’s add some interactivity where:

Sliders:

step: Controls slider precision (0.1 for smooth adjustments).

x_pan_slider: Adjusts the center of the real axis (range: -2.0 to 1.0, default 0).

y_pan_slider: Adjusts the center of the imaginary axis (range: -1.5 to 1.5, default 0).

zoom_slider: Scales the view (range: 0.1 to 3.0, default 1.0; smaller values zoom in, larger zoom out).

x_pan_slider = Slider(start=-2.0, end=1.0, value=0, step=0.1, title="X Pan")
y_pan_slider = Slider(start=-1.5, end=1.5, value=0, step=0.1, title="Y Pan")
zoom_slider = Slider(start=0.1, end=3.0, value=1.0, step=0.1, title="Zoom")

Ok, now here is what took me the most time, and I had to research it because, well, because i must be dense. We need to add a JavaScript Callback for Slider Updates. This code updates the plot’s x and y ranges when sliders change, without recomputing the fractal (for performance). For reference: Javascript Callbacks In Bokeh.

  • zoom_factor: Scales the view (1.0 = original size, <1 zooms in, >1 zooms out).
  • x_center: Shifts the real axis center by x_pan.value from the initial center.
  • y_center: Shifts the imaginary axis center by y_pan.value.
  • x_width, y_height: Scale the original ranges by zoom_factor.
  • Updates p.x_range and p.y_range to reposition and resize the view.

The callback triggers whenever any slider’s value changes.

callback = CustomJS(args=dict(p=p, x_pan=x_pan_slider, y_pan=y_pan_slider, zoom=zoom_slider,
                             initial_x_min=initial_x_min, initial_x_max=initial_x_max,
                             initial_y_min=initial_y_min, initial_y_max=initial_y_max),
                    code="""
    const zoom_factor = zoom.value;
    const x_center = initial_x_min + (initial_x_max - initial_x_min) / 2 + x_pan.value;
    const y_center = initial_y_min + (initial_y_max - initial_y_min) / 2 + y_pan.value;
    const x_width = (initial_x_max - initial_x_min) * zoom_factor;
    const y_height = (initial_y_max - initial_y_min) * zoom_factor;
    p.x_range.start = x_center - x_width / 2;
    p.x_range.end = x_center + x_width / 2;
    p.y_range.start = y_center - y_height / 2;
    p.y_range.end = y_center + y_height / 2;
""")
x_pan_slider.js_on_change('value', callback)
y_pan_slider.js_on_change('value', callback)
zoom_slider.js_on_change('value', callback)

Ok, here is a longer explanation of the layout and display, which is important for fully understanding what is happening computationally:

  • Layout: Arranges the plot and sliders vertically using column.
  • Display: Renders the interactive plot in the notebook.

X and Y Axis Labels and Complex Numbers

The x and y axes of the plot represent the real and imaginary parts of complex numbers in the plane where the Mandelbrot set is defined.

  • X-Axis (Real Part):
    • Label: Implicitly represents the real component of a complex number c=a+bi.
    • Range: Initially spans -2.0 to 1.0 (covering the Mandelbrot set’s primary region).
    • Interpretation: Each x-coordinate corresponds to the real part a of a complex number c. For example, x = -1.5 corresponds to a complex number with real part -1.5 (e.g., c = -1.5 + bi).
    • Role in Mandelbrot: The real part, combined with the imaginary part, defines the constant c in the iterative formula z_{n+1} = z_n^2 + c.
  • Y-Axis (Imaginary Part):
    • Label: Implicitly represents the imaginary component of a complex number c=a+bi.
    • Range: Initially spans -1.5 to 1.5 (symmetric to capture the set’s structure).
    • Interpretation: Each y-coordinate corresponds to the imaginary part b (scaled by i). For example, y = 0.5 corresponds to a complex number with imaginary part 0.5i (e.g., c = a + 0.5i).
    • Role in Mandelbrot: The imaginary part contributes to c, affecting the iterative behavior.
  • Complex Plane:
    • Each pixel in the plot corresponds to a complex number c=x+yi, where x is the x-coordinate (real part) and y is the y-coordinate (imaginary part).
    • The Mandelbrot set consists of points c where the sequence z_0 = 0, z_{n+1} = z_n^2 + c remains bounded (doesn't diverge to infinity).
    • The color at each pixel reflects how quickly the sequence diverges (or if it doesn’t, it’s typically black).
  • Slider Effects:
    • X Pan: Shifts the real part center, moving the view left or right in the complex plane.
    • Y Pan: Shifts the imaginary part center, moving the view up or down.
    • Zoom: Scales both real and imaginary ranges, zooming in (smaller zoom_factor) or out (larger zoom_factor). Zooming in reveals finer details of the fractal’s boundary.
layout = column(p, x_pan_slider, y_pan_slider, zoom_slider)
show(layout)

A couple of notes on general performance:

  • Performance: The code uses a JavaScript callback to update the view without recomputing the fractal, which is fast but limits resolution. For high-zoom levels, you’d need to recompute the fractal (not implemented here for simplicity).
  • Axis Labels: Bokeh doesn't explicitly label axes as "Real Part" or "Imaginary Part," but the numerical values correspond to these components. You could add explicit labels using p.xaxis.axis_label = "Real Part" and p.yaxis.axis_label = "Imaginary Part" if desired.
  • Fractal Detail: The fixed max_iter=100 limits detail at high zoom. For deeper zooms, max_iter should increase, and the fractal should be recomputed. Try adding a timer() function to check compute times.
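
If you want to experiment with true recomputation at depth, here is a minimal sketch of the idea. The helper name, the iteration-scaling heuristic, and the example region are my own assumptions; it reuses compute_mandelbrot, p, and image from above, and in a notebook it needs show(layout, notebook_handle=True) plus push_notebook to actually re-render:

from bokeh.io import push_notebook

def recompute_view(x_start, x_end, y_start, y_end, base_iter=100):
    # Heuristic (assumption): the deeper the zoom, the more iterations we need
    zoom_level = (initial_x_max - initial_x_min) / (x_end - x_start)
    iters = int(base_iter + 100 * np.log10(max(zoom_level, 1.0)))
    new_output = compute_mandelbrot(x_start, x_end, y_start, y_end, width, height, iters)
    image.data_source.data["image"] = [new_output]
    image.glyph.update(x=x_start, y=y_start, dw=x_end - x_start, dh=y_end - y_start)
    p.x_range.update(start=x_start, end=x_end)
    p.y_range.update(start=y_start, end=y_end)
    push_notebook()

# Example: dive into the "seahorse valley" region near the boundary
recompute_view(-0.8, -0.7, 0.05, 0.15, base_iter=200)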

Ok here is the full listing you can copypasta this into a notebook and run it as is. i suppose i could add gist.

import numpy as np
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
from bokeh.models import LogColorMapper, ColumnDataSource, CustomJS
from bokeh.palettes import Viridis256
from bokeh.layouts import column
from bokeh.models.widgets import Slider
import warnings
warnings.filterwarnings('ignore')

# Enable Bokeh output in Jupyter Notebook
output_notebook()

# Parameters for the Mandelbrot set
width, height = 800, 600  # Image dimensions
initial_x_min, initial_x_max = -2.0, 1.0  # Initial X-axis range
initial_y_min, initial_y_max = -1.5, 1.5  # Initial Y-axis range
max_iter = 100  # Maximum iterations for divergence check

# Function to compute Mandelbrot set
def compute_mandelbrot(x_min, x_max, y_min, y_max, width, height, max_iter):
    x = np.linspace(x_min, x_max, width)
    y = np.linspace(y_min, y_max, height)
    X, Y = np.meshgrid(x, y)
    C = X + 1j * Y  # Complex plane

    Z = np.zeros_like(C)
    output = np.zeros_like(C, dtype=float)

    for i in range(max_iter):
        mask = np.abs(Z) <= 2
        Z[mask] = Z[mask] * Z[mask] + C[mask]
        output += mask

    return np.log1p(output)

# Initial Mandelbrot computation
output = compute_mandelbrot(initial_x_min, initial_x_max, initial_y_min, initial_y_max, width, height, max_iter)

# Create Bokeh plot
p = figure(width=width, height=height, x_range=(initial_x_min, initial_x_max), y_range=(initial_y_min, initial_y_max),
           title="Interactive Mandelbrot Fractal", toolbar_location="above")

# Use a color mapper for visualization
color_mapper = LogColorMapper(palette=Viridis256, low=output.min(), high=output.max())
image = p.image(image=[output], x=initial_x_min, y=initial_y_min, dw=initial_x_max - initial_x_min,
                dh=initial_y_max - initial_y_min, color_mapper=color_mapper)

# Create sliders for panning and zooming
x_pan_slider = Slider(start=-2.0, end=1.0, value=0, step=0.1, title="X Pan")
y_pan_slider = Slider(start=-1.5, end=1.5, value=0, step=0.1, title="Y Pan")
zoom_slider = Slider(start=0.1, end=3.0, value=1.0, step=0.1, title="Zoom")

# JavaScript callback to update plot ranges
callback = CustomJS(args=dict(p=p, x_pan=x_pan_slider, y_pan=y_pan_slider, zoom=zoom_slider,
                             initial_x_min=initial_x_min, initial_x_max=initial_x_max,
                             initial_y_min=initial_y_min, initial_y_max=initial_y_max),
                    code="""
    const zoom_factor = zoom.value;
    const x_center = initial_x_min + (initial_x_max - initial_x_min) / 2 + x_pan.value;
    const y_center = initial_y_min + (initial_y_max - initial_y_min) / 2 + y_pan.value;
    const x_width = (initial_x_max - initial_x_min) * zoom_factor;
    const y_height = (initial_y_max - initial_y_min) * zoom_factor;
    
    p.x_range.start = x_center - x_width / 2;
    p.x_range.end = x_center + x_width / 2;
    p.y_range.start = y_center - y_height / 2;
    p.y_range.end = y_center + y_height / 2;
""")

# Attach callback to sliders
x_pan_slider.js_on_change('value', callback)
y_pan_slider.js_on_change('value', callback)
zoom_slider.js_on_change('value', callback)

# Layout the plot and sliders
layout = column(p, x_pan_slider, y_pan_slider, zoom_slider)

# Display the layout
show(layout)

Here is the output screen shot: (NOTE i changed it to Magma256)

Mandelbrot Fractal with Interactive Bokeh Sliders.

So there ya have it. A little lab for fractals. Maybe i'll extend this code and commit it. As a matter of fact, i should push all the SnakeByte code from over the years. That said, concerning the subject of fractals, there is a much deeper and longer discussion to be had on how fractals, the golden ratio ϕ = 1 + 1/ϕ, and the Fibonacci sequence (a sequence where each number is the sum of the two preceding ones: 0, 1, 1, 2, 3, 5, 8, 13, 21…) are deeply interconnected through mathematical patterns and structures that appear in nature, art, and geometry. The very essence of our lives.
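
To see that Fibonacci-to-ϕ convergence for yourself, here is a tiny sketch:

# Ratios of consecutive Fibonacci numbers converge to the golden ratio
a, b = 0, 1
for _ in range(20):
    a, b = b, a + b
    print(f"{b}/{a} = {b / a:.10f}")

print((1 + 5 ** 0.5) / 2)  # phi = 1.6180339887...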

Until Then.

#iwishyouwater <- Pavones, CR. Longest Left in the Northern Hemisphere. i love it there.

Ted ℂ. Tanner Jr. (@tctjr) / X

MUZAK To Blog By: Arvo Pärt, Für Anna Maria, Spiegel im Spiegel (Mirror In Mirror). Very fractal song; it is gorgeous.

References:

Here is a very short list of books on the subject. Ping me as I have several from philosophical to mathematical on said subject.

[1] Peitgen, Heinz-Otto; Jürgens, Hartmut; Saupe, Dietmar. Fractals for the Classroom: Strategic Activities Volume One. Springer.

Good Introduction Text. Has some fun experiments like creating fractals with video feedback.

[2] Mandelbrot, Benoit B. Fractals and Chaos: The Mandelbrot Set and Beyond. Springer.

[3] Luenberger, David G. Introduction to Dynamic Systems: Theory, Models, and Applications. Wiley.

Great book on stability and things like Lorenz attractors and Lyapunov functions.

[4] Mandelbrot, Benoit B. The Fractal Geometry of Nature. W. H. Freeman & Co.

The book that started it all. I was lucky enough to see a lecture by him. Astounding.

[5] Strogatz, Steven H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering.

This is a tough slog and not for the faint of heart.

[6] Mitchell, Melanie. Complexity: A Guided Tour. Oxford University Press.

Professor Mitchell wrote a great, accessible book. i took her complexity class at the Santa Fe Institute.

NVIDIA GTC 2025: The Time Has Come The Valley Said

OpenAI's idea of The Valley – It's Been A Minute

Embrace the unknown and embrace change. That’s where true breakthroughs happen.

~Jensen Huang

First, i trust everyone is safe. Second, i usually do not write about discrete events or "work"-related items, but this is an exception. March 17-21, 2025, i and some others attended NVIDIA GTC 2025. It warranted a long writeup. Be Forewarned: tl;dr. Read on, Dear Reader. Hope you enjoy this one, as it is a sea change in computing and a tectonic ocean shift in technology.

NVIDIA GTC 2025: AI’s Raw Hot Buttered Future

March 17-21, 2025, San Jose became geek central for NVIDIA’s GTC—aka the “Super Bowl of AI.” Hybrid setup, in-person or virtual, didn’t matter; thousands of devs, researchers, and suits swarmed to see what’s cooking in AI, GPUs, and robotics. Jensen Huang dropped bombs in his keynote, 1,000+ sessions drilled into the guts of it, and big players flexed their wares. Here’s the raw dog buttered scoop—and why you should care if you sling code or ship product.

"The time has come," the Walrus said,
      "To talk of many things:
Of shoes — and ships — and sealing-wax —
      Of cabbages — and kings —
And why the sea is boiling hot —
      And whether pigs have wings."


~ The Walrus and The Carpenter

All The Libraries

Jensen’s Keynote: AI’s Next Gear, No Hype

March 18, 2025, SAP Center and the McEnery Civic Center: over 28,000 geeks packed both halls and spilled out into the streets. Jensen Huang, NVIDIA's leather-jacketed maestro, hit the stage and didn't waste breath. 2.5 hours, no notes; he started at the top of the stack with all the libraries NVIDIA has "CUDA-ized" and went all the way down to the photonic ethernet cables. No corporate fluff, just tech meat for the developer carnivore. His pitch: AI's not just chatbots anymore; it's "agentic," thinking and moving in the real world forward at the speed of thought. Backed up with specifications, cycles, cost, and even calling out library function calls.

Here’s what he unleashed:

  • Blackwell Ultra (B300): Mid-cycle beast, 288GB memory, out H2 2025. Training LLMs that’d choke lesser rigs—AMD’s sniffing, but NVIDIA’s still king.
  • Rubin + Vera Rubin: GPU + CPU superchip combo, late 2026. Named for the galaxy guru, it’s Grace Blackwell’s heir. Full-stack domination vibes.
  • Physical AI & GR00T N1: Robots that do real things. GR00T's a humanoid platform tying training together, synced with Omniverse and Cosmos for digital twin sims. Robotics just got real, even surreal.
  • NVIDIA Dynamo: “AI Factory OS.” Data centers as reasoning engines, not just compute mules. Deploy AI without the usual ops nightmare. <This> will change it all.
  • Quantum Day: IonQ, D-Wave, Rigetti execs talking quantum. It’s distant, but NVIDIA’s planting CUDA flags for the long game.

Jensen's big claim: AI needs 100x more computing than we thought. That's not a flex; it's a warning. NVIDIA's rigging the pipes to pump it.

He said thank you to the developer more than 5 times, mentioned open source at least 4 times, and said ecosystem at least 5 times. It was possibly the best keynote i have ever seen, and i have been to and seen some of the best. Zuckerberg was right – if you do not have a technical CEO and a technical board, you are not a technical company at heart.

Jensen with Disney Friend

What It Means: Unfiltered and Untrained Takeaways

As i said, GTC 2025 wasn't a bloviated sales conference taking over a city; it was the tech roadmap, raw and real:

  • AI’s Next Frontier: The shift to agentic AI and physical AI (e.g., robotics) suggests that AI is moving beyond chatbots and image generation into real-world problem-solving. NVIDIA’s hardware and software innovations—like Blackwell Ultra and Dynamo—position it as the enabler of this transition.
  • Compute Power Race: Huang’s claim of a 100x compute demand surge underscores the urgency for scalable, energy-efficient solutions. NVIDIA’s full-stack approach (hardware, software, networking) gives it an edge, though competition from AMD and custom chipmakers looms.
  • Robotics Revolution: With GR00T and related platforms, NVIDIA is betting big on robotics as a 50 trillion dollar opportunity. This could transform industries like manufacturing and healthcare, making 2025 a pivotal year for robotic adoption.
  • Ecosystem Dominance: NVIDIA’s partnerships with tech giants and startups alike reinforce its role as the linchpin of the AI ecosystem. Its 82% GPU market share may face pressure, but its software (e.g., CUDA, NIM) and services (e.g., DGX Cloud) create a formidable moat.
  • Long-Term Vision: The focus on quantum computing and the next-next-gen architectures (like Feynman, slated for 2028) shows NVIDIA isn’t resting on its laurels. It’s preparing for a future where AI and quantum tech converge.

Sessions: Ship Code, Not Slides

Over 1,000 sessions at the McEnery Convention Center. No hand-holding, pure tech fuel for devs and decision-makers. Standouts:

  • Generative AI & MLOps: Scaling LLMs without losing your mind (or someone else’s). NVIDIA’s inference runtime and open models cut the fat—production-ready, not science-fair thoughting.
  • Robotics: Isaac and Cosmos hands-on. Simulate, deploy, done. Manufacturing and healthcare devs, this is your cue.
  • Data Centers: DGX Station’s 20 petaflops in a box. Next-gen networking talks had the ops crowd drooling.
  • Graphics: RTX for 2D/3D and AR/VR. Filmmakers and game devs got a speed boost—less render hell.
  • Quantum: Day-long deep dive. CUDA’s quantum bridge is speculative, but the math’s stacking up.
  • Digital Twins and Simulation: Omniverse™ provides advanced simulation capabilities for adding true-to-reality physics to scene compositions. Build on models from basic rigid-body simulation to destruction, fluid-dynamics-based fire simulation, and physics-based scene authoring.

Near Real-Time Digital Twin Rendering Of A Ship

The DGX Spark Computer

i personally thought this deserved its own call-out: the announcement of the DGX Spark computer, a compact AI supercomputer. Let us unpack its specs and capabilities for training large language models (LLMs). This little beast is designed to bring serious AI firepower to your desk, so here's the rundown based on what NVIDIA shared at the conference.

The DGX Spark is powered by the NVIDIA GB10 Grace Blackwell Superchip, a tightly integrated combo of CPU and GPU muscle. Here’s what it’s packing:

  • GPU: Blackwell GPU with 5th-generation Tensor Cores, supporting FP4 precision (4-bit floating-point). NVIDIA claims it delivers up to 1,000 AI TOPS (trillions of operations per second) at FP4—insane compute for a desktop box.
  • CPU: 20 Armv9 cores (10 Cortex-X925 + 10 Cortex-A725), connected to the GPU via NVIDIA’s NVLink-C2C interconnect. This gives you 5x the bandwidth of PCIe Gen 5, keeping data flowing fast between CPU and GPU.
  • Memory: 128 GB of unified LPDDR5x with a 256-bit bus, clocking in at 273 GB/s bandwidth. This unified memory pool is shared between CPU and GPU, critical for handling big AI workloads without choking on data transfers.
  • Storage: Options for 1 TB or 4 TB NVMe SSD—plenty of room for datasets, models, and checkpoints.
  • Networking: NVIDIA ConnectX-7 with 200 Gb/s RDMA (scalable to 400 Gb/s when pairing two units), plus Wi-Fi 7 and 10GbE for wired connections. You can cluster two Sparks to double the power.
  • I/O: Four USB4 ports (40 Gbps), HDMI 2.1a, Bluetooth 5.3—modern connectivity for hooking up peripherals or displays.
  • OS: Runs NVIDIA DGX OS, a custom Ubuntu Linux build loaded with NVIDIA’s AI software stack (CUDA, NIM microservices, frameworks, and pre-trained models).
  • Power: Sips just 170W from a standard wall socket—efficient for its punch.
  • Size: Tiny at 150 mm x 150 mm x 50.5 mm (about 1.1 liters) and 1.2 kg—it’s palm-sized but packs a wallop.

The DGX Spark Computer

This thing’s a sleek, power-efficient monster styled like a mini NVIDIA DGX-1, aimed at developers, researchers, and data scientists who want data-center-grade AI on their desks – in gold metal flake!

Now, the big question: how beefy an LLM can the DGX Spark train? NVIDIA's marketing pegs it at up to 200 billion parameters for local prototyping, fine-tuning, and inference on a single unit. Pair two Sparks via ConnectX-7, and you can push that to 405 billion parameters. But let's break this down practically—training capacity depends on what you're doing (training from scratch vs. fine-tuning) and how you manage memory (see the back-of-envelope sketch after this list).

  • Fine-Tuning: NVIDIA highlights fine-tuning models up to 70 billion parameters as a sweet spot for a single Spark. With 128 GB of unified memory, you’re looking at enough space to load a 70B model in FP16 (16-bit floating-point), which takes about 140 GB uncompressed. Techniques like quantization (e.g., 8-bit or 4-bit) or offloading to SSD can stretch this further, but 70B is the comfy limit for active fine-tuning without heroic optimization.
  • Training from Scratch: Full training (not just fine-tuning) is trickier. A 200B-parameter model in FP16 needs around 400 GB of memory just for weights, ignoring gradients and optimizer states, which can triple that to 1.2 TB. The Spark’s 128 GB can’t handle that alone without heavy sharding or clustering. NVIDIA’s 200B claim likely assumes inference or light fine-tuning with aggressive quantization (e.g., FP4 via Tensor Cores), not full training. For two units (256 GB total), you might train a 200B model with extreme optimization—think model parallelism and offloading—but it’s not practical for most users.
  • Real-World Limit: For full training on one Spark, you’re realistically capped at 20-30 billion parameters in FP16 with standard methods (weights + gradients + Adam optimizer fit in 128 GB). Push to 70B with quantization or two-unit clustering. Beyond that, 200B+ is more about inference or fine-tuning pre-trained models, not training from zero.
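
Here is the back-of-envelope memory math from the list above as a quick sketch (my own helper, using the standard bytes-per-parameter arithmetic; real frameworks add activation memory, fragmentation, and so on):

def model_memory_gb(params_billions, bytes_per_param=2, multiplier=1):
    # bytes_per_param: 2 = FP16, 1 = INT8, 0.5 = FP4
    # multiplier: 1 = weights only (inference); ~3 = weights + gradients + Adam states
    return params_billions * bytes_per_param * multiplier

print(model_memory_gb(70))                        # 70B FP16 weights: ~140 GB
print(model_memory_gb(200))                       # 200B FP16 weights: ~400 GB
print(model_memory_gb(200, multiplier=3))         # 200B full training: ~1200 GB (1.2 TB)
print(model_memory_gb(200, bytes_per_param=0.5))  # 200B at FP4: ~100 GB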

Not bad for $4,000.00. Think of all the things you could do… All of the companies you could build… Now onto the sessions.

Speakers and Sessions

2,000+ speakers, some Nobel-tier, delivered straight, no chaser: code, tools, and war stories. Hardcore programming sessions on CUDA, NVIDIA's parallel computing platform, and tools like Dynamo (the new AI Factory OS). Think line-by-line breakdowns of optimizing AI models or squeezing performance from Blackwell Ultra GPUs. Once again, slideware jockeys need not apply.

The speaker list was a who's-who of brainpower and hustle. Nobel laureates like Frances Arnold brought scientific heft—imagine her linking GPU-accelerated protein folding to drug discovery. Meanwhile, Yann LeCun and Noam Brown (OpenAI) tackled AI's bleeding edge, like agentic reasoning or game-theory hacks. Then you had practitioners: Joe Park (Yum! Brands) on AI for fast food, RJ Scaringe (Rivian) on autonomous driving, grounding it in real-world stakes.

Literally, a who's-who of the AI developer world, baring souls (if they have one) and scars from the war stories, and they do have them.

There was one talk in particular that was probably one of the best discussions i have seen in the past decade. SoFar Ocean Technologies is partnering with MITRE and NVIDIA to power the future of ocean AI!

MITRE announced a joint effort to build an AI-powered ocean digital twin fueled by real-time data from the global Spotter network. Researchers, government, and industry will use the digital twin to simulate and better understand the marine environments in which they operate.

As AI supercharges weather prediction, even the most advanced models will need more ocean data to be effective. Sofar provides these essential observations at scale. To power the digital twin, SoFar will deliver data from their global network of real-time ocean sensors and collaborate with MITRE to rapidly expand the adoption of the Bristlemouth open connectivity standard. Live data will feed into the NVIDIA Omniverse and open up new pathways for AI-powered ocean understanding.

BristleMouth Open Source Orchestration UxV Platform

The systems of systems and ecosystem reach are spectacular. The effort is monumental, and only through software can this scale be achievable. Of primary interest to this ecosystem effort, they have partnered with the Ocean Exploration Trust and the Nautilus Exploration Program to seek out new discoveries in geology, biology, and archaeology while conducting scientific exploration of the seafloor. The expeditions launch aboard Exploration Vessel Nautilus — a 68-meter research ship equipped with live-streaming underwater vehicles for scientists, students, and the public to explore the deep sea from anywhere in the world. They embed educators and interns in the expeditions, who share their hands-on experiences via ship-to-shore connections with the next generation. Even while they are not at sea, explorers can dive into Nautilus Live to learn more about the expeditions, find educational resources, and marvel at new encounters.

“The most powerful technologies are the ones that empower others.”

~Jensen Huang

The Nautilus Live Mapping Software

At the end of the talk, I asked a question on the implementation of AI Orchestration for sensors underwater, as well as personally thanking Dr. Robert Ballard, who was in the audience, for his amazing work. Best known for his 1985 discovery of the RMS Titanic, Dr. Ballard has succeeded in tracking down numerous other significant shipwrecks, including the German battleship Bismarck, the lost fleet of Guadalcanal, the U.S. aircraft carrier Yorktown (sunk in the World War II Battle of Midway), and John F. Kennedy's boat, PT-109.

Again Just amazing. Check out the work here: SoFar Ocean.

What Was What: Big Dogs and Upstarts

The Exhibit hall was a technology zoo and smorgasbord—400+ OGs and players showing NVIDIA's reach. (An Introvert's Worst Nightmare.) Who showed up:

  • Tech Giants: Adobe, Amazon, Microsoft, Google, Oracle. AWS and Azure lean hard on NVIDIA GPUs—cloud AI’s backbone.
  • AI Hotshots: OpenAI and DeepSeek. ChatGPT’s parents still ride NVIDIA silicon; efficiency debates be damned.
  • Robots & Cars: Tesla hinting at autonomy juice, Delta poking at aviation AI. NVIDIA’s tentacles stretch wide.
  • Quantum Crew: Alice & Bob, D-Wave, IonQ, Rigetti. Quantum’s sci-fi, but they’re here.
  • Hardware: Dell, Supermicro, Cisco with GPU-stuffed rigs. Ecosystem’s locked in.
  • AI Platforms: Edge Impulse, Clear ML, Haystack – if you need training and ML deployment, they had it.

Inception Program: Fueling the Next Wave

Now, the Inception program—NVIDIA’s startup accelerator—is the unsung hero of GTC. With over 22,000 members worldwide, it’s a breeding ground for AI innovation, and GTC 2025 was their stage. Nearly 250 Inception startups showed up, from healthcare disruptors to robotics trailblazers like Stelia (shoutout to their “petabit-scale data mobility” talk). These aren’t pie-in-the-sky outfits—100+ had speaking slots, and their demos at the Inception Pavilion were hands-on proof of GPU-powered breakthroughs.

The program’s a sweet deal: free to join, no equity grab, just pure support—100K in DGX Cloud credits, Deep Learning Institute training, VC intros via the VC Alliance. They even had a talk on REVERSE VC pitches. What the VCs in Silicon Valley are looking for at the moment, and they were funding companies at the conference! It’s NVIDIA saying, “We’ll juice your tech, you change the game.” At GTC, you saw the payoff—startups like DeepSeek and Baseten flexing optimized models or enterprise tools, all built on NVIDIA’s stack. Critics might say it locks startups into NVIDIA’s ecosystem, but with nearly 300K in credits and discounts on tap, it’s hard to argue against the boost. The war stories from these founders—like scaling AI infra without frying a data center—were gold for any dev in the trenches.

GTC 2025 and Inception are two sides of the same coin. GTC’s the megaphone—blasting NVIDIA’s vision (and hardware) to the world—while Inception’s the incubator, quietly powering the startups that’ll flesh out that vision. Huang’s keynote hyped a token-driven AI economy, and Inception’s crew is already living it, churning out reasoning models and robotics on NVIDIA’s gear. It’s a symbiotic flex: GTC shows the “what,” Inception delivers the “how.”

We’re here to put a dent in the universe. Otherwise, why else even be here? 

~ Steve Jobs

Michael Dell and Your Humble Narrator at the Dell Booth

I did want to call out one announcement that I think has been a long time in the works in the industry, one I have been a very strong evangelist for: a distributed inference OS.

Dynamo: The AI Factory OS That’s Too Cool to Gatekeep

NVIDIA unleashed Dynamo—think of it as the operating system for tomorrow's AI factories. Huang's pitch? Data centers aren't just server farms anymore; they're churning out intelligence like Willy Wonka's chocolate factory but with fewer Oompa Loompas (cue the imagination song). Dynamo's got a slick trick: it's built from the ground up to manage the insane compute loads of modern AI, whether you're reasoning, inferring, or just flexing your GPU muscle. And here's the kicker—NVIDIA's tossing the core stack into the open-source wild via GitHub. Yep, you heard that right: free for non-commercial use under an Apache 2.0 license. It's like they're saying, "Go build your own AI empire—just don't sue us!" For the enterprise crowd, there's a beefier paid version with extra bells and whistles (of course). Open-source plus premium? Whoever heard of such a thing! That's a play straight out of the Silicon Valley handbook.

Dynamo High-Level Architecture


Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. It is inference-engine agnostic (supporting TRT-LLM, vLLM, SGLang, and others) and captures LLM-specific capabilities such as:

  • Disaggregated prefill & decode inference – Maximizes GPU throughput and facilitates trade-offs between throughput and latency.
  • Dynamic GPU scheduling – Optimizes performance based on fluctuating demand.
  • LLM-aware request routing – Eliminates unnecessary KV cache re-computation.
  • Accelerated data transfer – Reduces inference response time using NIXL.
  • KV cache offloading – Leverages multiple memory hierarchies for higher system throughput.

Dynamo enables dynamic worker scaling, responding to real-time deployment signals. These signals, captured and communicated through an event plane, empower the Planner to make intelligent, zero-downtime adjustments. For instance, if an increase in requests with long input sequences is detected, the Planner automatically scales up prefill workers to meet the heightened demand.

Beyond efficient event communication, data transfer across multi-node deployments is crucial at scale. To address this, Dynamo utilizes NIXL, a technology designed to expedite transfers through reduced synchronization and intelligent batching. This acceleration is particularly vital for disaggregated serving, ensuring minimal latency when prefill workers pass KV cache data to decode workers.

Dynamo prioritizes seamless integration. Its modular design allows it to work harmoniously with your existing infrastructure and preferred open-source components. To achieve optimal performance and extensibility, Dynamo leverages the strengths of both Rust and Python. Critical performance-sensitive modules are built with Rust for speed, memory safety, and robust concurrency. Meanwhile, Python is employed for its flexibility, enabling rapid prototyping and effortless customization.

Oh yeah, and for all the naysayers over the years, it uses NATS.io as the messaging bus. Here is the GitHub. Get your fork on, but please contribute back – ya hear?

Tokenized Reasoning Economy

Along with the Dynamo announcement, NVIDIA has created an economy around tokenized reasoning models, in a monetary sense. This is huge. Let me break this down.

Now, why call this an economy? In a monetary sense, NVIDIA’s creating a system where compute power (delivered via its GPUs) and tokens (the output of reasoning models) act like resources and currency in a marketplace. Here’s how it works:

  • Compute as the Factory: NVIDIA’s GPUs—think Blackwell Ultra or Hopper—are the engines that power these reasoning models. The more compute you throw at a problem (more GPUs, more time), the more tokens you can generate, and the smarter the AI’s answers get. It’s like a factory producing goods, but the goods here are tokens representing intelligence.
  • Tokens as Currency: In the AI world, tokens aren't just data—they're value. Companies running AI services (like chatbots or analytics tools) often charge based on tokens processed—say, (X) dollars per million tokens. NVIDIA's optimizing this with tools like Dynamo, which boosts token output while cutting costs, essentially making the "token economy" more efficient. More tokens per dollar = more profit for businesses using NVIDIA's tech. Tokens Per Second will be the new metric (see the toy math after this list).
  • Supply and Demand: Demand for reasoning AI is skyrocketing—enterprises, developers, and even robotics firms want smarter systems. NVIDIA supplies the hardware (GPUs) and software (like Dynamo and NIM microservices) to meet that demand. The more efficient their tech, the more customers flock to them, driving sales of GPUs and services like DGX Cloud.
  • Revenue Flywheel: Here’s the monetary kicker—NVIDIA’s raking in billions ($39.3B in a single quarter, per GTC 2025 buzz) because every industry needs this tech. They sell GPUs to data centers, cloud providers, and enterprises, who then use them to generate tokens and charge end users. NVIDIA reinvests that cash into better chips and software, keeping the cycle spinning.
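
To make the "tokens as currency" point concrete, a toy calculation (every number below is hypothetical, purely illustrative):

tokens_per_second = 10_000    # assumed per-GPU serving throughput
price_per_million = 2.00      # assumed dollars per million tokens
gpu_cost_per_hour = 3.00      # assumed all-in dollars per GPU-hour

revenue_per_hour = tokens_per_second * 3600 / 1e6 * price_per_million
print(f"revenue per GPU-hour: ${revenue_per_hour:.2f}")                       # $72.00
print(f"margin per GPU-hour:  ${revenue_per_hour - gpu_cost_per_hour:.2f}")  # $69.00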

NVIDIA’s “tokenized reasoning model economy” is about turning AI intelligence into a scalable, profitable commodity—where tokens are the product, GPUs are the means of production, and the tech industry is the market. The Developers power the Flywheel. Makes the mid-90s look like Bush League sports ball.

Tori McCaffrey, Technical Product Manager Extraordinaire, and Your Humble Narrator

All that is really missing is a good artificial intelligence to control the whole process. And that is the trick, isn't it? These types of blue-sky discussions always assume certain advances for a successful implementation. Unfortunately, A.I. is the bottleneck in this case. We're close with replication and manufacturing processes, and we could probably build sufficiently effective ion drives if we had the budget. But we lack a way to provide enough intelligence for the probe to handle all the situations it could face.

~ Eduard Guijpers, from the Convention Panel – Designing a von Neumann Probe

Dally and LeCun – Fireside

LeCun Fireside Chat

Yann LeCun, Turing Award badass and Meta's AI Chief Scientist, sat down for a fireside chat with Bill Dally, Chief Scientist at NVIDIA, that cut through the AI hype. No fluffy TED Talk (or me talking) vibes here, just hot takes from a guy who's been torching (get it?) neural-net limits since the '80s. With Jensen Huang's "agentic AI" bomb still echoing from the keynote, LeCun brought the dev crowd at the McEnery Civic Center a dose of real talk on where deep learning is headed.

LeCun didn't mince words: generative AI's cool, but it's a stepping stone. The future's in systems that reason, not just parrot: think less ChatGPT, more "machines that actually get real work done." He riffed on NVIDIA's Blackwell Ultra and GR00T robotics push, nodding to the computing muscle needed for his vision. "You want AI that plans and acts? You're burning 100x more flops than today," he said, echoing Jensen's compute-hunger warning. No surprise—he's been preaching energy-efficient architectures forever.

The discussion further dug into LeCun's latest obsession: self-supervised learning on steroids. He's betting it'll crack real-world perception for robots and autonomous rigs, stuff NVIDIA's Cosmos and Isaac platforms are already juicing. "Supervised learning's a dead end for scale," he jabbed. "Data's the bottleneck, not flops." There were several nods from the devs in the Civic Center. He also said we would be managing hundreds of agents in the future, vertically trained and horizontally chained, so to speak.

No slides once again, just LeCun riffing extempore, per NVIDIA’s style. He dodged the Meta AI roadmap but teased “open science” wins—likely a jab at closed-shop rivals. For devs, it was a call to arms: ditch the hype, build smarter, lean on NVIDIA’s stack. With Quantum Day buzzing next door, he left us with a zinger: “Quantum’s cute, but deep nets will out-think it first.”

GTC's "Super Bowl of AI" rep held. LeCun proved why he's still the godfather: unfiltered, technical, pragmatic, and ready to break the next ceiling.

Jay Sales, Engineering Executive Rockstar and Your Humble Narrator

Bottom Line

GTC2025 wasn’t just a conference. GTC 2025 was NVIDIA flipping the table: AI’s industrial now, not academic. Jensen’s vision, the sessions’ grit, and the hall’s buzz screamed one thing—build or get buried. For devs, it’s a CUDA goldmine. For suits, it’s strategy. For the industry, it’s NVIDIA steering the ship—full speed into an AI agentic and robotic future. With San Jose’s dust settling, the code’s just starting to run. Big fish and small fry are all feeding on bright green chips. 5 devs can now do the output of 50. Building stuff so others can build is Our developer mantra. Always has been, always will be – Gabba Gabba Hey One Of Us, One of Us!

Huang’s overarching message was clear: AI is evolving beyond generative models into “agentic AI”—systems that can reason, plan, and act autonomously. This shift demands exponentially more compute power (100x more than previously predicted, he noted), cementing NVIDIA’s role as the backbone of this transformation.

Despite challenges (early Blackwell overheating issues, U.S. export controls, and a 13% stock dip in 2025 – whatevs), NVIDIA's record-breaking 39.3 billion dollar revenue quarter in February proves its resilience. GTC 2025 reaffirmed that NVIDIA isn't just riding the AI wave; it's creating it.

One last thought: a colleague walking with me around the conference asked how this felt and what i thought. Context: i was in The Valley from 1992-2001 and then had a company headquartered out there from 2011-2018. i thought for a moment, looked around, and said, "This feels like the 90's on steroids, which was the heyday of embedded programming and what i think was then the height of some of the most performant code in the valley." i still remember when, at Apple, the NVIDIA chip was chosen over ATI's graphics chip. NVIDIA's stock was something like 2.65 / share. i still remember when, at Microsoft, the NVIDIA chip was chosen for the Xbox. NVIDIA, the 33-year-old start-up whose demise analysts keep discussing. Just like music critics – right? As i drove up and down 101 and 280, i saw all of the new buildings and names and realized: The Valley Is Back.

until then,

#iwishyouwater <- Mark Healy Solo Outer Reef Memo

@tctjr

Muzak To Blog By: Grotus, stylized as G̈r̈oẗus̈, was an industrial rock band from San Francisco, active from 1989 to 1996. Their unique sound incorporated sampled ethnic instruments, two drummers, and two bassists, and featured angry but humorous lyrics. NIN, Mr Bungle, Faith No More and Jello Biafra championed the band. Not for the faint of heart. Nevertheless great stuff.

Note: Rumor has it the Rivian SUV does, in fact, go 0-60 in 2.6 seconds with really nice seats. Also thanks to Karen and Paul for the tea and sympathy steak supper in Palo Alto. Miss ya'll!

Only In The Valley

SnakeByte[19] – Software As A Religion

Religion is regarded by the common people as true, by the wise as false, and by the rulers as useful.

Lucius Annaeus Seneca

DALL-E's Idea of Religion

First, as always, i hope everyone is safe, oh Dear Readers. Secondly, i am going to write about something that i have been pondering for quite some time, probably close to two decades.

What i call Religion Of Warez (ROWZ).

This involves someone whom i hold in the highest regard: YOU, the esteemed developer.

Marc Andreessen famously said, "Software Is Eating The World". Here is the original blog:

"Why Software Is Eating The World" by Marc Andreessen

There is a war going on for your attention. There is a war going on for your thumb typing. There is a war going on for your viewership. There is a war going on for your selfies. There is a war going on for your emoticons. There is a war going on for github pull requests.

There is a war going on for the output of your prompting.

We have entered into The Great Cognitive Artificial Intelligence Arms Race (TGCAIAR) via camps of Large Language Model foundation-model creators.

The ability to deploy the warez needed to wage war on YOU, Oh Dear Reader, is much more complex from an ideological perspective. i speculate that Software, if i may use that term as an entity, is a non-theistic religion. Even within the Main Tabernacle of Software (MTOS), there are various fissures of said religion, whether it be languages, architectures, or processes.


A great blog can be found here -> Software Development: It’s a Religion.

Let us head over to the LazyWebTM and do a cursory search and see what we find[1] concerning some comparison numbers for religions and software languages.

In going to Wikipedia, we find:

According to some estimates, there are roughly 4,200 religions, churches, denominations, religious bodies, faith groups, tribes, cultures, movements, ultimate concerns, which at some point in the future will be countless.

Wikipedia

Worldwide, more than eight-in-ten people identify with a religious group. i suppose even though we don’t like to be categorized, we like to be categorized as belonging to a particular sect. Here is a telling graphic:

Let us map this to just computer languages. Just how many computer languages are there? i guessed 6,000 in aggregate. There are about 700 main programming languages, including esoteric coding languages. From what i can ascertain, lists of only the notable languages add up to 245. Another list called HOPL, which claims to include every programming language ever to exist, puts the total number of programming languages at 8,945.

So i wasn’t that far off.

Why so much kerfuffle on languages? For those that have ever had a language discussion, did it feel like you were discussing religion? Hmmmm?

Hey, my language does automatic heap management. Why are you messing with memory allocation via this dumb thing called pointers?

The Art of Computer Programming is mapping an idea into a binary computational translation (classical computing rates apply). This process is highly inefficient compared to having binary-to-binary discussions[2]. Note we are not even considering architectures or methods in this mapping. Let us keep it at English to binary representation. What is the dimensionality reduction for that mapping? What is lost in translation?

For reference, i found a very precise and well-written blog here -> How Much Code Has Ever Been Written?

The calculation involves the number of lines of code ever written up to that point sans the exponential rate from the past two years:

2,781,000,000,000

Roughly 2.8 Trillion Lines of Code have been written in the past 20 years.

Sage McEnery 2020

As i always like to do, i refer to the Merriam-Webster dictionary. It holds a soft spot in my heartstrings, as i used to read it in grade school. (Yes, i read the dictionary…)

Religion: Noun

re·li·gion (ruh·li·jen)

: a cause, principle, or system of beliefs held to with ardor and faith

Well, now, Dear Reader, the proverbial plot thickens. A system of beliefs held to with ardor and faith. Nowadays, religion is utilized as a concept applied to a genus of social formations that includes several members, a type of which there are many tokens or facets.

If this is, in fact, the case, I will venture to say that Software could be considered a Religion.

One must then ask: Is there “a model” to the madness? Do we go the route of the core religions? Would we dare say the Belief System Of The Warez[3] should be included as a prominent religion?

Symbols Of The World Religions

I have said several times and will continue to say that Software is one of the greatest human endeavors of all time. It is at the essence of ideas incarnate.

It has been said that if you adopt the science, you adopt the ideology. Such hatred or fear of science has always been justified in the name of some ideology or other.

If we take this as the undertone for many new aspects of software, we see that the continuum of mind varies within the perception of the universe by which we are affected by said software. It is extremely canonical and first order.

Most often, we anthropomorphize most things and our software is no exception. It is as though it were an entity or even a thing in the most straightforward cases. It is, in fact, neither. It is just information imputed upon our minds via probabilistic models and non-convex optimization methods. It is as if it were a Rorschach test that allowed many people to project their own meaning onto it (sound familiar?).

Let me say this a different way. With the advent of ChatGPT, we seem to desire IT to be alive or to reason somehow, someway, yet we don’t want it to turn into the Terminator.

Stock market predictions – YES

Terminator – NO.

The Thou Shalts Will Kill You

~ Joseph Campbell

Now we are quickly entering a time where we have “agentic” large language models that can be scripted for specific tasks and then chained together to perform multiple tasks.

Now we have large language models distilling information gleaned from other LLMs. Whose peanut butter is in the chocolate? Is there a limit of growth here for information? Asymptotic token computation, if you will?

We are nowhere near the end of writing the Religion Of Warez (ROWZ) sacred texts compared to the Bible, Sutras, Vedas, the Upanishads, the Bhagavad Gita, Quran, Agamas, Torah, Tao Te Ching or Avesta, even the Satanic Bible. My apologies if i left your special tome out; it wasn’t on purpose. i could have listed thousands. BTW, for reference, there is even a religion called the Partridge Family Temple. The cult’s members believe the characters are archetypal gods and goddesses.

In fact, we have just begun to author the Religion Of Warez (ROWZ) sacred text. The next chapters are going to be accelerated and written via generative adversarial networks, stable diffusion, and reinforcement learning transformer technologies.

Which, then, one must ask: which Deity are YOU going to choose?

i wrote a little stupid python script to show relationships of coding languages based on dates for the main ones. Simple key value stuff. All hail the gods K&R for creating C.

import networkx as nx
import matplotlib.pyplot as plt

def create_language_graph():
    G = nx.DiGraph()
    
    # Nodes (Programming languages with their release years)
    languages = {
        "Fortran": 1957, "Lisp": 1958, "COBOL": 1959, "ALGOL": 1960,
        "C": 1972, "Smalltalk": 1972, "Prolog": 1972, "ML": 1973,
        "Pascal": 1970, "Scheme": 1975, "Ada": 1980, "C++": 1983,
        "Objective-C": 1984, "Perl": 1987, "Haskell": 1990, "Python": 1991,
        "Ruby": 1995, "Java": 1995, "JavaScript": 1995, "PHP": 1995,
        "C#": 2000, "Scala": 2003, "Go": 2009, "Rust": 2010,
        "Common Lisp": 1984
    }
    
    # Adding nodes
    for lang, year in languages.items():
        G.add_node(lang, year=year)
    
    # Directed edges (influences between languages)
    edges = [
        ("Fortran", "C"), ("Lisp", "Scheme"), ("Lisp", "Common Lisp"),
        ("ALGOL", "Pascal"), ("ALGOL", "C"), ("C", "C++"), ("C", "Objective-C"),
        ("C", "Go"), ("C", "Rust"), ("Smalltalk", "Objective-C"),
        ("C++", "Java"), ("C++", "C#"), ("ML", "Haskell"), ("ML", "Scala"),
        ("Scheme", "JavaScript"), ("Perl", "PHP"), ("Python", "Ruby"),
        ("Python", "Go"), ("Java", "Scala"), ("Java", "C#"), ("JavaScript", "Rust")
    ]
    
    # Adding edges
    G.add_edges_from(edges)
    
    return G

def visualize_graph(G):
    plt.figure(figsize=(12, 8))
    pos = nx.spring_layout(G, seed=42)
    years = nx.get_node_attributes(G, 'year')
    
    # Color nodes based on their release year
    node_colors = [plt.cm.viridis((years[node] - 1950) / 70) for node in G.nodes]
    
    nx.draw(G, pos, with_labels=True, node_color=node_colors, edge_color='gray', 
            node_size=3000, font_size=10, font_weight='bold', arrows=True)
    
    plt.title("Programming Language Influence Graph")
    plt.show()

if __name__ == "__main__":
    G = create_language_graph()
    visualize_graph(G)

Programming Relationship Diagram

So, folks, let me know what you think. I am considering authoring a much longer paper comparing behaviors, taxonomies and the relationship between religions and software.

i would like to know if you think this would be a worthwhile piece?

Until Then,

#iwishyouwater <- Banzai Pipeline January 2023. Amazing.

@tctjr

MUZAK TO BLOG BY: Baroque Ensemble Of Vienna – “Classical Legends of Baroque”. i truly believe i was born in the wrong century when i listen to this level of music. Candidly, J.S. Bach is by far my favorite composer, going back to when i was in 3rd grade. BRAVO! Stupendum Perficientur!

[1] Ever notice that searching is not finding? i prefer finding. Someone needs to trademark “Finding Not Searching”. The same vein as catching ain’t fishing.

[2] Great paper from OpenAI on just this subject: two agents having a discussion (via reinforcement learning) : https://openai.com/blog/learning-to-communicate/ (more technical paper click HERE)

[3] For a great read i refer you to The Ware Tetralogy by Rudy Rucker: Software (1982), Wetware (1988), Freeware (1997), Realware (2000)

[4] When the words “software” and “engineering” were first put together [Naur and Randell 1968] it was not clear exactly what the marriage of the two into the newly minted term really meant. Some people understood that the term would probably come to be defined by what our community did and what the world made of it. Since those days in the late 1960’s a spectrum of research and practice has been collected under the term.

What Would Nash,Shannon,Turing, Wiener and von Neumann Think?

An image of the folks mentioned above via the GAN du jour

First, as usual, i trust everyone is safe. Second, I’ve been “thoughting” a good deal about how the world is being eaten by software and, recently, machine learning. i personally have a tough time with using the words artificial intelligence.

What Would Nash, Shannon, Turing, Wiener, and von Neumann Think of Today’s World?

The modern world is a product of the mathematical and scientific brilliance of a handful of intellectual pioneers whom i call the Horsemen of The Digital Future. i consider these humans to be my heroes and persons i aspire to emulate, whereas most of us have not accomplished one-quarter of the work product these humans created for humanity. Among these giants are Dr. John Nash, Dr. Claude Shannon, Dr. Alan Turing, Dr. Norbert Wiener, and Dr. John von Neumann. Each of them, in their own way, laid the groundwork for concepts that now define our digital and technological age: game theory, information theory, artificial intelligence, cybernetics, and computing. But what would they think if they could see how their ideas, theories and creations have shaped the 21st century?

A little context.

John Nash: The Game Theorist

John Nash revolutionized economics, mathematics, and strategic decision-making through his groundbreaking work in game theory. His Nash Equilibrium describes how parties, whether they be countries, companies, or individuals, can find optimal strategies in competitive situations. Today, his work influences fields as diverse as economics, politics, and evolutionary biology. NOTE: Computational Consensus Not So Hard; Carbon (Human) Consensus Nigh Impossible.

The Nash equilibrium is the set of degradation strategies

    \[(E_i^*, E_j^*)\]

such that, if both players adopt it, neither player can achieve a higher payoff by unilaterally changing strategies. Therefore, two rational agents should be expected to pick the Nash equilibrium as their strategy.
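As a toy illustration, here is a brute-force search for pure-strategy Nash equilibria in a two-player game; the payoff matrices are made-up Prisoner’s Dilemma style numbers, purely illustrative:

import numpy as np

# payoff_a[i, j] = payoff to player A when A plays row i and B plays column j
payoff_a = np.array([[3, 0],
                     [5, 1]])
payoff_b = payoff_a.T  # symmetric game: B faces the mirrored payoffs

def pure_nash(pa, pb):
    equilibria = []
    for i in range(pa.shape[0]):
        for j in range(pa.shape[1]):
            a_best = pa[i, j] >= pa[:, j].max()  # A cannot gain by deviating
            b_best = pb[i, j] >= pb[i, :].max()  # B cannot gain by deviating
            if a_best and b_best:
                equilibria.append((i, j))
    return equilibria

print(pure_nash(payoff_a, payoff_b))  # [(1, 1)] -> mutual defection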

If Nash were alive today, he would be amazed at how game theory has permeated decision-making in technology, particularly in algorithms used for machine learning, cryptocurrency trading, and even optimizing social networks. His equilibrium models are at the heart of competitive strategies used by businesses and governments alike. With the rise of AI systems, Nash might ponder the implications of intelligent agents learning to “outplay” human actors and question what ethical boundaries should be set when AI is used in geopolitical or financial arenas.

Claude Shannon: The Father of Information Theory

Claude Shannon’s work on information theory is perhaps the most essential building block of the digital age. His concept of representing and transmitting data efficiently set the stage for everything from telecommunications to the Internet as we know it. Shannon predicted the rise of digital communication and laid the foundations for the compression and encryption algorithms protecting our data. He also is the father of my favorite equation mapping the original entropy equation from thermodynamics to channel capacity:

    \[H=-\sum_{i=1}^{N} P_i\,\log_2 P_i\]

The sheer elegance and magnitude are unprecedented. If he were here, Shannon would witness an explosion of data in quantities and at speeds far beyond what was conceivable in his era. The Internet of Things (IoT), big data analytics, 5G/6G networks, and quantum computing are evolutions directly related to his early ideas. He might also be interested in cybersecurity challenges, where information theory is critical in protecting global communications. Shannon would likely marvel at the sheer volume of information we produce yet be cautious of the potential misuse and the ethical quandaries regarding privacy, surveillance, and data ownership.
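As a small sketch of where that equation leads, the Shannon-Hartley theorem gives channel capacity as C = B log2(1 + S/N); the bandwidth and SNR numbers below are illustrative only:

import math

def channel_capacity_bps(bandwidth_hz, snr_linear):
    # Shannon-Hartley: C = B * log2(1 + S/N)
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (30 / 10)  # a 30 dB signal-to-noise ratio as a linear power ratio
print(f"{channel_capacity_bps(3000, snr):,.0f} bits/sec")  # ~30 kbps on a 3 kHz line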

Alan Turing: The Architect of Artificial Intelligence

Alan Turing’s vision of machines capable of performing any conceivable task laid the foundation for modern computing and artificial intelligence. His Turing Machine is still a core concept in the theory of computation, and his famous Turing Test continues to be a benchmark in determining machine intelligence.

In today’s world, Turing would see his dream of intelligent machines realized – and then some. From self-driving cars to voice assistants like Siri and Alexa, AI systems are increasingly mimicking human capabilities in specific tasks like data analysis, pattern recognition, and simple problem-solving. While Turing would likely be excited by this progress, he might also wrestle with the ethical dilemmas arising from AI, such as autonomy, job displacement, and the dangers of creating highly autonomous AI systems, as well as calling bluff on the claim that LLM systems reason in the same manner as human cognition, given they base their results on probabilistic non-convex optimizations. His work on breaking the Enigma code might inspire him to delve into modern cryptography and cybersecurity challenges as well. His reaction-diffusion model, often referred to as Turing’s Morphogenesis equations, is foundational in explaining biological pattern formation:

Turing’s reaction-diffusion system is typically written as a system of partial differential equations (PDEs):

    \[\frac{\partial u}{\partial t} = D_u \nabla^2 u + f(u, v),\]

    \[\frac{\partial v}{\partial t} = D_v \nabla^2 v + g(u, v),\]

where:

  • $u$ and $v$ are concentrations of two chemical substances (morphogens),
  • $D_u$ and $D_v$ are diffusion coefficients for $u$ and $v$,
  • $\nabla^2$ is the Laplacian operator, representing spatial diffusion,
  • $f(u, v)$ and $g(u, v)$ are reaction terms representing the interaction between $u$ and $v$.
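For the visually curious, here is a minimal numpy sketch of a reaction-diffusion system in the Gray-Scott flavor – one member of the family Turing described, not his original 1952 system – using common demo parameter values; everything here is illustrative:

import numpy as np

n, steps = 128, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065  # standard Gray-Scott demo values

u = np.ones((n, n))
v = np.zeros((n, n))
u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50   # seed a small square perturbation
v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def laplacian(z):
    # 5-point stencil with periodic boundaries
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

for _ in range(steps):
    uvv = u * v * v                        # the reaction term
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

print(u.min(), u.max())  # spots and stripes emerge in u after enough steps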

In addition to this, his contributions to cryptography and game theory alone are unfathomable.
In his famous paper, “Computing Machinery and Intelligence,” Turing posed the question, “Can machines think?” He proposed the Turing Test as a way to assess whether a machine can exhibit intelligent behavior indistinguishable from a human. This test has been a benchmark in AI for evaluating a machine’s ability to imitate human intelligence.

Given the recent advances made with large language models, i believe he would find them amusing, though he would not grant that they think or reason.

Norbert Wiener: The Father of Cybernetics

Norbert Wiener’s theory of cybernetics explored the interplay between humans, machines, and systems, particularly how systems could regulate themselves through feedback loops. His ideas greatly influenced robotics, automation, and artificial intelligence. He wrote the books “Cybernetics” and “The Human Use of Human Beings”. During World War II, his work on the automatic aiming and firing of anti-aircraft guns caused Wiener to investigate information theory independently of Claude Shannon and to invent the Wiener filter. (The now-standard practice of modeling an information source as a random process – in other words, as a variety of noise – is due to Wiener.) Initially, his anti-aircraft work led him to write, with Arturo Rosenblueth and Julian Bigelow, the 1943 article “Behavior, Purpose and Teleology.” He was also a complete pacifist. What was said about those who can hold two opposing views?

If Wiener were alive today, he would be fascinated by the rise of autonomous systems, from drones to self-regulated automated software, and the increasing role of cybernetic organisms (cyborgs) through advancements in bioengineering and robotic prosthetics. He, I would think, would also be amazed that we could do real-time frequency domain filtering based on his theories. However, Wiener’s warnings about unchecked automation and the need for human control over machines would likely be louder today. He might be deeply concerned about the potential for AI-driven systems to exacerbate inequalities or even spiral out of control without sufficient ethical oversight. The interaction between humans and machines in fields like healthcare, where cybernetics merges with biotechnology, would also be a keen point of interest for him.
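On that real-time filtering note, SciPy ships a Wiener filter in scipy.signal; here is a minimal sketch on a synthetic noisy sine wave (the signal, noise level, and window size are made up for illustration):

import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)                    # a 5 Hz tone
noisy = clean + rng.normal(scale=0.4, size=t.size)   # add Gaussian noise

denoised = wiener(noisy, mysize=15)  # local adaptive Wiener filtering

print(np.mean((noisy - clean) ** 2))     # mean-squared error before
print(np.mean((denoised - clean) ** 2))  # and after -- it should drop noticeably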

John von Neumann: The Architect of Modern Computing

John von Neumann’s contributions span so many disciplines that it’s difficult to pinpoint just one. He’s perhaps most famous for his von Neumann architecture, the foundation of most modern computer systems, and his contributions to quantum mechanics and game theory. His visionary thinking on self-replicating machines even predated discussions of nanotechnology.

Von Neumann would likely be astounded by the ubiquity and power of modern computers. His architectural design is the backbone of nearly every device we use today, from smartphones to supercomputers. He would also find the developments in quantum computing significant, aligning with his work in quantum mechanics. As someone who worked on the Manhattan Project (alongside Oppenheimer), von Neumann might also reflect on the dual-use nature of technology – the incredible potential of AI, nuclear power, and autonomous weapons to both benefit and harm humanity. His early concerns about the potential for mutual destruction could be echoed in today’s discussions on AI governance and existential risks.

What Would They Think Overall?

Together, these visionaries would undoubtedly marvel at how their individual contributions have woven into the very fabric of today’s society. The rapid advancements in AI, data transmission, computing power, and autonomous systems would be thrilling, but they might also feel a collective sense of responsibility to ask:

Where do we go from here?

Once again Oh Dear Reader You pre-empt me….

A colleague sent me this paper, which was the impetus for this blog:

My synopsis of said paper:


“The Tensor as an Informational Resource” discusses the mathematical and computational importance of tensors as resources, particularly in quantum mechanics, AI, and computational complexity. The authors propose new preorders for comparing tensors and explore the notion of tensor rank and transformations, which generalize key problems in these fields. This paper is vital for understanding how the foundational work of Nash, Shannon, Turing, Wiener, and von Neumann has evolved into modern AI and quantum computing. Tensors offer a new frontier in scientific discovery, building on their theories and pushing the boundaries of computational efficiency, information processing, and artificial intelligence. It’s an extension of their legacy, providing a mathematical framework that could revolutionize our interaction with quantum information and complex systems. It is fundamental to systems that appear to learn, where information-theoretic transforms are the very Rosetta Stone of how we perceive the world through perceptual filters of reality.

This shows the continuing relevance of ALL their ideas in today’s rapidly advancing AI and fluid computing technological landscape.

They might question whether today’s technology has outpaced ethical considerations and whether the systems they helped build are being used for the betterment of all humanity. Surveillance, privacy, inequality, and autonomous warfare would likely weigh heavily on their minds. Yet, their boundless curiosity and intellectual rigor would inspire them to continue pushing the boundaries of what’s possible, always seeking new answers to the timeless question of how to create the future we want and live better, more enlightened lives through science and technology.

Their legacy lives on, but so does their challenge to us: to use the tools they gave us wisely for the greater good of all.

Or would they be dismayed that we use all of this technology to make a PowerPoint to save time so we can watch TikTok all day?

Until Then,

#iwishyouwater <- click and see folks who got the memo

𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr) / X

Music To Blog By: Bach: Mass in B Minor, BWV 232. By far my favorite composer. The John Eliot Gardiner and Monteverdi Choir version circa 1985 is astounding.

SnakeByte[16]: Enhancing Your Code Analysis with pyastgrep

Dalle 3’s idea of an Abstract Syntax Tree in R^3 space

If you would know strength and patience, welcome the company of trees.

~ Hal Borland

First, I hope everyone is safe. Second, I am changing my usual SnakeByte[] process. I am pulling this from a website I ran across. I saw the library mentioned, so I decided to pull from the LazyWebTM instead of the usual snake-based tomes I have in my library.

As a Python developer, understanding and navigating your codebase efficiently is crucial, especially as it grows in size and complexity. Trust me, it will, as does Entropy. Traditional search tools like grep or IDE-based search functionalities can be helpful, but they often cannot “understand” the structure of Python code – sans some of the Co-Pilot developments. (I’m using understand here *very* loosely, Oh Dear Reader).

This is where pyastgrep comes into play, offering a powerful way to search and analyze your Python codebase using Abstract Syntax Trees (ASTs). While going into the theory of ASTs is tl;dr for a SnakeByte[], and there appears to be some ambiguity on the history and definition of who actually invented ASTs, i have placed some references at the end of the blog for your reading pleasure, Oh Dear Reader. In parlance, if you have ever worked on compilers or core embedded systems, Abstract Syntax Trees are data structures widely used in compilers and the like to represent the structure of program code. An AST is usually the result of the syntax analysis phase of a compiler. It often serves as an intermediate representation of the program through several stages that the compiler requires and has a strong impact on the final output of the compiler.
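If you have never looked at one directly, Python ships the ast module in the standard library, so you can inspect an AST yourself. A minimal sketch (ast.dump with the indent argument requires Python 3.9+; the toy function is of course made up):

import ast

source = """
def greet(name):
    print("Hello, " + name)
"""

tree = ast.parse(source)            # syntax analysis -> abstract syntax tree
print(ast.dump(tree, indent=2))     # nested FunctionDef / Call / Name nodes

# walk the tree, conceptually what an AST-aware search does
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print("found function:", node.name, "at line", node.lineno)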

So what is the Python library that you speak of? i’m glad you asked.

What is pyastgrep?

pyastgrep is a command-line tool designed to search Python codebases with an understanding of Python’s syntax and structure. Unlike traditional text-based search tools, pyastgrep leverages the AST, allowing you to search for specific syntactic constructs rather than just raw text. Its queries are written as XPath expressions evaluated against an XML rendering of each file’s AST. This makes it an invaluable tool for code refactoring, auditing, and general code analysis.

Why Use pyastgrep?

Here are a few scenarios where pyastgrep excels:

  1. Refactoring: Identify all instances of a particular pattern, such as function definitions, class instantiations, or specific argument names.
  2. Code Auditing: Find usages of deprecated functions, unsafe code patterns, or adherence to coding standards.
  3. Learning: Explore and understand unfamiliar codebases by searching for specific constructs.

I have a mantra: Reduce, Refactor, and Reuse. Please raise your hand if y’all need to refactor your code. (C’mon now, no one is watching… tell the truth…). See if it is possible to reduce the code footprint, refactor the code into more optimized transforms, and then let others reuse it across the enterprise.

Getting Started with pyastgrep

Let’s explore some practical examples of using pyastgrep to enhance your code analysis workflow.

Installing pyastgrep

Before we dive into how to use pyastgrep, let’s get it installed. You can install pyastgrep via pip:

(base)tcjr% pip install pyastgrep  # don't actually type the (base)tcjr% prompt part - that is my virtualenv

Example 1: Finding Function Definitions

Suppose you want to find all function definitions in your codebase. With pyastgrep, this is straightforward:

pyastgrep './/FunctionDef'

This command searches for all function definitions (FunctionDef nodes) anywhere in your codebase, providing a list of files and line numbers where these definitions occur. Ok, pretty basic so far.

Example 2: Searching for Specific Argument Names

Imagine you need to find all functions that take an argument named config. This is how you can do it:

pyastgrep './/arg[@arg="config"]'

This query matches arg nodes whose arg attribute is config, helping you quickly locate where configuration arguments are being used.

Example 3: Finding Class Instantiations

To find all instances where a particular class, say MyClass, is instantiated, you can use:

pyastgrep './/Call/func/Name[@id="MyClass"]'

This command searches for instantiations of MyClass, making it easier to track how and where specific classes are utilized in your project.

Advanced Usage of pyastgrep

For more complex queries, you can combine multiple AST nodes. For instance, to find all print calls in your code, you might use:

pyastgrep './/Call/func/Name[@id="print"]'

This command finds all calls to the print function. You can also use more detailed queries to find nested structures or specific code patterns.

Integrating pyastgrep into Your Workflow

Integrating pyastgrep into your development workflow can greatly enhance your ability to analyze and maintain your code. Here are a few tips:

  1. Pre-commit Hooks: Use pyastgrep in pre-commit hooks to enforce coding standards or check for deprecated patterns (a minimal sketch follows this list).
  2. Code Reviews: Employ pyastgrep during code reviews to quickly identify and discuss specific code constructs.
  3. Documentation: Generate documentation or code summaries by extracting specific patterns or structures from your codebase.
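Here is that minimal pre-commit style sketch; the XPath query, the choice to flag eval(), and the stdout-based match test are my assumptions – adapt them to your codebase and hook manager of choice:

import subprocess
import sys

# hypothetical deprecated pattern: any call to eval()
DEPRECATED_QUERY = './/Call/func/Name[@id="eval"]'

def main() -> int:
    # shell out to pyastgrep; any printed match means the pattern exists
    result = subprocess.run(
        ["pyastgrep", DEPRECATED_QUERY, "."],
        capture_output=True, text=True,
    )
    if result.stdout.strip():
        print("Deprecated pattern found:")
        print(result.stdout)
        return 1  # non-zero exit blocks the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())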

Example Script

To get you started, here’s a simple Python script using pyastgrep to search for all function definitions in a directory:

from subprocess import run

def search_function_definitions(directory):
    # shell out to pyastgrep and print whatever it finds
    result = run(['pyastgrep', './/FunctionDef', directory],
                 capture_output=True, text=True)
    print(result.stdout)

if __name__ == "__main__":
    directory = "path/to/your/codebase"  # yes this is not optimal folks just an example.
    search_function_definitions(directory)

Replace "path/to/your/codebase" with the actual path to your Python codebase, and run the script to see pyastgrep in action.

Conclusion

pyastgrep is a powerful tool that brings the capabilities of AST-based searching to your fingertips. By understanding and leveraging the syntax and structure of your Python code, pyastgrep allows for more precise and meaningful code searches. Whether you’re refactoring, auditing, or simply exploring code, pyastgrep can significantly enhance your productivity and code quality. This is a great direct addition to your arsenal. Hope it helps and i hope you found this interesting.

Until Then,

#iwishyouwater <- The best of the best at Day1 Tahiti Pro presented by Outerknown 2024

𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr) / X

MUZAK to Blog By: SweetLeaf: A Stoner Rock Salute to Black Sabbath. While i do not really like bands that do covers, this is very well done. For other references to the Best Band In Existence ( Black Sabbath) i also refer you to Nativity in Black Volumes 1&2.

References:

[1] Basics Of AST

[2] The person who made pyastgrep

[3] Wikipedia page on AST

Execution Is Everything

bulb 2 warez

Even if we crash and burn and lose everything, the experience is worth ten times the cost.

~ S. Jobs

As always, Oh Dear Readers, i trust this finds you safe. Second, to those affected by the SVB situation – Godspeed.

Third, i was inspired to write a blog on “Doing versus Thinking,” and then i decided on the title “Execution Is Everything”. This statement happens to be located at the top of my LinkedIn Profile.

The impetus for this blog came from a recent conversation with an executive who told me, “I made the fundamental mistake of falling in love with the idea and quickly realized that ideas are cheap; it is the team that matters.”

i’ve written about this very issue on several occasions. In Three T’s of a Startup to Elite Computing, i have explicitly stated ideas are cheap, a dime a dozen. Tim Ferriss, in the amazing book “Tools Of Titans,” interviews James Altucher, who does this exercise every day:

This is taken directly from the book in his words, but condensed for space. Here are some examples of the types of lists James makes:

  • 10 old ideas I can make new
  • 10 ridiculous things I would invent (e.g., the smart toilet)
  • 10 books I can write (The Choose Yourself Guide to an Alternative Education, etc).
  • 10 business ideas for Google/Amazon/Twitter/etc.
  • 10 people I can send ideas to
  • 10 podcast ideas or videos I can shoot (e.g., Lunch with James, a video podcast where I just have lunch with people over Skype and we chat)
  • 10 industries where I can remove the middleman
  • 10 things I disagree with that everyone else assumes is religion (college, home ownership, voting, doctors, etc.)
  • 10 ways to take old posts of mine and make books out of them
  • 10 people I want to be friends with (then figure out the first step to contact them)
  • 10 things I learned yesterday
  • 10 things I can do differently today
  • 10 ways I can save time
  • 10 things I learned from X, where X is someone I’ve recently spoken with or read a book by or about. I’ve written posts on this about the Beatles, Mick Jagger, Steve Jobs, Charles Bukowski, the Dalai Lama, Superman, Freakonomics, etc.
  • 10 things I’m interested in getting better at (and then 10 ways I can get better at each one)
  • 10 things I was interested in as a kid that might be fun to explore now (like, maybe I can write that “Son of Dr. Strange” comic I’ve always been planning. And now I need 10 plot ideas.)
  • 10 ways I might try to solve a problem I have. This has saved me with the IRS countless times. Unfortunately, the Department of Motor Vehicles is impervious to my superpowers

Is your brain tired of just “thinking” about doing those gymnastics?

i cannot tell you how many people have come to me and said “hey I have an idea!” Great, so do you and countless others. What is your plan for making it a reality? What is your maniacal passion every day to get this thing off the ground and make money?

The statement “Oh I/We thought about that 3 years ago” is not a qualifier for anything except the fact that you thought it and didn’t execute on said idea. You know why?

Creating software from an idea that runs 24/7 is still rather difficult. In fact VERY DIFFICULT.

“Oh We THOUGHT about that <insert number of days or years ago here>.” i call the above commentary “THOUGHTING”. Somehow the THOUGHT is manifested from Ideas2Bank? If that is a process, i’d love to see the burndown chart on that one. No, Oh Dear Readers, THOUGHTING is about as useful as that overly complex PowerPoint that gets edited ad nauseam, and people confuse the “slideware” with “software”. The only code that matters is this:

Code that is written with the smallest OPEX and highest margins, thereby increasing Revenue Per Employee – unless you choose to put it in open source for a wonderful plethora of reasons or you are providing a philanthropic service.

When it comes to creating software, “Execution is everything” gets tossed around just like the phrase “It Just Works” as a requirement. At its core, this phrase means that the ability to bring an idea to life through effective implementation is what separates successful software from failed experiments.

The dynamic range between average and the best is 2:1. In software it is 50:1, maybe 100:1. Very few things in life are like this. I’ve built a lot of my success on finding these truly gifted people.

~ S. Jobs

In order to understand why execution is so critical in software development, it’s helpful first to consider what we mean by “execution.” Simply put, execution refers to the process of taking an idea or concept and turning it into a functional, usable product. This involves everything from coding to testing, debugging to deployment, and ongoing maintenance and improvement.

When we say that execution is everything in software development, what we’re really saying is that the idea behind a piece of software is only as good as the ability of its creators to make it work in the real world. No matter how innovative or promising an idea may seem on paper, it’s ultimately worthless if it can’t be brought to life in a way that users find valuable and useful.

You can fail at something you dislike just as easily as something you like so why not choose what you like?

~ J. Carey

This is where execution comes in. In order to turn an idea into a successful software product, developers need to be able to navigate a complex web of technical challenges, creative problem-solving, and user feedback. They need to be able to write code that is clean, efficient, and scalable. They need to be able to test that code thoroughly, both before and after deployment. And they need to be able to iterate quickly and respond to user feedback in order to improve and refine the product continually.

The important thing is to dare to dream big, then take action to make it come true.

~ J. Girard

All of these factors require a high degree of skill, discipline, and attention to detail. They also require the ability to work well under pressure, collaborate effectively with other team members, and stay focused on the ultimate goal of creating a successful product.

The importance of execution is perhaps most evident when we consider the many examples of software projects that failed despite having what seemed like strong ideas behind them. From buggy, unreliable apps to complex software systems that never quite delivered on their promises, there are countless examples of software that fell short due to poor execution.

On the other hand, some of the most successful software products in history owe much of their success to strong execution. Whether we’re talking about the user-friendly interface of the iPhone or the robust functionality of PayPal’s protocols, these products succeeded not just because of their innovative ideas but because of the skill and dedication of the teams behind them.

The only sin is mediocrity[1].

~ M. Graham

In the end, the lesson is clear: when it comes to software development, execution really is everything. No matter how brilliant your idea may be, it’s the ability to turn that idea into a functional, usable product that ultimately determines whether your software will succeed or fail. By focusing on the fundamentals of coding, testing, and iterating, developers can ensure that their software is executed to the highest possible standard, giving it the best chance of success in an ever-changing digital landscape.

So go take that idea and turn it into a Remarkable Viable Product, not a Minimum Viable Product! Who likes Minimum? (thanks R.D.)

Be Passionate! Go DO! Go Create!

Go Live Your Personal Legend!

A great video stitching of discussions from Steve Jobs on execution, and passion – click here-> The Major Thinkers Steve Jobs

Until then,

#iwishyouwater <- yours truly hitting around 31 meters (~100ft) on #onebreath

@tctjr

Muzak To Blog By: Todd Hannigan “Caldwell County.”

[1] “The only sin is mediocrity” is not true; if there were a real Sin it should be Stupidity, but the quote fits well in the narrative.

What Is Your KulChure?

Got It?

We are organized like a startup. We are the biggest startup on the planet.

S. Jobs

First, i hope everyone is safe. Second, this blog is about something everyone seems to be asking me about and talking about, but no one seems to be able to execute on the concept, much like interoperability in #HealthIT. Third, it is long-form content, so in most cases tl;dr.

CULTURE.

Let us look to Merriam-Webster’s Online Dictionary for a definition – shall we?

cul·​ture <ˈkəl-chər>

1

a: the customary beliefs, social forms, and material traits of a racial, religious, or social group; also: the characteristic features of everyday existence (such as diversions or a way of life) shared by people in a place or time; popular culture, Southern culture

b: the set of shared attitudes, values, goals, and practices that characterizes an institution or organization a corporate culture focused on the bottom line

c: the set of values, conventions, or social practices associated with a particular field, activity, or societal characteristic studying the effect of computers on print culture

d: the integrated pattern of human knowledge, belief, and behavior that depends upon the capacity for learning and transmitting knowledge to succeeding generations

2

a: enlightenment and excellence of taste acquired by intellectual and aesthetic training

b: acquaintance with and taste in fine arts, humanities, and broad aspects of science as distinguished from vocational and technical skills; a person of culture

3: the act or process of cultivating living material (such as bacteria or viruses) in prepared nutrient media also a product of such cultivation

4: CULTIVATION, TILLAGE

5: the act of developing the intellectual and moral faculties especially by education

6: expert care and training; beauty culture

Wow, this sounds complicated. Which one to leave in and which one to leave out?

Add to this complexity the fact that creating and executing production software is almost an insurmountable task. i have said for years software creation is one of the most significant human endeavors of all time. i also believe related to these concerns is the interplay between comfort and solutions. Most if not all humans desire solutions; however, as far as i can tell, solutions are never comfortable. Solutions involve change, and most humans are homeostatic. Juxtapose this against the fact that humans love comfort. So what do you do?

So why does it seem like everyone is talking about kəl-chər? i consider this to be like Fight Club. The 1st rule of kəl-chər is you don’t talk about culture. It should be an implicit aspect of your organization. Build or re-build it as a first principles engineering practice. Perform root cause analysis of the behaviors within the company. If it does in fact need to be re-built, start with you and your leadership. Turn the mirror on you first. Understand that you must lead by example. Merit Not Inherit.

i’ve recently been asked how you change and align culture. Well, here are my recommendations, and it comes down to TRUST at ALL levels of the company.

Create an I3 Lab: Innovation, Incubation, Intrapreneurship:

Innovation without code is just ideas, and everyone has them. Ideas are cheap. Incubation without product-market fit is a dead code base. Intrapreneurship is the spirit of a system that encourages employees to think and act like individual entrepreneurs and empowers them to take action, embrace risk, and make decisions as if they had founded the company themselves. Innovate – create the idea. Incubate – create the Maximum Viable Product (not minimum). Intrapreneurship – spin out the Maximum Viable Product. As an aside, Minimum Viable Product sounds like you bailed out on making the best you possibly could in the moment. Take that Maximum Viable Product and roll it into a business vertical and go-to-market strategy – then spin the wheel again.

I think it’s very important to have a feedback loop, where you’re constantly thinking about what you’ve done and how you could be doing it better.

E. Musk

Value The Most Important Asset – Your People

Managing high-performance humans is a difficult task because most high-performance humans do not like to be managed; they love to be led. Lead them by example. Value them and compensate them accordingly. Knowledge workers love achievement and goals. Lead them into the impossible, gravitate toward dizzying heights, and be there for them. Be completely transparent and communicate. Software is always broken. If anyone states differently, they are not telling the truth. There is always refactoring, retargeting, more code coverage, and nascent bugs. Let them realize you realize that; however, let them know that if they do make a mistake, they should escalate immediately. Under no circumstances can you tolerate surprises. Give them the framework with OKRs and KPIs that lets them communicate openly, efficiently, effectively, and most importantly transparently. Great teams will turn pencils into Mont Blanc fountain pens. Let them do what they do best and reward them!

Process Doesn’t Make A Culture

Nor does it make great products. Many focus on some software process. Apple used, and as far as i know still uses, strict waterfall. As far as i am concerned, we are now trending towards a Holacracy type of environment, which is a self-organizing environment. However, this can only be achieved with the proper folks who appreciate the friction of creating great products from the best ideas. The process of evolving from an idea to a product is magic. You learn, you evolve; you grow your passion for and into the product, and the product becomes a reflection of the team that built it. Your idea and passion are inherent in that shipping software (or hardware).

What do you want me to do

To do for you to see you through?

The Grateful Dead

Empower Your People

Provide your people the ability to manage themselves and have autonomy. Set them free. Trust them to make the decisions that will drive the company and projects into world-class endeavors. Take a chance with them. Let a new college graduate push some code to production. Let a new sales associate push a deal with a customer. Let your new marketing person design an area on the company site. Allow them to evolve, grow, and be a part of the Great Endeavor. Put them in charge and provide the framework for autonomy to make decisions, and when they deliver – reward them not with something ephemeral but with something volitional. Money and stock work wonders. Empower. Align. Evolve.

Provide and Articulate a Common Vision

Provide a vision of the company or project. Two sentences that everyone understands. Most people who are empowered and given the frameworks to create within know what to do in these circumstances. Articulate the vision and gain common alignment across the organization or project. That is the leadership that high-performance teams desire. Take this alignment, map it into the OKRs and KPIs, then in turn pick a process and let everyone know how this aligns to the vision. Create the environment where every line of code maps to that vision. Show commitment to this vision.

Give FeedBack

Communicate. Communicate. Communicate. Collaborate. Collaborate. Collaborate. Till you puke. i cannot emphasize this enough. You must be prepared every day to manically interact with your teams and have the hard, friction-filled, uncomfortable discussions. You want to keep the top performers? Let them know where they stand, how they stand, and why they stand in the rankings and how they are contributing to the vision. Again, attempt to create coder-metrics across your organization or project that exemplify this performance. Interact with your most important asset: your people. Over-communicate. We have the ability to reach everyone at any time – email, zoom, slack – where granite tablets were once used to message. Write the message and give feedback. Better yet, go take a walk with them. Have 1:1s. Listen to your people receptively and without bias and judgment about their concerns, passions, what scares them, what makes them happy, their joys, goals, and aspirations so they feel validated and understood. Solicit feedback, shut up, and listen.

What all of this comes down to is what i call Amplifying_OthersTM. This is easier said than done. Personally, i believe that you need to commit even to the point of possibly finding them a better fit for a position at another company. This goes back to understanding what truly drives the only asset there is in technology: the people. Always Be Listening, Always Be Networking, and Always Be Recruiting.

This brings up the next big question for your company – how do you attract the best right talent? Hmmmm… that might be another blog. Let me know your thoughts on these matters in the comments.

Until Then,

#IWishYouWater <- Psycho Session In Mentawis

@tctjr

Music To Blog By:

American Beauty by The Grateful Dead. Box of Rain and Ripple are amazing. Also if you haven’t heard Jane’s Addiction’s cover of Ripple check it out. i am not a Dead fan but the lyrics on some of these songs are monumental.

References (click for purchase link):

The Psychology of Computer Programming

Mythical Man Month

The Essence of Software: Why Concepts Matter for Great Design

Freedom From The Flesh

The human body resonates at the same frequency as Mother Earth. So instead of only focusing on trying to save the earth, which operates in congruence to our vibrations, I think it is more important to be one with each other. If you really want to remedy the earth, we have to mend mankind. And to unite mankind, we heal the Earth. That is the only way. Mother Earth will exist with or without us. Yet if she is sick, it is because mankind is sick and separated. And if our vibrations are bad, she reacts to it, as do all living creatures.

Suzy Kassem, Rise Up and Salute the Sun: The Writings of Suzy Kassem

Image from: The Outer Limits (1963) S 2 E 15 “The Brain of Colonel Barham”

i have been considering writing a series of blogs on the coming age of Cybernetics. For those unaware, Cybernetics was a term coined by one of my heroes, Dr. Norbert Wiener. Dr. Wiener was a professor of mathematics at MIT who early on became interested in stochastic and noise processes; an entire area of Wiener Estimation Theory is dedicated to him. However, he also is credited as being one of the first to theorize that all intelligent behavior was the result of feedback mechanisms that could possibly be simulated by machines, thereby coining the phrase “Cybernetics”. He wrote two seminal books in this area: (1) “Cybernetics” and (2) “The Human Use of Human Beings”. This brings us to the present issue at hand.

The catalyst for the blog came from a tweet:

More concerning Ted is how long before people start paying for upgrades. What effects will this have when you have achieved functional immortality?

@mitchparkercisco

This was in response to me RT’ing a tweet from @alisonBLowndes concerning the International Manifesto of the “2045” Strategic Social Initiative. An excerpt from said manifesto:

We believe that before 2045 an artificial body will be created that will not only surpass the existing body in terms of functionality but will achieve perfection of form and be no less attractive than the human body. 

2045 Manifesto

Now for more context. I am a proponent of using technology to allow for increased human performance, as i am an early adopter, if you will, of the usage of titanium to repair the skeletal system. i have staples, pins, plates, and complete joints of titanium from pushing “Ye Ole MeatBag” into areas where it did not fare so well.

There are several objectives of the movement; of specific interest is Number Two:

To create an international research center for cybernetic immortality to advance practical implementations of the main technical project – the creation of the artificial body and the preparation for subsequent transfer of individual human consciousness to such a body.

2045 Manifesto

This is closely related to Transhumanism which is more of a philosophy than an execution. The way i frame it is Transhumanism sets the philosophical framework for cybernetics. The contemporary meaning of the term “transhumanism” was foreshadowed by one of the first professors of futurology, a man who changed his name to FM-2030. In the 1960s, he taught “new concepts of the human” at The New School when he began to identify people who adopt technologies, lifestyles and worldviews “transitional” to post-humanity as “transhuman“.

Coming from a software standpoint, we could map these areas – material, biological, or conscious – into pipelines and deploy as needed. We could even map them into a CI/CD deployment pipeline.

For a direct reference, i work with an amazing practicing nocturnist who is also a great coder as well as a medical 3D printing expert! He prints body parts! It is pretty amazing to think that something you’re holding that was printed that morning is going to enable someone to use their respective limbs or walk again. So humbling. By the way, the good doctor is also a really nice person. Just truly a great human. Health practitioners are truly some of humanity’s rockstars.

He printed me a fully articulated purple octopus that doesn’t go in the body:

Building upon this edict, and for those who have read William Gibson’s “Neuromancer” or Rudy Rucker’s The Ware Tetralogy: Software, Wetware, Freeware, Realware, it calls into question the very essence of the use for the human body. The flesh is the only aspect we truly do know and associate with this thing called life. We continually talk about the “Big C Word” – Consciousness. However, we only know the body. We have no idea of the mind. Carnality seems to rule the roost for the Humans.

In fact, most of the acts that we perform on a daily basis trend toward physical pleasure. However, what if we can upgrade the pleasure centers? What if the whole concept of dysphoria goes away and you can order yourself a net-new body? What *if* this allows us, as the above ponders, to upgrade ad nauseam and live forever? Would you? Would it get tiresome and boring?

i can unequivocally tell you i would if given the chance. Why? Because if there *is* intelligent life somewhere, then they have a much longer evolutionary scale than we mere humans on earth, and they have figured out some stuff, let’s say, that can possibly change the way we view life on longer time scales (or time loops, or even, can we say, Infinite_Human_Do_Loops?).

i believe we are coming to an age where, instead of “50 is the new 30,” we can choose our age – let’s say at 100 i choose a new core and legs and still run a 40-yard dash in under 5 seconds? i am all for it.

What if you choose a body that is younger than your progeny?

What if you choose a body that isn’t a representation of a human as we know it?

All with immortality?

i would love to hear folks thoughts on the matter in the comments below.

For reference again here is the link -> 2045 Manifesto.

Until then,

Be Safe.

#iwishyouwater

@tctjr

Muzak To Blog By: Maddalena by Ennio Morricone

Rolling Ubuntu On An Old Macintosh Laptop

“What we’re doing here will send a giant ripple through the universe.”

Steve Jobs

I have an old Mac laptop that was not doing anyone much good sitting around the house. i had formatted the rig, and due to it only being an i7 Pentium-series Mac, you could only roll up to the Lion OS. Also, i wanted a “pure” Linux rig and do not like other form factors (although i do dig the System 76 rigs).

So i got to thinking: why don’t i roll Ubuntu on it and let one cat turn into another cat? See what i did there? Put a little shine on Ye Ole Rig? Here Kitty Kitty!

Anyways, here are the steps that i found to be the most painless.

Caveat Emptor: these steps completely wipe the partition, and Linux runs natively, wiping out any and all OSes. You WILL lose your OS X Recovery Partition, so returning to OS X or macOS can be a more long-winded process, but there are instructions on how to cope with this: How to restore a Mac without a recovery partition. You are going All-In!

On that note, i also don’t recommend trying to “dual-boot” OS X and Linux, because they use different filesystems and it will be a pain. Anyways, this is about bringing new life to an old rig; if you have a new rig with Big Sur, roll VirtualBox and run whatever Linux distro you desire.

What you need:

  • A Macintosh computer – it is the point of the whole exercise. i do recommend having NO EXTERNAL DRIVES connected, as you will see below.
  • A USB stick with at least 8 gigs of storage. This too will be formatted and all data lost.
  • Download your Linux distribution to the Mac. We recommend Ubuntu 16.04.4 LTS if this is your first Linux install. Save the file to your ~/Downloads folder.
  • Download and install an app called Etcher from Etcher.io. This will be used to copy the Linux install .ISO file to your USB drive.

Steps to Linux Freedom:

  • Insert your USB Thumb Drive. A reminder that the USB Flash drive will be erased during this installation process. Make sure you’ve got nothing you want on it.

  • Open Etcher and click Select “Image”. Choose ubuntu-16.04.4-desktop-amd64.iso (the image you downloaded in Step 1). NOTE: i had some problems with the latest 20.x release and wireless, so i rolled back to 16.0x just to get it running.
  • Click “Change” under Select Drive. 

  • Pick the drive that matches your USB Thumb Drive in size. It should be /dev/disk1 if you only have a single hard drive in your Mac, or /dev/disk2, /dev/disk3 and so on (if you have more drives attached). NOTE: Do not pick /dev/disk0 – that’s your hard drive! Pick /dev/disk0 and you’ll wipe your macOS hard drive. HEED THY WARNING – YOU HAVE BEEN WARNED! This is why i said it’s easier if you have no external media.

  • Click “Flash!” Wait for the iso file to be copied to the USB Flash Drive. Go browse your favorite socnet as this will take some time or hop on your favorite learning network and catch up on those certificates/badges.

  • Once it is finished remove the USB Flash Drive from your Mac. This is imperative.
  • Now SHUTDOWN the mac and plug the Flashed USB drive into the mac.

  • Power up and hold the OPTION key while you boot.
  • Choose the EFI Boot option from the startup screen and press Return.
  • IMMEDIATELY press the “e” key.  i found you need to do this quickly otherwise the rig tries to boot.

  • Pressing the “e” key will enter you into “edit mode”; you will see a black and white screen with options to try Ubuntu and install Ubuntu. Don’t choose either yet; press “e” to edit the boot entry.
  • This step is critical, and the font may be really small, so take your time. Edit the line that begins with linux and place the word "nomodeset" after "quiet splash". The whole line should read: "linux /casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper quiet splash nomodeset --"

  • Now Press F10 on the mac.
  • Now its getting cool! Your mac and Ubuntu boots into trial mode!

(Note: at this point also go browse your favorite socnet as this will take some time or hop on your favorite learning network and catch up on those certificates/badges.)

  • Double-click the icon marked “Install Ubuntu”. (get ready! Here Kitty Kitty!)
  • Select your language of choice.
  • Select “Install this third-party software” option and click Continue. Once again important.
  • Select “Erase disk and install Ubuntu” and click Continue.
  • You will be prompted for geographic area and keyboard layout.
  • You will be prompted to enter the name and password you want to use (make it count!).
  • Click “Continue” and Linux will begin installing!
  • When the installation has finished, you can log in using the name and password you chose during installation!
  • At this point you are ready to go!  i recommend registering for an ubuntu “Live Update” account once it prompts you.
  • One side note: on the 20.x update there was an issue with the Broadcom wireless adapter crashing, which then reboots you without wireless. i am currently working through that and will get back to you on the fix!

Executing the command less /proc/cpuinfo will detail the individual cores. It looks like, as i said, the i7 Pentium series!
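If you want to sanity-check the core count without leaving Python, a quick sketch:

import os
print(os.cpu_count())  # logical CPUs visible to the OS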

Happy Penguin and Kitten Time!  Now you can customize your rig!

Screen shot of keybase running on my ubuntu mac rig!

And that is a wrap! As a matter of fact i ate my own dog food and wrote this blog on the “new” rig!

Until Then,

#iwishyouwater

@tctjr

Muzak to blog by: Dreamways Of The Mystic by Bobby Beausoliel

Book Review: The Cathedral and The Bazaar (Musings On Linux and Open Source By An Accidental Revolutionary)

“Joy, humor, and playfulness are indeed assets;” 

~ Eric S. Raymond

As of late, i’ve been asked by an extreme set of divergent individuals what does “Open Source Software” mean? 

That is a good question. While i understand the words, and words do have meanings, i am not sure it’s the words that matter here. Many people who ask me that question hear “open source” and hear or think “free”, which is not the case.

Also if you have been on linkedin at all you will see #Linux, #LinuxFoundation and #OpenSource tagged constantly in your feeds.

Which brings me to the current blog and book review.

The Cathedral and the Bazaar (CatB), as it is affectionately known in the industry, started out as, and still is, a manifesto, accessible via the world wide web. It was originally published in 1997 on the world wide wait and then in print form circa 1999. Then in 2001 came a revised edition with a foreword by Bob Young, the founding chairman and CEO of Red Hat.

Being that i prefer to use plain ole’ books, we are reviewing the physical revised and extended paperback edition circa 2001 in this blog. Of note for the picture: it has some wear and tear.

To start off as you will see from the cover there is a quote by Guy Kawasaki, Apple’s first Evangelist:

“The most important book about technology today, with implications that go far beyond programming.”

This is completely true.  In the same train of thought, it even goes into the aspects of propriety and courtesy within conflict environments and how such environments are of a “merit not inherit” world, and how to properly respond when you are in vehement disagreement.  

To relate it to the book review: What is a cathedral development versus a bazaar environment?

Cathedral is a tip of the fedora if you will to the authoritarian view of the world where everything is very structured and there are only a few at most who will approve moving the codebase forward.

Bazaar refers to the many.  The many coding and contributing in a swarm like fashion.  

In this book, closed source is described as a cathedral development model and open source as a bazaar development model. A cathedral is vertically and centrally controlled and planned. Process and governance rule the project – not coding. The cathedral is homeostatic: if you build or rebuild Basilica Sancti Petri within Roma, you will not be picking it up by flatbed truck and moving it to Firenze.

The foreword in the 2001 edition is written by Bob Young, co-founder and original CEO of Red Hat. He writes:

“There have always been two things that would be required if open-source software was to materially change the world; one was for open-source software to become widely used and the other was the benefits this software development model supplied to its users had to be communicated and understood.”

Users here are an interesting target. Users could be developers, and they could be end-users of warez. Nevertheless, i believe both conditions have been met accordingly.

i co-founded a machine learning and NLP services company in 2007, wherein i had the epiphany after my “second” read of CatB that the future is in fact open source. i put second in quotes as the first time i read it, back in 1998, it wasn’t really an in-depth read, nor had i fully internalized it, as i was working at Apple in the CPU software department on OS9/OSX while knowing full well that OSX was based on the Mach kernel. The Mach kernel is often mentioned as one of the earliest examples of a microkernel. However, not all versions of Mach are microkernels. Mach’s derivatives are the basis of the operating system kernel in GNU Hurd and of Apple’s XNU kernel used in macOS, iOS, iPadOS, tvOS, and watchOS.

That being said, after years of working with mainly closed-source systems, i re-read CatB in 2007. i literally had a deep epiphany that the future of all development would be open source distributed machine learning – everywhere.

Then i read it recently – deeply – a third time.  This time nearly every line in the book resonates.

The third time with almost anything seems to be the charm. This third time through, i realized not only is this a treatise for the open-source movement, it is a call to arms, if you will, for the entire developer community to behave appropriately with propriety and courtesy in a highly matrixed collaborative environment known as the bazaar.

The most obvious question is:  Why should you care?  i’m glad you asked.

The reason you care is that you are part of the information economy. The top market-cap companies are all information-theoretic, developer-first companies. This means that these companies build things so others can build things. Software is truly eating the world. Think in terms of the recent pandemic: work (code) is being created at an amazing rate due to the fact that the information work economy is distributed and essentially schedule-free. She who has distributed wins, and she who can code anytime wins. This also means that you are interested in building world-class software, and the building of this software is now a decentralized, peer-reviewed, transparent process.

The book is organized around Raymond’s various essays. It is important to note that just as software is an evolutionary process by definition, so are the essays in this book. They can also be found online. The original collection of essays dates back to 1992 on the internet: “A Brief History Of Hackerdom.”

The book is not a “how-to” cookbook but rather what i call a “why-to” map of the terrain. While you can learn how to hack and code, i believe it must be in your psyche. The book also uses the term “hacker” in a positive sense to mean one who creates software, versus one who cracks software or steals information.

While the history and the methodology are amazing to me, the cogent commentary on the reasoning behind why hackers go into open source shows motivations that vary as widely as ice cream flavors.

Raymond goes into the theory of incentives with respect to the instinctive wiring of human beings.

“The verdict of history seems to be free-market capitalism is the globally optimal way to cooperate for economic efficiency; perhaps in a similar way to cooperate for generating (and checking!) high-quality creative work.”

He categorizes command hierarchy, exchange economy, and gift culture to address these incentives.  

Command hierarchy:

Goods are allocated in a scarce economy model by one central authority.

Exchange Economy:

The allocation of scarce goods is accomplished in a decentralized manner allowing scale through trade and voluntary cooperation.

Gift Culture:

This is very different from the other two methods or cultures. Abundance makes command-and-control relationships difficult to sustain. In gift cultures, social status is determined not by what you control but by what you give away.

It is clear that if we define the open source hackerdom, it would be a gift culture. (It is beyond the current scope of this blog, but it would be interesting to do a neuroscience project analyzing open source versus closed source hackers’ brain chemistry as they work throughout the day.)

Given these categories, the essays then go on to define the written and, many times, unwritten rules (read: secrets) that operate within the open-source world via a reputation game. If you are getting the idea that it is tribal, you are correct. Interestingly enough, the open source world has in many cases very divergent views on all prickly things within the human condition, such as religion and politics, but one thing is a constant – ship high-quality code.

Without a doubt, the most glaring cogent commentary comes in a paragraph from the essay “The Magic Cauldron,” in the section entitled “Open Source And Strategic Business Risk.”

“Ultimately the reasons open source seems destined to become a widespread practice have more to do with customer demand and market pressures than with supply-efficiencies for vendors.”

And further:

“Put yourself for the moment in the position of a CTO at a Fortune 500 corporation contemplating a build or upgrade of your firm’s IT infrastructure.  Perhaps you need to choose a network operating system to be deployed enterprise-wide; perhaps your concerns involve 24/7 web service and e-commerce, perhaps your business depends on being able to field high-volume, high-reliability transaction databases.  Suppose you go the conventional closed-source route.  If you do, then you put your firm at the mercy of a supplier monopoly – because by definition there is only one place you can go to for support, bug fixes, and enhancements.  If the supplier doesn’t perform, you will have no effective recourse because you are effectively locked by your initial investment.”

FURTHER:

“The truth is this: when your key business processes are executed by opaque blocks of bits that you can’t even see inside (let alone modify) you have lost control of your business.”

“Contrast this with the open-source choice.  If you go this route, you have the source code, and no one can take that away from you. Instead of a supplier monopoly with a choke-hold on your business, you now have multiple service companies bidding for your business – and you not only get to play them against each other, but you also have the option of building your own captive support organization if that looks less expensive than contracting out.  The market works for you.”

“The logic is compelling; depending on closed-source code is an unacceptable strategic risk. So much so that I believe it will not be very long until closed-source single-vendor acquisitions, when there is an open source alternative available, will be viewed as a fiduciary irresponsibility, and rightly grounds for a share-holder lawsuit.”

THIS WAS WRITTEN IN 1997. LOOK AROUND THE WORLD WIDE WAIT NOW… WHAT DO YOU SEE?  

Open Source – full stop.

i will add that there was no technical explanation here, only business incentive and responsibility to the company you are building, rebuilding, or scaling. Further, this allows true software malleability and reach, which is the very reason for software.

i will also go out on a limb here and say that if you are a software corporation, one that creates software, you can play the monopoly and open-source models against each other within your corporation. Agility and speed to ship code are the only things that matter these days. Where is your GitHub? Or why is this not shipping TODAY?

This brings me to yet another amazingly prescient prediction in the book: Raymond says that applications are ultimately where we will land for monetary scale. Well yes, there is an app for that….

While i have never met Eric S. Raymond, he is a legend in the field. We have much to thank him for in the areas of software. If you have not read CatB and you work in the information sector, do yourself a favor: buy it today.

As a matter of fact here is the link: The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary

Muzak To Blog To:  “Morning Phase” by Beck 

Resources:

http://www.opensource.org

https://www.apache.org/foundation/