Pre-Computing War

Sora’s idea of a Neon Drone War With Audio Track

Sometimes it is the people no one can imagine anything of who do the things no one can imagine.

~ Alan Turing

First, i trust everyone is safe. Second, i had a creative spurt of late and wrote the following blog in one sitting after waking up thinking about the subject.

NOTE: This post is in no way indicative of any stance or provides any classified information whatsoever. It is only a thought piece concerning current technology-driven areas of concern.

There is a paradigm shift happening that will affect future generations and possibly the very essence of what it means to be human, and this comes from how technology is transforming war. We stand at a precipice, gazing into a future where the tools of war no longer resemble the clashing steel and human courage of centuries past. How important is conflict to humanity? What is the essence and desire for this conflict? The uptick in drone usage over the past few years has created an inflection point for what I am terming “abstraction levels for engagement.”

We continue to underestimate how important drones are going to be in warfare—a miscalculation that echoes through history’s long ledger of missed signals. I’m going to go out on a long limb here and say the future of most warfare will be enabled by, driven by, and spearheaded by drones. In fact, aside from other types of autonomous vehicles, there might not be anything else on the battlefield.

Here is a definition (of course, AI bot generated because really who reads Webster’s Dictionary nowadays? For the record, i have read Webster’s 3 times front to back):

“A military drone, also known as an unmanned aerial vehicle (UAV) or unmanned aircraft system (UAS), is an aircraft flown without a human pilot on board, controlled remotely or autonomously, and used for military missions like surveillance, reconnaissance, and potentially combat operations.”

This isn’t hyperbole; it’s the logical endpoint of a trajectory we’ve been on since the first unmanned systems took flight. And at the end of that trajectory lies something even more radical: autonomous bullets—not merely guided, but self-directed, a fusion of machine intelligence and lethal intent that could redefine conflict itself.

Guided autonomous bullets, often referred to as “smart bullets,” represent an advanced leap in projectile technology, blending precision guidance systems with small-caliber ammunition. These bullets are designed to adjust their flight path mid-air to hit a target with exceptional accuracy, even if the target is moving or environmental factors like wind interfere. The concept builds on precision guided munitions used in larger systems like missiles, but shrinks the technology to fit within the constraints of a bullet fired from a firearm.

One prominent example is the EXACTO (Extreme Accuracy Tasked Ordnance – don’t ya just love DOD acronyms?) program developed by DARPA (Defense Advanced Research Projects Agency). Initiated in 2008, EXACTO focuses on a .50-caliber bullet equipped with optical sensors in its nose and tiny fins for steering. The system works by tracking a laser-designated target, similar to laser-guided bombs or missiles, allowing the bullet to make real-time course corrections. Tests conducted as early as 2014 and 2015 demonstrated its ability to hit moving and evading targets, with footage showing the bullet sharply altering its trajectory mid-flight. DARPA has stated that this technology enables both expert marksmen and novices to achieve pinpoint accuracy at ranges traditional bullets can’t match, potentially extending effective sniper ranges up to 2,000 meters or more.

Another effort comes from Sandia National Laboratories, which in 2012 unveiled a prototype for a self-guided .50-caliber bullet. Unlike EXACTO’s reliance on an integrated guidance system, Sandia’s design uses an optical sensor to detect a laser beam illuminating the target, paired with electromagnetic actuators that adjust tiny fins to steer the projectile. This bullet, roughly four inches long, can update its position 30 times per second and has shown promise in hitting targets over a mile away (about 2,000 meters). Sandia’s tests included high-speed video capturing the bullet stabilizing in flight, improving accuracy at longer distances—a phenomenon they likened to the bullet “going to sleep” as it settles.
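
For intuition only, here is a toy, physics-light Python sketch of the kind of correction loop described above (this is not the EXACTO or Sandia control law, and every number is invented): a lateral error toward a laser-designated point gets nudged down 30 times per second.

def guided_flight(target_offset_m=2.0, updates_per_sec=30, flight_time_s=1.0, gain=0.3):
    """Toy 1-D correction loop: each update steers away a fraction of the
    remaining lateral error toward the laser spot. Purely illustrative numbers."""
    error = target_offset_m
    for _ in range(int(updates_per_sec * flight_time_s)):
        error -= gain * error   # actuators nudge the fins, shrinking the miss distance
    return error

print(f"residual miss distance: {guided_flight():.4f} m")  # ~0.0 m after 30 corrections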

Imagine a battlefield stripped of human presence, not out of cowardice but necessity. The skies are a deafening hum with swarms of drones (if a drone makes a sound and no one is there to hear it, does it make a sound?), each a “node” orchestrated via particle swarm AI models in a vast, decentralized network of artificial minds.

No generals barking orders, no soldiers trudging through mud, just silicon and steel executing a dance of destruction with precision beyond human capacity. The end state of drones isn’t just remote control or pre-programmed strikes; it’s autonomy so complete that the machines themselves decide who lives and who dies – no human in the loop. Self-directed projectiles, bullets with brains roaming the theater of war, seeking targets based on algorithms fed by real-time data streams. The vision feels like science fiction, yet the pieces already fall into place.

Generals gathered in their masses
just like witches at black masses
evil minds that plot destruction
sorcerers of death’s construction
in the fields the bodies burning
as the war machine keeps turning
death and hatred to mankind
poisoning their brainwashed minds, oh lord yeah!

~ War Pigs, Black Sabbath 1970

This shift isn’t merely tactical; it’s existential. Warfare has always been a contest of wills, a brutal arithmetic of resources and resolve. But what happens when we can compute the outcome before the first shot is fired? Drones, paired with advanced AI, offer the tantalizing possibility of simulating conflicts down to the last variable: terrain, weather, enemy morale, and supply lines, all processed in milliseconds by systems that learn as they go. The autonomous bullet isn’t just a weapon; it’s a data point in a larger Markovian equation, one that could predict victory or defeat with chilling accuracy.

We’re not far from a world where wars are fought first in the cloud, their outcomes modeled and refined, before a single drone lifts off. If the future of warfare is one of drone swarms of autonomous systems culminating in self-directed bullets, then pre-computing its outcomes becomes not just feasible but imperative. The battlefield of tomorrow isn’t a chaotic melee; it’s a high-stakes game, a multidimensional orchestrated chessboard where game theory, geopolitics, and macroeconomics converge to predict the endgame before the first move. To compute warfare in this way requires us to distill its essence into variables, probabilities, and incentives, a task as daunting as it is inevitable. Yet again, there exists a terminology for this orchestrated chess game. Autonomous asymmetric mosaic warfighting, a concept explored by DARPA, envisions turning complexity into an asymmetric advantage by using networked, smaller, and less complex systems to overwhelm an adversary with a multitude of capabilities.

“A Nash equilibrium is a set of strategies that players act out, with the property that no player benefits from changing their strategy.”

~ Dr. John Nash

Computational Game Theory: The Logic of Lethality

At its core, warfare is a strategic interaction, a contest where the players (nations, factions, or even rogue actors) vie for dominance under the constraints of resources and information. Game theory offers the scaffolding to model this. Imagine a scenario where drones dominate: each side deploys autonomous swarms, programmed with decision trees that weigh attack, retreat, or feint based on real-time data. The payoff matrix isn’t just about territory or casualties; it’s about disruption, deterrence, and psychological impact. A swarm’s choice to strike a supply line rather than a command center could shift an enemy’s strategy, forcing a cascade of recalculations.

Now, introduce autonomous bullets, self-directed agents within the swarm. Each bullet becomes a player in a sub-game, optimizing its path to maximize damage while minimizing exposure. The challenge lies in anticipating the opponent’s moves: if both sides rely on AI-driven systems, the game becomes a duel of algorithms, each trying to out-predict the other. Zero-sum models give way to dynamic equilibria, where outcomes hinge on how well each side’s AI can bluff, adapt, or exploit flaws in the other’s logic. Pre-computing this requires vast datasets: historical conflicts, behavioral patterns, and even cultural tendencies, fed into simulations that run millions of iterations, spitting out probabilities of victory, stalemate, or collapse.
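
As a toy illustration of that payoff-matrix framing, here is a minimal Python sketch (every payoff number is invented) that checks a 2x2 engagement for pure-strategy Nash equilibria by brute-force best-response. When none exists, as below, you are in mixed-strategy territory, which is exactly where the dueling-algorithms picture lives.

# Toy 2x2 payoff matrix for a drone-swarm engagement (all numbers invented).
# Entry (row_move, col_move) holds (row_payoff, col_payoff).
payoffs = {
    ("strike_supply", "defend_supply"):   (-1,  1),
    ("strike_supply", "defend_command"):  ( 3, -3),
    ("strike_command", "defend_supply"):  ( 2, -2),
    ("strike_command", "defend_command"): (-2,  2),
}

row_moves = ["strike_supply", "strike_command"]
col_moves = ["defend_supply", "defend_command"]

def is_nash(r, c):
    """Pure-strategy Nash check: neither player gains by unilaterally deviating."""
    r_pay, c_pay = payoffs[(r, c)]
    best_row = all(payoffs[(alt, c)][0] <= r_pay for alt in row_moves)
    best_col = all(payoffs[(r, alt)][1] <= c_pay for alt in col_moves)
    return best_row and best_col

equilibria = [(r, c) for r in row_moves for c in col_moves if is_nash(r, c)]
print(equilibria or "no pure-strategy equilibrium; mixed strategies required")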

Geopolitics: The Board Beyond the Battlefield

Warfare doesn’t exist in a vacuum; the shifting tectonic plates of geopolitics shape it. To pre-compute outcomes, we must map the global chessboard—alliances, rivalries, and spheres of influence. Drones level the playing field, but their deployment reflects deeper asymmetries. A superpower with advanced AI and manufacturing might flood the skies with swarms, while a smaller state leans on guerrilla tactics, using cheap, hacked drones to harass and destabilize. The game-theoretic model expands: players aren’t just combatants but also suppliers, proxies, and neutral powers with their own agendas.

Take energy as a (the main) variable: drones require batteries, rare earths, and infrastructure. A nation controlling lithium mines or chip fabs holds leverage, tipping the simulation’s odds. Sanctions, trade routes, and cyber vulnerabilities—like a rival hacking your drone fleet’s firmware—become inputs in the equation. Geopolitical stability itself becomes a factor: if a war’s outcome hinges on a fragile ally, the model must account for the likelihood of defection or collapse. Pre-computing warfare here means forecasting not just the battle, but the ripple effects—will a decisive drone strike trigger a refugee crisis, a shift in NATO’s posture, or a scramble for Arctic resources? The algorithm must think in networks, not lines.
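
To make “think in networks, not lines” concrete, here is a minimal sketch using networkx (the same library as the script at the end of this post) on an entirely invented supply graph; betweenness centrality is one crude way to surface choke points.

import networkx as nx

# Invented example: resource flows and dependencies as a directed graph.
G = nx.DiGraph()
G.add_edges_from([
    ("LithiumExporter", "ChipFab"), ("RareEarthExporter", "ChipFab"),
    ("ChipFab", "DronePowerA"), ("ChipFab", "DronePowerB"),
    ("DronePowerA", "ProxyState"), ("DronePowerB", "ProxyState"),
])

# Betweenness centrality flags nodes sitting on many shortest supply paths;
# high scores are candidate choke points (sanctions, cyber, or kinetic targets).
choke_points = nx.betweenness_centrality(G)
for node, score in sorted(choke_points.items(), key=lambda kv: -kv[1]):
    print(f"{node:>18}: {score:.2f}")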

I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.

~ Claude Shannon

Macroeconomics: The Sinews of Silicon War

No war is won without money, and drones don’t change that; they just rewrite the budget. Pre-computing conflict demands a macroeconomic lens: how much does it cost to field a swarm versus defend against one? The economics of autonomous warfare favor scale: mass-produced drones and bullets could outpace legacy systems like jets or tanks in cost-efficiency. A simulation might pit a 10 billion dollar defense budget against a 1 billion dollar insurgent force, factoring in production rates, maintenance, and the price of countermeasures like EMPs or jamming tech.

But it’s not just about direct costs. Markets react to war’s shadow: oil spikes, currencies wobble, tech stocks soar or crash based on who controls the drone supply chain (it is all about that theta/beta folks). A protracted conflict could drain a nation’s reserves, while a swift, computed victory might bolster its credit rating. The model must integrate these feedback loops: if a drone war craters a rival’s economy, their ability to replenish dwindles, tilting the odds. And what of the peacetime economy? States that master autonomous tech could dominate postwar reconstruction, turning military R&D into a geopolitical multiplier. Pre-computing this requires economic forecasts layered atop the game-theoretic core—GDP growth, inflation, and consumer confidence as resilience proxies.

The Supreme Lord said: I am mighty Time, the source of destruction that comes forth to annihilate the worlds. Even without your participation, the warriors arrayed in the opposing army shall cease to exist.

~ Bhagavad Gita 11:32

The Synthesis: Simulating the Unthinkable

To tie it all together, picture a supercomputer or a distributed AI network running a grand simulation. It ingests game-theoretic strategies (strike patterns, bluffing probabilities), geopolitical alignments (alliances, resource choke points), and macroeconomic trends (war budgets, trade disruptions). Drones and their autonomous bullets are the pawns, but the players are human decision-makers, constrained by politics and profit. The system runs countless scenarios: a drone swarm cripples a port, triggering a naval response, spiking oil prices, and collapsing a coalition. Another sees a small state’s cheap drones hold off a giant, forcing a negotiated peace.

The output isn’t a single prediction, but a spectrum: 75% chance of victory if X holds, 40% if Y defects, 10% if the economy tanks. Commanders could tweak inputs (more drones, better AI, a preemptive cyberstrike) and watch the probabilities shift. It’s not infallible; black swans like a rogue AI bug or a sudden uprising defy the math. But it’s close enough to turn war into a science, reducing the fog Clausewitz warned of to a manageable haze [1].
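
A minimal Monte Carlo sketch of that spectrum idea, with completely made-up probabilities, just to show the mechanics of turning scenario assumptions into an outcome distribution:

import random

def simulate_campaign(ally_holds=True, n_runs=100_000, seed=42):
    """Toy Monte Carlo: every probability below is invented for illustration."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_runs):
        economy_tanks = rng.random() < 0.10            # 10% chance the economy craters
        ally_defects = (not ally_holds) or rng.random() < 0.05
        swarm_success = rng.random() < 0.80            # baseline strike effectiveness
        p_win = 0.75
        if ally_defects:
            p_win -= 0.35
        if economy_tanks:
            p_win -= 0.30
        if not swarm_success:
            p_win -= 0.20
        wins += rng.random() < max(p_win, 0.0)
    return wins / n_runs

print(f"P(victory | ally holds)   ~ {simulate_campaign(True):.2f}")
print(f"P(victory | ally defects) ~ {simulate_campaign(False):.2f}")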

The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.

~ Ecclesiastes 1:9, KJV

Yet, this raises a haunting question: If we can compute warfare’s endgame, do we lose something essential in the process?

The chaos of flawed, emotional, unpredictable human decision-making has long been the wildcard that defies calculation. Napoleon’s audacity, the Blitz’s resilience, and the guerrilla fighters’ improvisation are not easily reduced to code. Drones and their self-directed progeny promise efficiency, but they also threaten to strip war of its human texture, turning it into a sterile exercise in optimization. And what of accountability? When a bullet chooses its target, who bears the moral weight—the coder, the commander, or the machine itself?

The implications stretch beyond the battlefield. If drones dominate warfare, the barriers to entry collapse. No longer will nations need vast armies or industrial might; a few clever engineers and a swarm of cheap, autonomous systems could level the playing field. We’ve seen glimpses of this in Ukraine, where off-the-shelf drones have humbled tanks and disrupted supply lines. Scale that up, and the future isn’t just drones; it’s a proliferation of power, a democratization of destruction. Autonomous bullets could become the ultimate equalizer or the ultimate chaos agent, depending on who wields them.

Fighting for peace is like screwing for virginity.

~ George Carlin

A Moment of Clarity

i wonder: are we ready to surrender the reins? The dream of computing warfare’s outcome is seductive, and humans are carnal creatures; we lust for other humans and things. It promises to minimize loss, to replace guesswork with certainty, but it also risks turning us into spectators of our own fate, watching as machines play out scenarios we’ve set in motion (that which we lust after).

The end state of drones may indeed be a battlefield of self-directed systems, but the end state of humanity in that equation remains unclear. Perhaps the true revolution isn’t in the technology but in how we grapple with a world where war becomes a problem to be solved rather than a story to be lived.

We underestimate drones at our peril. They’re not just tools; they’re harbingers of a paradigm shift. The future is coming, and it’s buzzing overhead—relentless, autonomous, and utterly indifferent to our nostalgia for the wars of old.

Pre-computing warfare might make us too confident. Leaders who trust the model might rush to conflict, assuming the odds are locked. But humans aren’t algorithms; we rebel, err, and surprise. And what of ethics? A simulation that optimizes for victory might greenlight drone strikes on civilians to break morale, justified by a percentage point. The autonomous bullet doesn’t care; it’s our job to decide if the computation is worth the soul it costs.

In this drone-driven future, pre-computing warfare isn’t just possible—it’s already beginning. Ukraine’s drone labs, China’s swarm tests, the Pentagon’s AI budgets—they’re all steps toward a world where conflict is a solvable problem. It has been said that fighting and sex are the two bookends but one and the same. But as we build the machine to predict the fight, we must ask: are we mastering war, or merely handing it a new master for something else entirely?

Music To Blog By: Project-X “Closing Down The Systems.” Actually, I wouldn’t listen to this if i were you, unless you want to have nightmares. Fearless (MZ412 Remix) does sound like computational warfare.

Until then,

#iwishyouwater <- recent raw Pipe footage of folks that got the memo.

Ted ℂ. Tanner Jr. (@tctjr) / X

References:

[1] “On War” by Carl von Clausewitz.  He called it “The Fog of War”: Clausewitz stressed the importance of understanding the unpredictable nature of war, noting that the “fog of war” (i.e., incomplete, dubious, and often erroneous information and great fear, doubt, and excitement) can lead to rapid decisions by alert commanders. 

[2] Thanks to Jay Sales for being the catalyst for this blog. If you do not know who he is look him up here. Jay Sales. One of the best engineering executives and dear friend.

NVIDIA GTC 2025: The Time Has Come The Valley Said

OpenAI’s idea of The Valley – It’s Been A Minute

Embrace the unknown and embrace change. That’s where true breakthroughs happen.

~Jensen Huang

First i trust everyone is safe. Second, i usually do not write about discrete events or “work” related items, but this is an exception. March 17-21, 2025, i and some others attended NVIDIA GTC 2025. It warranted a long writeup. Be Forewarned: tl;dr. Read on Dear Reader. Hope you enjoy this one as it is a sea change in computing and a tectonic ocean shift in technology.

NVIDIA GTC 2025: AI’s Raw Hot Buttered Future

March 17-21, 2025, San Jose became geek central for NVIDIA’s GTC—aka the “Super Bowl of AI.” Hybrid setup, in-person or virtual, didn’t matter; thousands of devs, researchers, and suits swarmed to see what’s cooking in AI, GPUs, and robotics. Jensen Huang dropped bombs in his keynote, 1,000+ sessions drilled into the guts of it, and big players flexed their wares. Here’s the raw dog buttered scoop—and why you should care if you sling code or ship product.

‘The time has come,’ the Walrus said,

      To talk of many things:

Of shoes — and ships — and sealing-wax —

      Of cabbages — and kings —

And why the sea is boiling hot —

      And whether pigs have wings.’


~ The Walrus and The Carpenter

All The Libraries

Jensen’s Keynote: AI’s Next Gear, No Hype

March 18, 2025, SAP Center and the McEnery Civic Center: over 28,000 geeks packed both halls and spilled out into the streets. Jensen Huang, NVIDIA’s leather-jacketed maestro, hit the stage and didn’t waste breath. 2.5 hours, no notes, starting at the top of the stack with all the libraries NVIDIA has “CUDA-ized” and going all the way down to the photonic ethernet cables. No corporate fluff, just tech meat for the developer carnivore. His pitch: AI’s not just chatbots anymore; it’s “agentic,” thinking and moving in the real world forward at the speed of thought. Backed up with specifications, cycles, cost and even calling out library function calls.

Here’s what he unleashed:

  • Blackwell Ultra (B300): Mid-cycle beast, 288GB memory, out H2 2025. Training LLMs that’d choke lesser rigs—AMD’s sniffing, but NVIDIA’s still king.
  • Rubin + Vera Rubin: GPU + CPU superchip combo, late 2026. Named for the galaxy guru, it’s Grace Blackwell’s heir. Full-stack domination vibes.
  • Physical AI & GR00T N1: Robots that do real things. GR00T’s a humanoid platform tying training together, synced with Omniverse and Cosmos for digital twin sims. Robotics just got real even surreal.
  • NVIDIA Dynamo: “AI Factory OS.” Data centers as reasoning engines, not just compute mules. Deploy AI without the usual ops nightmare. <This> will change it all.
  • Quantum Day: IonQ, D-Wave, Rigetti execs talking quantum. It’s distant, but NVIDIA’s planting CUDA flags for the long game.

Jensen’s big claim: AI needs 100x more computing than we thought. That’s not a flex; it’s a warning. NVIDIA’s rigging the pipes to pump it.

He said thank you to the developers more than 5 times, mentioned open source at least 4 times and said ecosystem at least 5 times. It was possibly the best keynote i have ever seen, and i have been to and seen some of the best. Zuckerberg was right – if you do not have a technical CEO and a technical board, you are not a technical company at heart.

Jensen with Disney Friend

What It Means: Unfiltered and Untrained Takeaways

As i said GTC 2025 wasn’t a bloviated sales conference taking over a city; it was the tech roadmap, raw and real:

  • AI’s Next Frontier: The shift to agentic AI and physical AI (e.g., robotics) suggests that AI is moving beyond chatbots and image generation into real-world problem-solving. NVIDIA’s hardware and software innovations—like Blackwell Ultra and Dynamo—position it as the enabler of this transition.
  • Compute Power Race: Huang’s claim of a 100x compute demand surge underscores the urgency for scalable, energy-efficient solutions. NVIDIA’s full-stack approach (hardware, software, networking) gives it an edge, though competition from AMD and custom chipmakers looms.
  • Robotics Revolution: With GR00T and related platforms, NVIDIA is betting big on robotics as a 50 trillion dollar opportunity. This could transform industries like manufacturing and healthcare, making 2025 a pivotal year for robotic adoption.
  • Ecosystem Dominance: NVIDIA’s partnerships with tech giants and startups alike reinforce its role as the linchpin of the AI ecosystem. Its 82% GPU market share may face pressure, but its software (e.g., CUDA, NIM) and services (e.g., DGX Cloud) create a formidable moat.
  • Long-Term Vision: The focus on quantum computing and the next-next-gen architectures (like Feynman, slated for 2028) shows NVIDIA isn’t resting on its laurels. It’s preparing for a future where AI and quantum tech converge.

Sessions: Ship Code, Not Slides

Over 1,000 sessions at the McEnery Convention Center. No hand-holding, pure tech fuel for devs and decision-makers. Standouts:

  • Generative AI & MLOps: Scaling LLMs without losing your mind (or someone else’s). NVIDIA’s inference runtime and open models cut the fat—production-ready, not science-fair thoughting.
  • Robotics: Isaac and Cosmos hands-on. Simulate, deploy, done. Manufacturing and healthcare devs, this is your cue.
  • Data Centers: DGX Station’s 20 petaflops in a box. Next-gen networking talks had the ops crowd drooling.
  • Graphics: RTX for 2D/3D and AR/VR. Filmmakers and game devs got a speed boost—less render hell.
  • Quantum: Day-long deep dive. CUDA’s quantum bridge is speculative, but the math’s stacking up.
  • Digital Twins and Simulation: Omniverse™ provides advanced simulation capabilities for adding true-to-reality physics to scene compositions. Build on models from basic rigid-body simulation to destruction, fluid-dynamics-based fire simulation, and physics-based scene authoring.

Near Real-Time Digital Twin Rendering Of A Ship

The DGX Spark Computer

i personally thought this deserved its own call-out. The announcement of the DGX Spark Computer. It is a compact AI supercomputer. Let us unpack its specs and capabilities for training large language models (LLMs). This little beast is designed to bring serious AI firepower to your desk, so here’s the rundown based on what NVIDIA has shared at the conference.

The DGX Spark is powered by the NVIDIA GB10 Grace Blackwell Superchip, a tightly integrated combo of CPU and GPU muscle. Here’s what it’s packing:

  • GPU: Blackwell GPU with 5th-generation Tensor Cores, supporting FP4 precision (4-bit floating-point). NVIDIA claims it delivers up to 1,000 AI TOPS (trillions of operations per second) at FP4—insane compute for a desktop box.
  • CPU: 20 Armv9 cores (10 Cortex-X925 + 10 Cortex-A725), connected to the GPU via NVIDIA’s NVLink-C2C interconnect. This gives you 5x the bandwidth of PCIe Gen 5, keeping data flowing fast between CPU and GPU.
  • Memory: 128 GB of unified LPDDR5x with a 256-bit bus, clocking in at 273 GB/s bandwidth. This unified memory pool is shared between CPU and GPU, critical for handling big AI workloads without choking on data transfers.
  • Storage: Options for 1 TB or 4 TB NVMe SSD—plenty of room for datasets, models, and checkpoints.
  • Networking: NVIDIA ConnectX-7 with 200 Gb/s RDMA (scalable to 400 Gb/s when pairing two units), plus Wi-Fi 7 and 10GbE for wired connections. You can cluster two Sparks to double the power.
  • I/O: Four USB4 ports (40 Gbps), HDMI 2.1a, Bluetooth 5.3—modern connectivity for hooking up peripherals or displays.
  • OS: Runs NVIDIA DGX OS, a custom Ubuntu Linux build loaded with NVIDIA’s AI software stack (CUDA, NIM microservices, frameworks, and pre-trained models).
  • Power: Sips just 170W from a standard wall socket—efficient for its punch.
  • Size: Tiny at 150 mm x 150 mm x 50.5 mm (about 1.1 liters) and 1.2 kg—it’s palm-sized but packs a wallop.

The DGX Spark Computer

This thing’s a sleek, power-efficient monster styled like a mini NVIDIA DGX-1, aimed at developers, researchers, and data scientists who want data-center-grade AI on their desks – in gold metal flake!

Now, the big question: how beefy an LLM can the DGX Spark train? NVIDIA’s marketing pegs it at up to 200 billion parameters for local prototyping, fine-tuning, and inference on a single unit. Pair two Sparks via ConnectX-7, and you can push that to 405 billion parameters. But let’s break this down practically—training capacity depends on what you’re doing (training from scratch vs. fine-tuning) and how you manage memory.

  • Fine-Tuning: NVIDIA highlights fine-tuning models up to 70 billion parameters as a sweet spot for a single Spark. With 128 GB of unified memory, you’re looking at enough space to load a 70B model in FP16 (16-bit floating-point), which takes about 140 GB uncompressed. Techniques like quantization (e.g., 8-bit or 4-bit) or offloading to SSD can stretch this further, but 70B is the comfy limit for active fine-tuning without heroic optimization.
  • Training from Scratch: Full training (not just fine-tuning) is trickier. A 200B-parameter model in FP16 needs around 400 GB of memory just for weights, ignoring gradients and optimizer states, which can triple that to 1.2 TB. The Spark’s 128 GB can’t handle that alone without heavy sharding or clustering. NVIDIA’s 200B claim likely assumes inference or light fine-tuning with aggressive quantization (e.g., FP4 via Tensor Cores), not full training. For two units (256 GB total), you might train a 200B model with extreme optimization—think model parallelism and offloading—but it’s not practical for most users.
  • Real-World Limit: For full training on one Spark, you’re realistically capped at 20-30 billion parameters in FP16 with standard methods (weights + gradients + Adam optimizer fit in 128 GB). Push to 70B with quantization or two-unit clustering. Beyond that, 200B+ is more about inference or fine-tuning pre-trained models, not training from zero. The quick arithmetic sketch below shows where these numbers come from.
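
For the curious, the memory arithmetic behind those limits is easy to sanity-check. A rough rule-of-thumb sketch (2 bytes per parameter for FP16 weights, and roughly 3x that once gradients and optimizer state are included):

def training_memory_gb(params_billions, bytes_per_param=2, overhead_factor=3):
    """Rule-of-thumb estimate: FP16 weights at 2 bytes/param,
    ~3x total once gradients and optimizer state are included."""
    weights_gb = params_billions * 1e9 * bytes_per_param / 1e9
    return weights_gb, weights_gb * overhead_factor

for size in (20, 70, 200):
    weights, total = training_memory_gb(size)
    print(f"{size:>3}B params: ~{weights:,.0f} GB weights, ~{total:,.0f} GB with grads + optimizer")

# 200B -> ~400 GB weights, ~1,200 GB total: why full training needs sharding,
# quantization, or clustering beyond a single 128 GB Spark.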

Not bad for 4,000 dollars. Think of all the things you could do… All of the companies you could build… Now onto the sessions.

Speakers and Sessions

There were 2,000+ speakers, some Nobel-tier, and they delivered. Straight no chaser – code, tools, and war stories. Hardcore programming sessions on CUDA, NVIDIA’s parallel computing platform, and tools like Dynamo (the new AI Factory OS). Think line-by-line breakdowns of optimizing AI models or squeezing performance from Blackwell Ultra GPUs. Once again, slideware jockeys need not apply.

The speaker list was a who’s-who of brainpower and hustle. Nobel laureates like Frances Arnold brought scientific heft—imagine her linking GPU-accelerated protein folding to drug discovery. Meanwhile, Yann LeCun and Noam Brown (OpenAI) tackled AI’s bleeding edge, like agentic reasoning or game theory hacks. Then you had practitioners: Joe Park (Yum! Brands) on AI for fast food, RJ Scaringe (Rivian) on autonomous driving, grounding it in real-world stakes.

Literally, a who’s-who of the AI developer world baring souls (if they have one) and scars from the war stories, and they do have them.

There was one talk in particular that was probably one of the best discussions i have seen in the past decade. SoFar Ocean Technologies is partnering with MITRE and NVIDIA to power the future of ocean AI!

MITRE announced a joint effort to build an AI-powered ocean digital twin fueled by real-time data from the global Spotter network. Researchers, government, and industry will use the digital twin to simulate and better understand the marine environments in which they operate.

As AI supercharges weather prediction, even the most advanced models will need more ocean data to be effective. Sofar provides these essential observations at scale. To power the digital twin, SoFar will deliver data from their global network of real-time ocean sensors and collaborate with MITRE to rapidly expand the adoption of the Bristlemouth open connectivity standard. Live data will feed into the NVIDIA Omniverse and open up new pathways for AI-powered ocean understanding.

BristleMouth Open Source Orchestration UxV Platform

The systems of systems and ecosystem reach are spectacular. The effort is monumental, and only through software can this scale be achievable. Of primary interest to this ecosystem effort, they have partnered with Ocean Exploration Trust and the Nautilus Exploration Program to seek out new discoveries in geology, biology, and archaeology while conducting scientific exploration of the seafloor. The expeditions launch aboard Exploration Vessel Nautilus — a 68-meter research ship equipped with live-streaming underwater vehicles for scientists, students, and the public to explore the deep sea from anywhere in the world. They embed educators and interns in the expeditions who share their hands-on experiences via ship-to-shore connections with the next generation. Even while they are not at sea, explorers can dive into Nautilus Live to learn more about the expeditions, find educational resources, and marvel at new encounters.

“The most powerful technologies are the ones that empower others.”

~Jensen Huang

The Nautilus Live Mapping Software

At the end of the talk, I asked a question on the implementation of AI Orchestration for sensors underwater as well as personally thanked Dr Robert Ballard, who was in the audience, for his amazing work. Best known for his 1985 discovery of the RMS Titanic, Dr. Robert Ballard has succeeded in tracking down numerous other significant shipwrecks, including the German battleship Bismarck, the lost fleet of Guadalcanal, the U.S. aircraft carrier Yorktown (sunk in the World War II Battle of Midway), and John F. Kennedy’s boat, PT-109.

Again Just amazing. Check out the work here: SoFar Ocean.

What Was What: Big Dogs and Upstarts

The Exhibit hall was a technology zoo and smorgasbord—400+ OGs and players showing NVIDIA’s reach. (An Introvert’s Worst Nightmare.) Who showed up:

  • Tech Giants: Adobe, Amazon, Microsoft, Google, Oracle. AWS and Azure lean hard on NVIDIA GPUs—cloud AI’s backbone.
  • AI Hotshots: OpenAI and DeepSeek. ChatGPT’s parents still ride NVIDIA silicon; efficiency debates be damned.
  • Robots & Cars: Tesla hinting at autonomy juice, Delta poking at aviation AI. NVIDIA’s tentacles stretch wide.
  • Quantum Crew: Alice & Bob, D-Wave, IonQ, Rigetti. Quantum’s sci-fi, but they’re here.
  • Hardware: Dell, Supermicro, Cisco with GPU-stuffed rigs. Ecosystem’s locked in.
  • AI Platforms: Edge Impulse, ClearML, Haystack – if you need training and ML deployment, they had it.

Inception Program: Fueling the Next Wave

Now, the Inception program—NVIDIA’s startup accelerator—is the unsung hero of GTC. With over 22,000 members worldwide, it’s a breeding ground for AI innovation, and GTC 2025 was their stage. Nearly 250 Inception startups showed up, from healthcare disruptors to robotics trailblazers like Stelia (shoutout to their “petabit-scale data mobility” talk). These aren’t pie-in-the-sky outfits—100+ had speaking slots, and their demos at the Inception Pavilion were hands-on proof of GPU-powered breakthroughs.

The program’s a sweet deal: free to join, no equity grab, just pure support—100K in DGX Cloud credits, Deep Learning Institute training, VC intros via the VC Alliance. They even had a talk on REVERSE VC pitches. What the VCs in Silicon Valley are looking for at the moment, and they were funding companies at the conference! It’s NVIDIA saying, “We’ll juice your tech, you change the game.” At GTC, you saw the payoff—startups like DeepSeek and Baseten flexing optimized models or enterprise tools, all built on NVIDIA’s stack. Critics might say it locks startups into NVIDIA’s ecosystem, but with nearly 300K in credits and discounts on tap, it’s hard to argue against the boost. The war stories from these founders—like scaling AI infra without frying a data center—were gold for any dev in the trenches.

GTC 2025 and Inception are two sides of the same coin. GTC’s the megaphone—blasting NVIDIA’s vision (and hardware) to the world—while Inception’s the incubator, quietly powering the startups that’ll flesh out that vision. Huang’s keynote hyped a token-driven AI economy, and Inception’s crew is already living it, churning out reasoning models and robotics on NVIDIA’s gear. It’s a symbiotic flex: GTC shows the “what,” Inception delivers the “how.”

We’re here to put a dent in the universe. Otherwise, why else even be here? 

~ Steve Jobs

Michael Dell and Your Humble Narrator at the Dell Booth

I did want to call out one announcement that I think has been a long time in the works in the industry, and I have been a very strong evangelist for, and that is a distributed inference OS.

Dynamo: The AI Factory OS That’s Too Cool to Gatekeep

NVIDIA unleashed Dynamo—think of it as the operating system for tomorrow’s AI factories. Huang’s pitch? Data centers aren’t just server farms anymore; they’re churning out intelligence like Willy Wonka’s chocolate factory but with fewer Oompa Loompas (cue the imagination song). Dynamo’s got a slick trick: it’s built from the ground up to manage the insane compute loads of modern AI, whether you’re reasoning, inferring, or just flexing your GPU muscle. And here’s the kicker—NVIDIA’s tossing the core stack into the open-source wild via GitHub. Yep, you heard that right: free for non-commercial use under an Apache 2.0 license. It’s like they’re saying, “Go build your own AI empire—just don’t sue us!” For the enterprise crowd, there’s a beefier paid version with extra bells and whistles (of course). Open-source plus premium? Whoever heard of such a thing! That’s a play straight out of the Silicon Valley handbook.

Dynamo High-Level Architecture


Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLang or others) and captures LLM-specific capabilities such as:

  • Disaggregated prefill & decode inference – Maximizes GPU throughput and facilitates trade off between throughput and latency.
  • Dynamic GPU scheduling – Optimizes performance based on fluctuating demand
  • LLM-aware request routing – Eliminates unnecessary KV cache re-computation
  • Accelerated data transfer – Reduces inference response time using NIXL.
  • KV cache offloading – Leverages multiple memory hierarchies for higher system throughput

Dynamo enables dynamic worker scaling, responding to real-time deployment signals. These signals, captured and communicated through an event plane, empower the Planner to make intelligent, zero-downtime adjustments. For instance, if an increase in requests with long input sequences is detected, the Planner automatically scales up prefill workers to meet the heightened demand.
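
Purely as an illustration of the shape of such a policy (this is not Dynamo’s actual API or code, just a hypothetical toy in Python), a scaling rule keyed off observed input-sequence length and queue depth might look like:

from dataclasses import dataclass

@dataclass
class Snapshot:
    """Hypothetical metrics a planner might observe from an event plane."""
    mean_input_len: int   # tokens per request, averaged over a window
    queue_depth: int      # requests waiting for prefill

def plan_prefill_workers(current: int, snap: Snapshot,
                         long_seq_threshold=4096, max_workers=16) -> int:
    """Toy policy: scale prefill workers up when long-sequence traffic backs up,
    scale down when the queue drains. Not NVIDIA Dynamo code."""
    if snap.mean_input_len > long_seq_threshold and snap.queue_depth > 2 * current:
        return min(current * 2, max_workers)
    if snap.queue_depth < current // 2:
        return max(current - 1, 1)
    return current

print(plan_prefill_workers(4, Snapshot(mean_input_len=8192, queue_depth=32)))  # -> 8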

Beyond efficient event communication, data transfer across multi-node deployments is crucial at scale. To address this, Dynamo utilizes NIXL, a technology designed to expedite transfers through reduced synchronization and intelligent batching. This acceleration is particularly vital for disaggregated serving, ensuring minimal latency when prefill workers pass KV cache data to decode workers.

Dynamo prioritizes seamless integration. Its modular design allows it to work harmoniously with your existing infrastructure and preferred open-source components. To achieve optimal performance and extensibility, Dynamo leverages the strengths of both Rust and Python. Critical performance-sensitive modules are built with Rust for speed, memory safety, and robust concurrency. Meanwhile, Python is employed for its flexibility, enabling rapid prototyping and effortless customization.

Oh yeah, and for all the naysayers over the years, it uses NATS.io as the messaging bus. Here is the GitHub. Get your fork on, but please contribute back – ya hear?

Tokenized Reasoning Economy

Along with this Dynamo announcement, NVIDIA has created an economy around tokenized reasoning models, in a monetary sense. This is huge. Let me break this down.

Now, why call this an economy? In a monetary sense, NVIDIA’s creating a system where compute power (delivered via its GPUs) and tokens (the output of reasoning models) act like resources and currency in a marketplace. Here’s how it works:

  • Compute as the Factory: NVIDIA’s GPUs—think Blackwell Ultra or Hopper—are the engines that power these reasoning models. The more compute you throw at a problem (more GPUs, more time), the more tokens you can generate, and the smarter the AI’s answers get. It’s like a factory producing goods, but the goods here are tokens representing intelligence.
  • Tokens as Currency: In the AI world, tokens aren’t just data—they’re value. Companies running AI services (like chatbots or analytics tools) often charge based on tokens processed—say, (X) dollars per million tokens. NVIDIA’s optimizing this with tools like Dynamo, which boosts token output while cutting costs, essentially making the “token economy” more efficient. More tokens per dollar = more profit for businesses using NVIDIA’s tech. Tokens Per Second will be the new metric.
  • Supply and Demand: Demand for reasoning AI is skyrocketing—enterprises, developers, and even robotics firms want smarter systems. NVIDIA supplies the hardware (GPUs) and software (like Dynamo and NIM microservices) to meet that demand. The more efficient their tech, the more customers flock to them, driving sales of GPUs and services like DGX Cloud.
  • Revenue Flywheel: Here’s the monetary kicker—NVIDIA’s raking in billions ($39.3B in a single quarter, per GTC 2025 buzz) because every industry needs this tech. They sell GPUs to data centers, cloud providers, and enterprises, who then use them to generate tokens and charge end users. NVIDIA reinvests that cash into better chips and software, keeping the cycle spinning.

NVIDIA’s “tokenized reasoning model economy” is about turning AI intelligence into a scalable, profitable commodity—where tokens are the product, GPUs are the means of production, and the tech industry is the market. The Developers power the Flywheel. Makes the mid-90s look like Bush League sports ball.
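
To put “Tokens Per Second will be the new metric” in concrete terms, here is a back-of-the-envelope sketch with completely invented placeholder numbers (the price per million tokens, throughput, and GPU-hour cost are not NVIDIA figures):

def tokens_economics(tokens_per_sec, price_per_million_tokens, gpu_hour_cost):
    """Toy unit economics: revenue vs. cost for one GPU serving tokens for an hour."""
    tokens_per_hour = tokens_per_sec * 3600
    revenue = tokens_per_hour / 1e6 * price_per_million_tokens
    return revenue, revenue - gpu_hour_cost

# Invented inputs: 5,000 tok/s, 2.00 per million tokens, 3.50 per GPU-hour.
rev, margin = tokens_economics(5_000, 2.00, 3.50)
print(f"revenue/hour: {rev:.2f}, margin/hour: {margin:.2f}")
# Doubling tokens/sec (e.g., via a smarter serving stack) doubles revenue on the
# same silicon, which is the whole "token economy" flywheel argument.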

Tori McCaffrey, Technical Product Manager Extraordinaire, and Your Humble Narrator

All that is really missing is a good artificial intelligence to control the whole process. And that is the trick, isn’t it? These types of blue-sky discussions always assume certain advances for a successful implementation. Unfortunately, A.I. is the bottleneck in this case. We’re close with replication and manufacturing processes and we could probably build sufficiently effective ion drives if we had the budget. But we lack a way to provide enough intelligence for the probe to handle all the situations it could face.

~ Eduard Guijpers from the Convention Panel – Designing a Von Neumann Probe

Dally and LeCun – Fireside

LeCun Fireside Chat

Yann LeCun, Turing Award badass and Meta’s AI Chief Scientist brain, sat down for a fireside chat with Bill Dally, Chief Scientist at NVIDIA, that cut through the AI hype. No fluffy TED Talk (or me talking) vibes here, just hot takes from a guy who’s been torching (get it?) neural net limits since the ‘80s. With Jensen Huang’s “agentic AI” bomb still echoing from the keynote, LeCun brought the dev crowd at the McEnery Civic Center a dose of real talk on where deep learning’s headed.

LeCun didn’t mince words: generative AI’s cool, but it’s a stepping stone. The future’s in systems that reason, not just parrot; think less ChatGPT, and more “machines that actually get real work done.” He riffed on NVIDIA’s Blackwell Ultra and GR00T robotics push, nodding to the computing muscle needed for his vision. “You want AI that plans and acts? You’re burning 100x more flops than today,” he said, echoing Jensen’s compute hunger warning. No surprise—he’s been preaching energy-efficient architectures forever.

The discussion further dug into LeCun’s latest obsession: self-supervised learning on steroids. He’s betting it’ll crack real-world perception for robots and autonomous rigs, stuff NVIDIA’s Cosmos and Isaac platforms are already juicing. “Supervised learning’s a dead end for scale,” he jabbed. “Data’s the bottleneck, not flops.” There were several nods from the devs in the Civic Center. He also said we would be managing hundreds of agents in the future, vertically trained – horizontally chained, so to speak.

No slides once again, just LeCun riffing extempore, per NVIDIA’s style. He dodged the Meta AI roadmap but teased “open science” wins—likely a jab at closed-shop rivals. For devs, it was a call to arms: ditch the hype, build smarter, lean on NVIDIA’s stack. With Quantum Day buzzing next door, he left us with a zinger: “Quantum’s cute, but deep nets will out-think it first.”

GTC’s “Super Bowl of AI” rep held. LeCun proved why he’s still the godfather—unfiltered, technical, pragmatic, and ready to break the next ceiling.

Jay Sales, Engineering Executive Rockstar and Your Humble Narrator

Bottom Line

GTC2025 wasn’t just a conference. GTC 2025 was NVIDIA flipping the table: AI’s industrial now, not academic. Jensen’s vision, the sessions’ grit, and the hall’s buzz screamed one thing—build or get buried. For devs, it’s a CUDA goldmine. For suits, it’s strategy. For the industry, it’s NVIDIA steering the ship—full speed into an AI agentic and robotic future. With San Jose’s dust settling, the code’s just starting to run. Big fish and small fry are all feeding on bright green chips. 5 devs can now do the output of 50. Building stuff so others can build is Our developer mantra. Always has been, always will be – Gabba Gabba Hey One Of Us, One of Us!

Huang’s overarching message was clear: AI is evolving beyond generative models into “agentic AI”—systems that can reason, plan, and act autonomously. This shift demands exponentially more compute power (100x more than previously predicted, he noted), cementing NVIDIA’s role as the backbone of this transformation.

Despite challenges (early Blackwell overheating issues, U.S. export controls, and a 13% stock dip in 2025… whatevs), NVIDIA’s record-breaking 39.3 billion dollar revenue quarter in February proves its resilience. GTC 2025 reaffirmed that NVIDIA isn’t just riding the AI wave; it’s creating it.

One last thought: a colleague was walking with me around the conference and asked me how this felt and what i thought. Context: i was in The Valley from 1992-2001 and then had a company headquartered out there from 2011-2018. i thought for a moment, looked around, and said, “This feels like the 90’s on steroids, which was the heyday of embedded programming and what i think was then the height of some of the most performant code in the valley.” i still remember when at Apple the NVIDIA chip was chosen over ATI’s graphics chip. NVIDIA’s stock was something like 2.65 / share. i still remember when at Microsoft the NVIDIA chip was chosen for the XBox. NVIDIA, the 33-year-old start-up whose demise analysts keep predicting. Just like music critics – right? As i drove up and down 101 and 280 i saw all of the new buildings and names – i realized – The Valley Is Back.

until then,

#iwishyouwater <- Mark Healy Solo Outer Reef Memo

@tctjr

Muzak To Blog By: Grotus, stylized as G̈r̈oẗus̈, was an industrial rock band from San Francisco, active from 1989 to 1996. Their unique sound incorporated sampled ethnic instruments, two drummers, and two bassists, and featured angry but humorous lyrics. NIN, Mr Bungle, Faith No More and Jello Biafra championed the band. Not for the faint of heart. Nevertheless great stuff.

Note: Rumor has it the Rivian SUV does, in fact, go 0-60 in 2.6 seconds with really nice seats. Also thanks to Karen and Paul for the tea and sympathy steak supper in Palo Alto. Miss ya’ll!

Only In The Valley

In Memoriam – HAIL David Benson February 10, 1976 – February 11, 2025

Ted Tanner, I’ll wear a tutu if I have to…

~ David Benson

David Benson and Your Narrator at AWS Re-Invent 2015 (photo courtesy of Sim)

First i hope everyone is safe.

Second, this blog took much longer to get out than I expected. Of late, I have had several negative personal situations happen simultaneously. Situations and events that are negative seem to happen in clusters. There must be some psychological word for this perceptual creation. Then again, it could be a resonant issue in the universe that i am creating.

Of recent events, for those who know me, the plane crash into DCA hit one network hop away. The immediate family is okay, and we brought home a national medal in the US Figure Skating Novice Pairs. My sincere condolences to all of the families.

Third and of the utmost importance and much reverence, one of my dear friends has passed which occurred right in the middle of so many other occurrences.

David Benson was a son, a husband, and a father. He was also a dear friend who some of you knew personally and professionally.

He passed away due to brutally overwhelming liver cancer. He died a day after his 49th birthday, which is the most important holiday as far as I am concerned.

I had talked to him about a month and a half prior as he and his amazing superwoman wife, Jen Benson, were driving south to Cancun, where he was extremely stoked to be creating yet another company. He was a true entrepreneur. He was telling me all about it and how his cancer had gone into remission. I used to tell him instead of saying goodbye, I would say, “Don’t let them get you. They can’t get US!” Then, once over the past year, after much prodding and me asking because he didn’t seem “right” to me, he told me he had cancer. I added, “i love you.”

As a man i believe we have three basic “things” we provide if we have a family: Physical Security, Financial Security and Mental Security. i call this the Male Trilogy. In most cases you get two out of someone, not three. He had all three.

For those who have a problem with that and think that it reduces the alpha/sigma machismo, please, I’d like to have a word with you in private—five minutes alone, as the song goes, if you know it. I bet you haven’t lived loud enough to understand what it means between men.

David and I met in 2005, when he and Josh Kline were working on digital rights management associated with a global identifier for media called ISAN. We ended up traveling all over the world to some very nice places and worked together on some very cool projects.

As I wrote this piece, I realized that 20 years had passed in the blink of an eye. The days are long, and the years are short.

As time went on, instead of being in the entrepreneurship orbit, we became very close friends separated by the left and right coasts.

Some time ago, i had both of my hips replaced, and David was calling to check on me before and after the surgeries. Before the surgeries, i was all but crippled.

After the surgeries he said that his super wife could fix me. i told him my days of lifting heavy things, paddling into heaving slabs, freediving, etc., i think might be over, but i am open to anything.

She ended up saving my life, both mentally and physically, and i can never thank them enough.

In addition to saving me and allowing me to do what i love, David did some things over the years that maybe one day i might talk about, but suffice to say he did some things both professionally and personally that i can never repay.

He was the originator of the quote at the top of this blog, which I now constantly utilize.

I’ll never forget the day he said the quote:

We were discussing how far one should go when creating a company to get product market fit and generate cash flow positive revenue.

Me: “i don’t think people really understand the magnitude of making your own company. It will tell you a lot about yourself.”

David: “Well Ted Tanner i know one thing it is part of us and what we do and I’ll wear a f***ing tutu if i have to complete with lipstick. Choose the shade i have a whole box!”

I still laugh. i say it almost once a week to someone. Do what it takes and wear a tutu if you have to get the job done.

David and Your Narrator out-da-back. (photo courtesy of Superwomen)

Fast forward to about a month ago and i got a call from him. He said, “goose” (i never asked him why he called me goose), “the cancer has come back and I’m having problems.” i said whatever you need, man, and know that no matter what happens, I’ll take care of everything as much as I possibly can in the future.

He texted me a week later and said he wanted me to come out to say goodbye. i texted him back but i didn’t hear from him, and he would always text or call me right back. i knew something was wrong. I was managing several concerns, as mentioned at the beginning of this blog, professionally and personally. i knew i had to make the call that was going to rock me and that was to Jen Benson. i called her on a sunday two weeks ago and asked if i needed to come out and she said he might not make it. i hung up, dropped everything, and made reservations to come see him.

i arrived and was taken aback by how far things had gotten in one month, as we always utilized video for phone calls.

i sat down and made a joke about how far he would go to have sex with me. He laughed.

Then we were sitting there by ourselves in his den. He said, “i can go now.”

Remember when i mentioned the male trilogy? While we are all Sisyphus, at some point in time, the rock rolled backward over me when he whispered that to me. Stay solid, i was saying to myself.

He waited to go after his sons made him a birthday cake.

David is survived by his amazing, beautiful superwoman wife Jen, Oliver, a music virtuoso, and Tyler, a cook and clothing designer extraordinaire. To them, I will always support you, listen to you, and protect you.

He is also survived by his amazing parents Gary and Genie as well as his brother Andrew. Other notable folks are Greg, Natalie, Jeremy, Elon, Sean and of course Sim. In the comments feel free to add anyone else.

Note: i wrote a piece a while back entitled It is an honor to say goodbye. <- click here. It describes the feeling of grief as a never-ending fractal. The last thing i said to him as i kissed him on the head was “I’ll be seeing you in your tutu. i love you.”

Until Then,

#iwishyouwater <- salt mines bonaire

Ted ℂ. Tanner Jr. (@tctjr) / X

Note: Both David and Jen love the water; in fact, David was a rescue scuba diver and had some amazing stories. Birds of a feather, they say.

Music To Blog By: David Loved Bob Marley, Allman Brothers etc.

SnakeByte[19] – Software As A Religion

Religion is regarded by the common people as true, by the wise as false, and by the rulers as useful.

Lucius Annaeus Seneca

Dalle’s Idea of Religion

First as always i hope everyone is safe oh Dear Readers. Secondly, i am going to write about something that i have been pondering for quite some time, probably close to two decades.

What i call Religion Of Warez (ROWZ).

This involves someone who i hold in the highest regard YOU the esteemed developer.

Marc Andreessen famously said “Software Is Eating The World”. Here is the original blog:

“Why Software Is Eating The World” by Marc Andreessen

There is a war going on for your attention. There is a war going on for your thumb typing. There is a war going on for your viewership. There is a war going on for your selfies. There is a war going on for your emoticons. There is a war going on for github pull requests.

There is a war going on for the output of your prompting.

We have entered into The Great Cognitive Artificial Intelligence Arms Race (TGCAIAR) via camps of Large Language Model foundational model creators.

The ability to deploy the warez needed to wage war on YOU Oh Dear Reader is much more complex from an ideological perspective. i speculate that Software, if i may use that term as an entity, is a non-theistic religion. Even within the Main Tabernacle of Software (MTOS) there are various fissures of said religions whether it be languages, architectures or processes.


A great blog can be found here -> Software Development: It’s a Religion.

Let us head over to the LazyWebTM and do a cursory search and see what we find[1] concerning some comparison numbers for religions and software languages.

In going to wikipedia we find:

According to some estimates, there are roughly 4,200 religions, churches, denominations, religious bodies, faith groups, tribes, cultures, movements, ultimate concerns, which at some point in the future will be countless.

Wikipedia

Worldwide, more than eight-in-ten people identify with a religious group. i suppose even though we don’t like to be categorized, we like to be categorized as belonging to a particular sect. Here is a telling graphic:

Let us map this to just computer languages. Just how many computer languages are there? i guessed 6000 in aggregate. There are about 700 main programming languages, including esoteric coding languages. From what i can ascertain, some lists that only include notable languages add up to 245. Another list called HOPL, which claims to include every programming language ever to exist, puts the total number of programming languages at 8,945.

So i wasn’t that far off.

Why so much kerfuffle on languages? For those that have ever had a language discussion, did it feel like you were discussing religion? Hmmmm?

Hey, my language does automatic heap management. Why are you messing with memory allocation via this dumb thing called pointers?

The Art of Computer Programming is mapping an idea into a binary computational translation (classical computing rates apply). This process is highly inefficient compared to having binary-to-binary discussions[2]. Note we are not even considering architectures or methods in this mapping. Let us keep it at English to binary representation. What is the dimensionality reduction for that mapping? What is lost in translation?

For reference, i found a very precise and well-written blog here -> How Much Code Has Ever Been Written?

The calculation involves the number of lines of code ever written up to that point sans the exponential rate from the past two years:

2,781,000,000,000

Roughly 2.8 Trillion Lines of Code have been written in the past 20 years.

Sage McEnery 2020

As i always like to do i refer to the Merriam-Webster Dictionary. It holds a soft spot in my heart strings as i used to read it in grade school. (Yes i read the dictionary…)

Religion: Noun

re·​li·​gion (ruh·li·jen)

: a cause, principle, or system of beliefs held to with ardor and faith

Well, now, Dear Reader, the proverbial plot thickens. A System of Beliefs held to faith. Nowadays, religion is utilized as a concept applied to a genus of social formations that includes several members, a type of which there are many tokens or facets.

If this is, in fact, the case, I will venture to say that Software could be considered a Religion.

One must then ask: Is there “a model” to the madness? Do we go the route of the core religions? Would we dare say the Belief System Of The Warez[3] should be included as a prominent religion?

Symbols Of The World Religions

I have said several times and will continue to say that Software is one of the greatest human endeavors of all time. It is at the essence of ideas incarnate.

It has been said that if you adopt the science, you adopt the ideology. Such hatred or fear of science has always been justified in the name of some ideology or other.

If we take this as the undertone for many new aspects of software, we see that the continuum of mind varies within the perception of the universe by which we are affected by said software. It is extremely canonical and first order.

Most often, we anthropomorphize most things, and our software is no exception. It is as though it were an entity or even a thing in the most straightforward cases. It is, in fact, neither. It is just information imputed upon our minds through probabilistic models trained with non-convex optimization methods. It is as if it were a Rorschach test that allowed many people to project their own meaning onto it (sound familiar?).

Let me say this a different way. With the advent of ChatGPT, we seem to desire IT to be alive or to reason somehow, someway, yet we don’t want it to turn into the Terminator.

Stock market predictions – YES

Terminator – NO.

The Thou Shalts Will Kill You

~ Joseph Campbell

We are now very quickly entering a time where we have “agentic” large language models that can be scripted for specific tasks and then chained together to perform multiple tasks.
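To make that concrete, here is a minimal, purely illustrative sketch of such chaining in Python. The call_llm function is a hypothetical stand-in for whatever model API you happen to use, and the agent names are mine, not from any particular framework.

def call_llm(prompt: str) -> str:
    # hypothetical stand-in: wire this up to your model API of choice
    raise NotImplementedError("plug in your model call here")

def summarize_agent(text: str) -> str:
    # one "agent" scripted for a single task
    return call_llm(f"Summarize the following:\n{text}")

def critique_agent(summary: str) -> str:
    # a second "agent" scripted for a different task
    return call_llm(f"List the weaknesses in this summary:\n{summary}")

def pipeline(text: str) -> str:
    # chaining: each agent's output becomes the next agent's input
    return critique_agent(summarize_agent(text))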

Now we have large language models distilling information gleaned from other LLMs. Whose peanut butter is in the chocolate? Is there a limit to growth here for information? Asymptotic token computation, if you will?

We are nowhere near the end of writing the Religion Of Warez (ROWZ) sacred texts compared to the Bible, Sutras, Vedas, the Upanishads, the Bhagavad Gita, Quran, Agamas, Torah, Tao Te Ching or Avesta, even the Satanic Bible. My apologies if i left your special tome out; it wasn’t on purpose. i could have listed thousands. BTW, for reference, there is even a religion called the Partridge Family Temple. The cult’s members believe the characters are archetypal gods and goddesses.

In fact, we have just begun to author the Religion Of Warez (ROWZ) sacred text. The next chapters are going to be accelerated and written via generative adversarial networks, stable diffusion, and reinforcement learning transformer technologies.

Which, then, one must ask: which Deity are YOU going to choose?

i wrote a stupid little Python script to show the relationships between the main coding languages based on their release dates. Simple key-value stuff. All hail the gods K&R for creating C.

import networkx as nx
import matplotlib.pyplot as plt

def create_language_graph():
    G = nx.DiGraph()
    
    # Nodes (Programming languages with their release years)
    languages = {
        "Fortran": 1957, "Lisp": 1958, "COBOL": 1959, "ALGOL": 1960,
        "C": 1972, "Smalltalk": 1972, "Prolog": 1972, "ML": 1973,
        "Pascal": 1970, "Scheme": 1975, "Ada": 1980, "C++": 1983,
        "Objective-C": 1984, "Perl": 1987, "Haskell": 1990, "Python": 1991,
        "Ruby": 1995, "Java": 1995, "JavaScript": 1995, "PHP": 1995,
        "C#": 2000, "Scala": 2003, "Go": 2009, "Rust": 2010,
        "Common Lisp": 1984
    }
    
    # Adding nodes
    for lang, year in languages.items():
        G.add_node(lang, year=year)
    
    # Directed edges (influences between languages)
    edges = [
        ("Fortran", "C"), ("Lisp", "Scheme"), ("Lisp", "Common Lisp"),
        ("ALGOL", "Pascal"), ("ALGOL", "C"), ("C", "C++"), ("C", "Objective-C"),
        ("C", "Go"), ("C", "Rust"), ("Smalltalk", "Objective-C"),
        ("C++", "Java"), ("C++", "C#"), ("ML", "Haskell"), ("ML", "Scala"),
        ("Scheme", "JavaScript"), ("Perl", "PHP"), ("Python", "Ruby"),
        ("Python", "Go"), ("Java", "Scala"), ("Java", "C#"), ("JavaScript", "Rust")
    ]
    
    # Adding edges
    G.add_edges_from(edges)
    
    return G

def visualize_graph(G):
    plt.figure(figsize=(12, 8))
    pos = nx.spring_layout(G, seed=42)
    years = nx.get_node_attributes(G, 'year')
    
    # Color nodes based on their release year
    node_colors = [plt.cm.viridis((years[node] - 1950) / 70) for node in G.nodes]
    
    nx.draw(G, pos, with_labels=True, node_color=node_colors, edge_color='gray', 
            node_size=3000, font_size=10, font_weight='bold', arrows=True)
    
    plt.title("Programming Language Influence Graph")
    plt.show()

if __name__ == "__main__":
    G = create_language_graph()
    visualize_graph(G)

Programming Relationship Diagram

So, folks, let me know what you think. I am considering authoring a much longer paper comparing behaviors, taxonomies and the relationship between religions and software.

i would like to know if you think this would be a worthwhile piece?

Until Then,

#iwishyouwater <- Banzai Pipeline January 2023. Amazing.

@tctjr

MUZAK TO BLOG BY: Baroque Ensemble Of Vienna – “Classical Legends of Baroque”. i truly believe i was born in the wrong century when i listen to this level of music. Candidly, J.S. Bach is by far my favorite composer, going back to when i was in 3rd grade. BRAVO! Stupendum Perficientur!

[1] Ever notice that searching is not finding? i prefer finding. Someone needs to trademark “Finding Not Searching.” In the same vein as catching ain’t fishing.

[2] Great paper from OpenAI on just this subject: two agents having a discussion (via reinforcement learning) : https://openai.com/blog/learning-to-communicate/ (more technical paper click HERE)

[3] For a great read i refer you to The Ware Tetralogy by Rudy Rucker: Software (1982), Wetware (1988), Freeware (1997), Realware (2000)

[4] When the words “software” and “engineering” were first put together [Naur and Randell 1968] it was not clear exactly what the marriage of the two into the newly minted term really meant. Some people understood that the term would probably come to be defined by what our community did and what the world made of it. Since those days in the late 1960’s a spectrum of research and practice has been collected under the term.

What Is Your Eulogy? (Memento Mori – Memento Vivere)

Dalle’s Idea of a Crypt Monument

One life on this earth is all that we get, whether it is enough or not enough, and the obvious conclusion would seem to be that at the very least we are fools if we do not live it as fully and bravely and beautifully as we can.

Frederick Buechner

First, as always, i trust everyone is safe. Second, i trust everyone had an amazing holiday with family and friends and hopefully did something “screen-free”. It is the start of a new year.

i am changing gears just a little and writing on a subject that, at first blush, might appear morose, yet it is not. In fact, quite the opposite.

What Is Your Eulogy?

Yep i went THERE. (Ever notice that once you arrive, you are there and think about somewhere else?)

If you go to my About page, you will see that I set this site up mainly to be a memory machine for me and a digital reference for My Family and Friends. In addition, if along the way i entertain someone on the WorldWideWait(tm), all the better. A reference for a future memory, if you will.

I am taking complete editorial advantage of paying the AWS bill every month, and there is a “.org” at the end of the site name denoting a not-for-profit site supposedly like a religion. i can say what i want, i suppose—well, still within reason nowadays. Free Speech, They Said… Yet, I digress.

I will persist until I succeed.

I was not delivered unto this world in defeat, nor does failure course in my veins. I am not a sheep waiting to be prodded by my shepherd. I am a lion and I refuse to talk, to walk, to sleep with the sheep. I will hear not those who weep and complain, for their disease is contagious. Let them join the sheep. The slaughterhouse of failure is not my destiny.

I will persist until I succeed.

~ OG Mandino

For context, this subject matter was initiated by the confluence of several disparate events:

  1. i introduced one of my progeny to Mozart’s Requiem in D minor, K. 626, specifically the Lacrimosa. We discussed the word Requiem, and then she immediately informed me that Lacrimosa means sorrowful in Latin and that it is in the key of D minor. Wow, thank you, i said. (maybe something is sticking…)
  2. An old friend whom I hadn’t seen in years passed away the day after I emailed him. I had contacted him to discuss some audio subject material that I enjoyed speaking with him about in detail. Alas, another cancer victim.
  3. i took a class put on by Matthew McConaughey and Tony Robbins called “The Art of Living”, and the book The Greatest Salesman in the World by OG Mandino was featured in class.
  4. i took yet another class from the amazing Flow Research Collective Group. You can read a review here.
  5. Since I started this piece, even more humans who are dear to me have passed or received extremely dire news.
  6. i just wanted to scribe these thoughts in order to “remind me to remember”.

Life Should be One Great Adventure or Nothing.

Helen Keller

So here we go… it is tl;dr fo’ sho’.

In one of the aforementioned classes, the subject matter was the title of this blog. I originally had planned to call this blog “Do Not Be Awed Into Submission,” because most people nowadays are “awed” by TikTok, Instagram, or YouTube videos of people doing stuff and keep themselves from truly creating and DOING stuff in their own lives. They just sit and watch sports, listen to podcasts, and “consume” without using that information to create. It seems to me, at least, that most people nowadays spectate instead of create or participate.

Yet i started reflecting on the subject matter as this blog has been in draft form for over a year. Another year passed, another amazing birthday (afaic the most important holiday), and here we are, a New Year into 2025.

So given all that context and background:

What do i want to be known for when Ye Ole #EndOTimes is forthcoming? (Note: for those word freaks out there, it is called Eschatology, from the Greek (who else?) ἔσχατος (éskhatos).)

This is the CENTRAL SCRUTINIZER
Joe has just worked himself into an imaginary frenzy during the fade-out of his imaginary song,
He begins to feel depressed now. He knows the end is near. He has realized
at last that imaginary guitar notes and imaginary vocals exist only in the mind
of the imaginer.
And ultimately, who gives a f**k anyway? HAHAHAHA!…Excuse me…so who gives a f**k anyway? So he goes back to his ugly little room and quietly dreams his last imaginary guitar solo…

~ Frank Zappa, from Watermelon In Easter Hay

i believe, at this point, these are the attributes that, at the end of this thing called life, i want to be known for, as best as i possibly can be:

  • Honor and Integrity
  • Brutal Honesty
  • Living Life Loud
  • Improving Oneself Daily (mentally, physically, emotionally)
  • Loving (and Hating)
  • Quality Over Quantity
  • Maintaining a sheer sense of wonder and awe for Life

If you note, most of these items are items i can control or affect. You say well, what about being a good friend, spouse, parent? Well, to the best of your ability, you can try to be the best at those, but ultimately, someone else is judging YOU. In fact, we are always judged, and in fact, I will say that most people judge – consciously or subconsciously, ergo, Judge as Ye Be Judged.

As well, and i hope duly noted, some of those items are controversial. Oh Dear Reader, this won’t be the first time i have been associated with the controversial.

You have enemies? Why, it is the story of every man who has done a great deed or created a new idea. It is the cloud which thunders around everything that shines. Fame must have enemies, as light must have gnats. Do not bother yourself about it; disdain. Keep your mind serene as you keep your life clear.

~ Victor Hugo

To the best of my ability, I will attempt to provide definitions and context for the above attributes. One additional context is that these are couched in “individualistic” references, not societal norms, overlays or programming.

  1. Honor and Integrity

Honor and integrity are ethical concepts that are often intertwined but have distinct meanings:

Honor

Honor refers to high respect and esteem, often tied to one’s actions, character, and adherence to a code of conduct. It is about upholding a personal set of values considered virtuous and deserving of respect and maintaining one’s reputation and dignity through ethical behavior and moral decision-making.

Integrity

Integrity is the quality of being honest and having strong moral principles. It involves consistently adhering to ethical standards and being truthful, fair, and just in all situations. Key aspects of integrity include being truthful and transparent in one’s actions and communications and acting according to one’s values and principles even when it is challenging, inconvenient, or, in many cases, seemingly impossible.

Essentially, it is standing up for “what is right” (as one views it in and unto oneself), even in the face of, and to the point of, adversity or personal loss.

What is good? – All that heightens the feelings of power, the will to power, power itself in man. What is bad? – All that proceeds from weakness. What is happiness? – The feeling that power increases – that a resistance is overcome.

~ Friedrich Nietzsche

Honor and integrity form the foundation of a trustworthy and respected character. Honor emphasizes the external recognition of one’s ethical behavior, while integrity focuses on the internal adherence to moral principles. Your moral compass is extremely individualistic. In full transparency, given that i believe there is no original sin, some have questioned how in the world i can have such moral character. Literally, someone said to me: “Given how you view things, how do you have such high morals compared to everyone else.” (NOTE: This question came from a very religious, devout, wonderful person i love.)

It is better to be hated for what you are than to be loved for what you are not.

~ Andre Gide

Brutal Honesty

Brutal honesty refers to being extremely direct and unfiltered in communication, often to the point of being blunt or harsh. This form of honesty prioritizes telling the truth without considering the potential impact on the feelings or reactions of others. It sorta kinda exactly goes hand in hand with Integrity which in turn connects to Honor.

Key aspects of brutal honesty include:

Directness: Providing straightforward and unvarnished truth without sugarcoating or softening the message.

Bluntness: Being frank (or Ted) and candid, even if the truth may be uncomfortable or hurtful.

There isn’t a coffee table book entitled “Mediocre Humans In History”

~ C.T.T.

So why try to toe the Brutal Honesty Line?

Clarity: It can eliminate misunderstandings and provide a clear and unambiguous message. Also, it lets people know where you stand.

Trust: Some people appreciate brutal honesty because it demonstrates a commitment to truthfulness and transparency. I’ve had folks come back to me later and thanked me. Which is really rad of them.

Efficiency: It can get to the heart of an issue without dancing around the subject. Once again, note the time savings component. It saves a ton of time. HUUUUUUOOOOOGGGEEE time saver.

Potential Drawbacks

If you are delivering negative information to someone, this can have drawbacks. If you are delivering positive news, do it with gusto! However, when delivering negative news, the following can occur.

Hurt Feelings: It can cause emotional harm or strain relationships due to the harsh delivery. Deliver honest negative information with proper propriety and courtesy. They will hopefully get over it if they have any self-reflection.

Perception of Rudeness: It may be perceived as insensitive, disrespectful, lack of empathy, or unnecessarily harsh. However, if you are running a company or in a particularly toxic relationship, great results take drastic measures.

Conflict: It can lead to conflicts or defensive reactions from those who receive the message. Some say life is all conflict. Once again don’t go looking for trouble but you cannot shy away from interactions.

The harder the conflict, the more glorious the triumph.

 ~ Thomas Paine 

Caveat Emptor: As implicit in the above commentary, Brutal Honesty should be balanced with surgical and thoughtful empathy and, shall we say, nuance to ensure that the truth is communicated effectively and respectfully. For instance, it is okay to lie and say someone’s baby is cute. In the same fashion, eating everything on your plate when they have asked you over for supper at a neighbor’s house is also good manners, even though you probably do not like well-done pot roast and peas. Say thank you, and it was delicious. In Everything, practice propriety and courtesy.

When you have lived your individual life in YOUR OWN adventurous way and then look back upon its course, you will find that you have lived a model human life, after all.

Professor Joseph Campbell

2. “Living Life Loud” is a phrase that conveys embracing life with enthusiasm, boldness, and authenticity. It suggests living in a way that is vibrant, expressive, and true to oneself. To be authentic and true to yourself, and to embrace your passions and unique perspectives. It can also mean living intentionally and unapologetically, pursuing your dreams with enthusiasm, and stepping outside of your comfort zone.

Here are some aspects of what it means to Live Life Loud:

Authenticity: Being true to yourself and not being afraid to show your true colors, even if they differ from societal norms or expectations.

Boldness: Taking risks, stepping out of your comfort zone, and confidently pursuing your passions and dreams.

Enthusiasm: Approaching life with energy and excitement, making the most out of every moment.

Courage: Facing challenges head-on and standing up for what you believe in, even when it’s difficult.

I wonder, I wonder what you would do if you had the power to dream any dream you wanted to dream?

~ Alan Watts

This seems rather nebulous in some cases, so let us get a little more specific with some examples.

Pursuing Dreams: Actively chasing your goals and aspirations, regardless of how daunting they may seem. Most dreams are impossible; otherwise, they wouldn’t be dreams.

Taking Risks: Being willing to try new things, even if there’s a chance of failure. It goes hand in hand with Pursuing Your Dreams. Someone once said, “I need to surf big waves with two oxygen tanks.” i said, well, you can’t surf them then. In the same vein, someone told me when discussing my view on creating companies: “I can’t take that risk.” i asked, well, you drive a car? Trust me, that is a much larger risk every day.

In the next five seconds what are you going to do to make your life spectacular?

~ Tim O’Reilly

Being Outspoken: Sharing your opinions and ideas confidently, without fear of judgment. Not bragging. Being forthright in your views and taking responsibility for those views. Owning them and being prepared to defend them.

Celebrating Uniqueness: Embracing what makes you different and showcasing it proudly (not loudly). However, not to the point of narcissism. Of course, I hear Tyler Durden saying, “You are not a unique snowflake,” whilst also saying, “You are not your f-ing khakis!”

So why live life loud? Well, I’m glad you asked. Here are just some of the reasons I wrote down: Being open and expressive can help build deeper, more meaningful relationships. Brutal Honesty with Oneself and the Universe.

This chooses by definition a life of surprise. Living outside the realm of societal norms in most cases.

Potential Challenges

Judgment: Judge So Ye Be Judged! Others may not always understand or accept your loud approach to life, which can lead to criticism or judgment. THEY are going to judge anyway. In fact THEY have judged even before you started living life loud. Why? Because most who judge follow The Herd mentality of Social Norms.

Risk of Failure: Taking bold steps can sometimes lead to setbacks or failures, which require resilience to overcome. However, my “hot-take” (isn’t that the lingo?) is that once you have stepped out on the edge and attempted to create, or do, or launch yourself into the air over ice or over the ledge of a heaving wave – YOU WON! Analysis paralysis is death. Hesitation kills, folks. Remember, if you fail you have nowhere to go but up, and if it is a big enough failure you have a great story!

Vulnerability: Being authentic and expressive means being vulnerable, which will in most cases be uncomfortable. I’d rather crawl through glass attempting to obtain My Personal Legend than sit back and think about what i could have done or what might have been. In fact, most people are frightened more of living the extreme dream than of failing. They would rather fail, or even say they failed, and quit.

All we hear is radio ga ga

Radio goo goo

Radio ga ga

All we hear is radio ga ga

Radio blah, blah

~ Radio GA GA, Queen

Living Life Loud is about making the most of your existence, embracing who you are, and not being afraid to live boldly and authentically. Go to the extreme of that dream, as extreme as you can attain, because, Dear Reader, there are no circumstances, and once you move toward Living Life Loud there are, as i once believed, not even any Consequences.

Caveat Emptor: There is no free lunch here at all. The path you choose for your bliss is expensive. The collateral damage is multi-modal. It has been said Humans love a winner, but they love a loser more because it makes them feel better about themselves. This also gets into our subconscious programming from society and our families. Not too long ago, when i was discussing with My Mother a subject concerning “taking care of them,” she responded: You go live your life and make no decisions based on others. Others should be so lucky, but they aren’t. The hardest path is YOUR true path. Choose it. Hold It. Protect IT.

Respice post te. Hominem te esse memento. Memento mori.” (“Look after yourself. Remember you’re a man. Remember you will die.”). 

~ The 2nd-century Christian writer Tertullian reports that this was said to a general during a triumphal procession

3. Improving Oneself Daily

Improving oneself mentally, physically, and “spiritually” daily involves a commitment to continuous personal development in both the mind and body. This holistic approach to self-improvement includes activities and habits that promote mental clarity, emotional well-being, and physical health. Here’s a breakdown of what it means:

Mentally

Learning: Engaging in activities that stimulate your mind, such as reading, studying, or learning new skills.

Mindfulness: Practicing mindfulness or meditation to enhance self-awareness, reduce stress, and improve mental clarity.

Positive Thinking: Cultivating a positive mindset by focusing on gratitude, affirmations, and reframing negative thoughts. Stay away from pessimistic people and naysayers.

Problem-Solving: Challenging yourself with puzzles, games, or new experiences that require critical thinking and creativity. Study the subject of neuroplasticity. Brush your teeth with the opposite hand for a week. Drive a new path without Apple/Google/Waze Maps. Or do what i like to do: Freedive. Click and read.

Emotional Health: Managing emotions effectively through journaling, therapy, or talking to trusted friends or family members. Take martial arts for defense and emotional health. Punch a bag. Lift heavy weights. Love animals.

Reading: Read, Read and Read More. Not trash novels but deep nonfiction and fiction. Write, take notes when you read.

Physically

Exercise: Engaging in regular physical activity, whether it’s strength training, cardio, yoga, or any other form of exercise that keeps your body active and strong. Get up and MOVE!

Nutrition: Eating a balanced and nutritious diet that fuels your body and supports overall health. i happen to trend towards carnivore. It’s difficult, but it changed my life. Again, eat meat and lift heavy things.

Sleep: Ensuring you get adequate and quality sleep to allow your body and mind to recover and function optimally. i can sleep standing up in an airport. Learn how to take power naps.

Daily Habits

Consistency: Make these activities a part of your daily routine to ensure continuous improvement. Discipline above all. Not grit or determination, but Discipline. Have a morning routine, or any routine that allows you the mental freedom to go to other places mentally and physically. It takes cognitive load off you and reduces friction. Eat the same things, dress the same way.

Goal Setting: Setting small, achievable goals that contribute to your long-term personal development. Make your bed everyday. Set goals in the am then reflect in pm. How could you do better tomorrow? Take time each day to reflect on your progress, identify areas for improvement, and celebrate your achievements.

Adaptability: Being open to change and willing to adjust your habits and routines as you learn what works best for you. Try things you wouldn’t normally do – listen to smooth jazz. Try Hot Yoga. Do stuff then you can optimize to your liking. You might try it and like it.

Improving oneself mentally and physically daily is a lifelong commitment to becoming the best version of yourself. It involves dedication, consistency, and a willingness to learn and adapt continually. It is all based on discipline. Full stop. Not motivation, not grit, not anything but getting up and MOVING. Go do the thing that scares you the most or the thing that you deplore the most – D I S C I P L I N E. i lift every day and read something every day.

Without contraries is no progression. Attraction and repulsion, reason and energy, love and hate are necessary for human existence.

~ William Blake

4. Loving (and Hating)

The idea of experiencing both love and hate at their fullest potential emphasizes the importance of embracing the full spectrum of human emotions to lead a richer, more authentic life.

Emotional Authenticity

Full Range of Experience: Experiencing the full range of emotions allows for a deeper understanding of oneself and others. It means accepting and acknowledging all feelings rather than suppressing them. i call this the dynamic range of life. Western society suppresses everything except sadness. It is ok to be sad. Be enraged. Be Full Of Lust and Desire. Know where your limits are, if there are any, and learn to regulate them as needed.

Self-Awareness: Fully engaging with both love and hate can lead to greater self-awareness and insight into what matters to you and why. If i have been guilty of something, it is not being aware enough. If there is original sin, afaic it is stupidity and non-awareness. Funny how they go hand in hand and relate to loving and hating.

Learning Opportunities: Intense emotions, whether positive or negative, can be powerful teachers. They provide opportunities to learn about your triggers, strengths, weaknesses, and values. Putting yourself out there past the pale teaches you quickly and well. Strong emotions can inspire creativity, leading to profound art, writing, music, and other forms of expression.

Resilience: Navigating through both love and hate can build emotional resilience, helping you manage future challenges more effectively. Experiencing hate or intense dislike can make you appreciate love and positive emotions more deeply, providing a balanced perspective on life. Salt and Pepper anyone?

Remember when you were young, you shone like the Sun. Shine On You Crazy Diamond!

~ Pink Floyd “Shine On You Crazy Diamond”

Loving and Hating will lead to Authentic Relationships.

Deeper Connections: Loving deeply fosters strong, meaningful relationships. Being open about negative emotions can also lead to more honest and authentic interactions. Confronting and understanding negative emotions can lead to healthier conflict resolution and stronger relationships in the long term.

Caveats and Considerations when Loving and Hating

Caveat Emptor: It’s important to express both love and hate in healthy, constructive ways. While deep emotions are natural, how you act on them matters significantly. Ensure that the expression of intense emotions does not harm yourself or others. Finding healthy outlets for negative emotions is crucial. While experiencing emotions entirely is valuable, maintaining a balance is important. Overwhelming negativity or unchecked hatred can be destructive, so it’s essential to seek ways to manage and balance these emotions. Also sometimes we must practice complete indifference. Embracing both love and hate fully can lead to a richer, more nuanced understanding of life, fostering personal growth, deeper relationships, and a more authentic existence.

And the Germans killed the Jews
And the Jews killed the Arabs
And Arabs killed the hostages
And that is the news
And is it any wonder
That the monkey’s confused

~ Perfect Sense Part 1, Roger Waters

5. Quality Over Quantity

The phrase “quality over quantity” as a human value emphasizes prioritizing the excellence, depth, or meaningfulness of something over merely having more of it. It’s a mindset that values richness, purpose, and intentionality over excess or superficial accumulation. i have a saying: “Best Fewest.” You get the best, fewest humans who know how to do something, put them together, and they can create anything.

Relationships: Valuing meaningful, deep connections with a few people rather than having a large network of acquaintances. i have a very small network, which i can count on one hand, that i completely trust. Once you get over 30, you find out who really cares about you. See the quote at the end of the blog. Really, those who matter just want you truly happy.

Work: Focusing on producing exceptional work or projects instead of completing many tasks without significant impact or value. That 9 am standup: is it really needed? Can’t we automate this Excel spreadsheet? Think much? Work yourself out of a job and into your passion.

Material Possessions: Preferring fewer high-quality, durable items rather than many cheap, disposable ones. Buy a high-quality custom suit or dress – three of them. Prada, Sene, etc. Black, navy, or dark blue, with custom shirts. i happen to prefer french cuffs with cuff links. They never go out of style and will last forever.

There are many who would take my time, I shun them. There are some who share my time, I am entertained by them. There are precious few who contribute to my time, I cherish them.

~ A.S.L.

Time Management: Spending your time on activities that matter and bring fulfillment rather than filling your schedule with things that feel busy but are unimportant or things that people put on you. The above quote is my favorite quote in my life, and if i do have a tombstone, i want it on it. EMBLAZONED!

Essentially, it’s a principle that asks, “What truly matters?” and reminds us to focus on what brings genuine value and satisfaction rather than chasing quantity for the sake of just having more of something.

6. Maintaining a sheer sense of wonder and awe for life

Maintaining a sheer sense of wonder and awe for life means approaching the world with curiosity, gratitude, and an openness to its beauty and mysteries. BE AMAZED AT THE THRALL OF IT ALL! It’s about deeply appreciating the small and large marvels around you—whether it’s the intricacies of nature, the complexities of human connections, or the endless potential for discovery and growth. YOU ARE READING <THIS>. Check out my blog Look Up and Down and All Around – has some cool pictures as well.

It involves letting go of jadedness or routine and instead choosing to see the extraordinary in the ordinary. This mindset keeps you engaged, inspired, and connected to the richness of life, no matter the circumstances. It’s like seeing the world through the eyes of a child, where everything holds the potential for fascination and joy. Turn up the back channel like when you were a child. Be Aware! Be Amazed! Wonder what it is like to be a tree or a rock!

i can say unequivocally that, while i have made many more mistakes than i have “performed tasks in a correct fashion,” i have lived a loud and truly individuated life. Would i do things differently? Sure, some. I probably would have “sent” it even harder, past eleven, pretty much on everything. i can truly say that i left everything out in the ocean, nothing in the bag, and gave it my all. Remember: take care of those you call your own and keep good company; storms never last, and the forecast calls for Blue Skies!

Enough for now.

For those that truly know me, you know, and I cherish you. 🤘🏻💜.

Until Then,

@tctjr

#iwishyouwater <- if i could do it again, i would live this life. He got the memo.

Music To Blog By: All of the versions of “Watermelon in Easter Hay” (full name “Playing a Guitar Solo With This Band is Like Trying To Grow a Watermelon in Easter Hay”) by Frank Zappa, covers and all, that i could find, just looped. There is even a bluegrass version. In their review of the album, Down Beat magazine criticized the song (i despise critics), but subsequent reviewers championed it as Zappa’s masterpiece. Kelly Fisher Lowe called it the “crowning achievement of the album” and “one of the most gorgeous pieces of music ever produced.” I must agree. Supposedly, Zappa told Neil Slaven that he thought it was “the best song on the album.” “Watermelon in Easter Hay” is in 9/4 time. The song’s hypnotic arpeggiated pattern is played throughout its nine minutes. The 9/4 time signature carries the song’s two-chord harmonic structure, which, until you really listen, you don’t realize is a two-chord structure. For me, i think it is one of the most sonically amazing pieces of music ever written and produced. Sonically, the reverb is amazing. Sonically, the marimbas are astounding. Sonically, the orchestral percussion is mesmerizing. The song after Watermelon on Joe’s Garage, “Little Green Rosetta,” is completely hilarious, and I am putting that on the going-away party playlist, and I hope people dance in a conga or kick line and sing it. The grass bone to the ankle bone (listen to the song…).

Think about it: a very mediocre guy imagining how he could play, if he could play anything that he wanted to play. Get the reference to the entire blog? À la Alan Watts, if you could dream any dream you wanted to dream? Then what?

The song is, in effect, a dream of freedom.

Here are some other details about “Watermelon in Easter Hay”:

  • The song’s two alternating harmonies are A and B / E, linked by a G#. 
  • The song is introduced by Zappa as the Central Scrutinizer, which then gives way to a guitar solo. 
  • The song’s snare accents have a lot of reverb and delay, creating a swooosh sound that sometimes sounds like wind. 
  • The song’s guitar solo is the only guitar solo specifically recorded for the album. All the others come from a technique known as xenochrony.
  • Rumor has it Dweezil Zappa is the only person allowed to play it.
  • Someone called the song intoxicating in one of my other blogs on the Zappa Documentary. Kind of like a really good baklava.

And a couple more items for your thoughts:

It’s so hard to forget pain, but it’s even harder to remember happiness. We have no scar to show for happiness. We learn so little from peace.

~ Chuck Palahniuk (author of Fight Club, Choke, etc.)

Those who mind don’t matter and those who matter don’t mind.

~ Dr. Seuss

i listen to this every morning. Rest In Power Maestro with the amazing Susanna Rigacci:

SnakeByte[18] Function Optimization with OpenMDAO

DALLE’s Rendering of Non-Convex Optimization

In Life We Are Always Optimizing.

~ Professor Bernard Widrow (inventor of the LMS algorithm)

Hello Folks! As always, i hope everyone is safe. i also hope everyone had a wonderful holiday break with food, family, and friends.

The first SnakeByte of the new year involves a subject near and dear to my heart: Optimization.

The quote above was from a class in adaptive signal processing that i took at Stanford from Professor Bernard Widrow, where he talked about how almost everything is a gradient type of optimization and how “In Life We Are Always Optimizing.” Incredibly profound if One ponders the underlying meaning thereof.

So why optimization?

Well, glad you asked, Dear Reader. There are essentially two large buckets of optimization: convex and non-convex optimization.

A convex optimization problem has a single optimal solution that is also the global optimal solution. Convex optimization problems are efficient and can be solved at a very large scale. Examples of convex optimization include maximizing stock market portfolio returns, estimating machine learning model parameters, and minimizing power consumption in electronic circuits.

A non-convex optimization problem can have multiple locally optimal points, and it can be challenging to determine whether the problem has no solution or whether a given solution is the global one. Non-convex optimization problems can be more difficult to deal with than convex problems and can take a long time to solve. Optimization algorithms like gradient descent with random initialization and annealing can help find reasonable solutions for non-convex optimization problems.

You can determine if a function is convex by taking its second derivative. If the second derivative is greater than or equal to zero for all values of x in an interval, then the function is convex on that interval. Ah, calculus 101 to the rescue.
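If you would like the machine to do that calculus 101 for you, here is a minimal sketch using SymPy (assuming you have it installed; it is not required anywhere else in this post). It computes the second derivative symbolically and asks where, if anywhere, it goes negative:

import sympy as sp

x = sp.symbols('x', real=True)

candidates = {
    "x**4": x**4,                  # convex: second derivative 12*x**2 is never negative
    "x**4 - x**2": x**4 - x**2,    # non-convex: second derivative 12*x**2 - 2 dips below zero
}

for name, f in candidates.items():
    d2 = sp.diff(f, x, 2)
    negative_region = sp.solveset(d2 < 0, x, domain=sp.S.Reals)
    print(f"{name}: f'' = {d2}, f'' < 0 on {negative_region}")

An empty region means the second derivative never goes negative, i.e., the function is convex over the reals.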

Caveat Emptor: these are very broad, mathematically defined brush strokes.

So why do you care?

Once again, Oh Dear Reader, glad you asked.

Non-convex optimization is fundamentally linked to how neural networks work, particularly in the training process, where the network learns from data by minimizing a loss function. Here’s how non-convex optimization connects to neural networks:

In convex optimization, the loss function has a single global optimum. A “loss landscape” in a neural network refers to a representation of the loss across the entire parameter space, essentially depicting how the loss value changes as the network’s weights are adjusted. This creates a multidimensional surface where low points represent areas of minimal loss and high points represent areas of high loss; it allows researchers to analyze the geometry of the loss function to understand the training process and potential challenges like local minima. Note that the number of weights can be in the millions, billions, or trillions. It’s the basis for the cognitive AI arms race, if you will.

The loss function in neural networks, which measures the difference between predicted and true outputs, is often a highly complex, non-convex function. This is due to:

The multi-layered structure of neural networks, where each layer introduces non-linear transformations, and the high dimensionality of the parameter space, as networks can have millions, billions, or trillions of parameters (weight and bias vectors).

As a result, the optimization process involves navigating a rugged loss landscape with multiple local minima, saddle points, and plateaus.
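As a toy illustration of that non-convexity (my own example, not from any particular library), even a one-parameter “network” y_hat = tanh(w * x) fit to a single data point has a loss whose curvature changes sign, which is all it takes to break convexity:

import numpy as np

x_data, y_data = 2.0, 0.5            # one training example
w = np.linspace(-4, 4, 801)          # sweep the single weight
loss = (np.tanh(w * x_data) - y_data) ** 2

# a convex curve has non-negative second differences everywhere;
# here the discrete second difference dips negative, so the loss is non-convex
second_diff = np.diff(loss, 2)
print("most negative second difference:", second_diff.min())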

Optimization Algorithms in Non-Convex Settings

Training a neural network involves finding a set of parameters that minimize the loss function. This is typically done using optimization algorithms like gradient descent and its variants. While these algorithms are not guaranteed to find the global minimum in a non-convex landscape, they aim to reach a point where the loss is sufficiently low for practical purposes.

This leads to the latest SnakeByte[18]. The process of optimizing these parameters (and the knobs of the training process itself) is often called hyperparameter optimization. Also, relative to this process, designing things like aircraft wings, warehouses, and the like is called Multi-Objective Optimization, where you have multiple objectives to optimize simultaneously.

As always, there are test cases. In this case, you can test your optimization algorithm on a function called Himmelblau’s function. The Himmelblau function was introduced by David Himmelblau in 1972 and is a mathematical benchmark function used to test the performance and robustness of optimization algorithms. It is defined as:

    \[f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2\]

Using Wolfram Mathematica to visualize this function (as i didn’t know what it looked like…) relative to solving for f(x,y):

Wolfram Plot Of The Himmelblau Function

This function is particularly significant in optimization and machine learning due to its unique landscape, which includes four global minima located at distinct points. These minima create a challenging environment for optimization algorithms, especially when dealing with non-linear, non-convex search spaces. Get the connection to large-scale neural networks? (aka Deep Learnin…)

Himmelblau’s function is continuous and differentiable, making it suitable for gradient-based methods while still being complex enough to test heuristic approaches like genetic algorithms, particle swarm optimization, and simulated annealing. The function’s four minima require algorithms to effectively explore and exploit the search space, ensuring that solutions are not prematurely trapped in local optima.

Researchers use it to evaluate how well an algorithm navigates a multi-modal surface, balancing exploration (global search) with exploitation (local refinement). Its widespread adoption has made it a standard in algorithm development and performance assessment.
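Before we reach for a full framework, here is a minimal, plain-NumPy sketch (my own toy code, not OpenMDAO) of gradient descent with random restarts on Himmelblau’s function. Different random starts should settle into different ones of the four minima, which sit near (3, 2), (-2.81, 3.13), (-3.78, -3.28), and (3.58, -1.85):

import numpy as np

def himmelblau(p):
    x, y = p
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

def grad_himmelblau(p):
    # analytic gradient of the Himmelblau function
    x, y = p
    dfdx = 4 * x * (x**2 + y - 11) + 2 * (x + y**2 - 7)
    dfdy = 2 * (x**2 + y - 11) + 4 * y * (x + y**2 - 7)
    return np.array([dfdx, dfdy])

rng = np.random.default_rng(42)
for trial in range(4):
    p = rng.uniform(-5, 5, size=2)        # random initialization
    for _ in range(5000):                 # plain gradient descent, small fixed step
        p = p - 1e-3 * grad_himmelblau(p)
    print(f"start {trial}: x={p[0]:+.3f}, y={p[1]:+.3f}, f={himmelblau(p):.2e}")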

Several types of libraries exist to perform Multi-Objective or Parameter Optimization. This blog concerns one that is extremely flexible, called OpenMDAO.

What Does OpenMDAO Accomplish, and Why Is It Important?

OpenMDAO (Open-source Multidisciplinary Design Analysis and Optimization) is an open-source framework developed by NASA to facilitate multidisciplinary design, analysis, and optimization (MDAO). It provides tools for integrating various disciplines into a cohesive computational framework, enabling the design and optimization of complex engineering systems.

Key Features of OpenMDAO Integration:

OpenMDAO allows engineers and researchers to couple different models, such as aerodynamics, structures, propulsion, thermal systems, and machine learning hyperparameter models, into a unified computational graph. This integration is crucial for studying interactions and trade-offs between disciplines.

Automatic Differentiation:

A standout feature of OpenMDAO is its support for automatic differentiation, which provides accurate gradients for optimization. These gradients are essential for efficient gradient-based optimization techniques, particularly in high-dimensional design spaces. Ah that calculus 101 stuff again.

It supports various optimization methods, including gradient-based and heuristic approaches, allowing it to handle linear and non-linear problems effectively.

By making advanced optimization techniques accessible, OpenMDAO facilitates cutting-edge research in system design and pushes the boundaries of what is achievable in engineering.

Lo and Behold! OpenMDAO itself is a Python library! It is written in Python and designed for use within the Python programming environment. This allows users to leverage Python’s extensive ecosystem of libraries while building and solving multidisciplinary optimization problems.

So i had the idea to use and test OpenMDAO on The Himmelblau function. You might as well test an industry-standard library on an industry-standard function!

First things first, pip install or anaconda:

>> pip install 'openmdao[all]'

Next, since we are going to be plotting stuff within JupyterLab, and i always forget to enable it, here is the majik command:

## main code
%matplotlib inline 

Ok, let's get to the good stuff: the code.

# add your imports here:
import numpy as np
import matplotlib.pyplot as plt
from openmdao.api import Problem, IndepVarComp, ExecComp, ScipyOptimizeDriver
# NOTE: the scipy import 

# Define the OpenMDAO optimization problem - almost like self.self
prob = Problem()

# Add independent variables x and y and make a guess of X and Y:
indeps = prob.model.add_subsystem('indeps', IndepVarComp(), promotes_outputs=['*'])
indeps.add_output('x', val=0.0)  # Initial guess for x
indeps.add_output('y', val=0.0)  # Initial guess for y

# Add the Himmelblau objective function. See the equation from the Wolfram Plot?
prob.model.add_subsystem('obj_comp', ExecComp('f = (x**2 + y - 11)**2 + (x + y**2 - 7)**2'), promotes_inputs=['x', 'y'], promotes_outputs=['f'])

# Specify the optimization driver and the epsilon error bounds.  ScipyOptimizeDriver wraps the optimizers in *scipy.optimize.minimize*. In this example, we use the SLSQP optimizer to find the minimum of the objective:
prob.driver = ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.driver.options['tol'] = 1e-6

# Set design variables and bounds
prob.model.add_design_var('x', lower=-10, upper=10)
prob.model.add_design_var('y', lower=-10, upper=10)

# Add the objective function Himmelblau via promotes.output['f']:
prob.model.add_objective('f')

# Setup and run the problem and cross your fingers:
prob.setup()
prob.run_driver()

Dear Reader, You should see something like this:

Optimization terminated successfully (Exit mode 0)
Current function value: 9.495162792777827e-11
Iterations: 10
Function evaluations: 14
Gradient evaluations: 10
Optimization Complete
———————————–
Optimal x: [3.0000008]
Optimal y: [1.99999743]
Optimal f(x, y): [9.49516279e-11]

So this optimized to a minimum of the function within the bounds on x and y and the tolerance \epsilon (the tol driver option).

Now, let's look at the cool eye candy in several ways:

# Retrieve the optimized values
x_opt = prob['x']
y_opt = prob['y']
f_opt = prob['f']

print(f"Optimal x: {x_opt}")
print(f"Optimal y: {y_opt}")
print(f"Optimal f(x, y): {f_opt}")

# Plot the function and optimal point
x = np.linspace(-6, 6, 400)
y = np.linspace(-6, 6, 400)
X, Y = np.meshgrid(x, y)
Z = (X**2 + Y - 11)**2 + (X + Y**2 - 7)**2

plt.figure(figsize=(8, 6))
contour = plt.contour(X, Y, Z, levels=50, cmap='viridis')
plt.clabel(contour, inline=True, fontsize=8)
plt.scatter(x_opt, y_opt, color='red', label='Optimal Point')
plt.title("Contour Plot of f(x, y) with Optimal Point")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.colorbar(contour)
plt.show()

Now, let's try something that looks a little more exciting:

import numpy as np
import matplotlib.pyplot as plt

# Define the function
def f(x, y):
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# Generate a grid of x and y values
x = np.linspace(-6, 6, 500)
y = np.linspace(-6, 6, 500)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)

# Plot the function
plt.figure(figsize=(8, 6))
plt.contourf(X, Y, Z, levels=100, cmap='magma')  # Gradient color
plt.colorbar(label='f(x, y)')
plt.title("Plot of f(x, y) = (x² + y - 11)² + (x + y² - 7)²")
plt.xlabel("x")
plt.ylabel("y")
plt.show()

That is cool looking.

Ok, let's take this even further:

We can compare it to the Wolfram Function 3D plot:

from mpl_toolkits.mplot3d import Axes3D

# Create a 3D plot
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(111, projection='3d')

# Plot the surface
ax.plot_surface(X, Y, Z, cmap='magma', edgecolor='none', alpha=0.9)

# Labels and title
ax.set_title("3D Plot of f(x, y) = (x² + y - 11)² + (x + y² - 7)²")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("f(x, y)")

plt.show()

Which gives you a 3D plot of the function:

3D Plot of f(x, y) = (x² + y - 11)² + (x + y² - 7)²
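One last quick experiment, Dear Reader. Himmelblau has four global minima, and which one the SLSQP driver converges to depends on the initial guess. As a hedged follow-up sketch, assuming the prob object from the listing above is still in scope, we can re-seed the design variables with OpenMDAO's set_val and run the driver again:

# re-seed the starting point and re-run the same driver
prob.set_val('x', -4.0)
prob.set_val('y', -4.0)
prob.run_driver()

print("Optimal x:", prob.get_val('x'))
print("Optimal y:", prob.get_val('y'))
print("Optimal f(x, y):", prob.get_val('f'))

With that starting point, the driver should land near the (-3.78, -3.28) minimum rather than (3, 2).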

While this was a toy example for OpenMDAO, it is also a critical tool for advancing multidisciplinary optimization in engineering. Its robust capabilities, open-source nature, and focus on efficient computation of derivatives make it invaluable for researchers and practitioners seeking to tackle the complexities of modern system design.

i hope you find it useful.

Until Then,

#iwishyouwater <- The EDDIE – the most famous big wave contest – was run this year. i saw it on the beach in 2004 and got washed across the rivermouth by a 60ft clean-up set that washed out the river.

@tctjr

Music To Blog By: Godspeed You! Black Emperor, “No Title As of 13 February 2024” – great band if you enjoy atmospheric compositional music.

How One Of The G.O.A.T.(s) Changed My Life

A mentor is someone who sees more talent and ability within you, than you see in yourself, and helps bring it out of you.

Bob Proctor
The Religious Tomes Of Digital Audio by Professor Ken Pohlmann

First, i trust this finds everyone well. All kinds of craziness abound in the world; for those affected by recent events, my condolences. Second, I was compelled to write a blog after some commentary on LinkedIn concerning mentors and people who changed some of our lives.

You can find the discussion here. <- Click

Dear Reader, this is a very personal blog, so bear with me; i have told few, if any, this story. Oftentimes, the Universe speaks, and when it does, listen.

i had the extreme luxury and luck to attend graduate school at The University Of Miami Frost School Of Music, specializing in Music Engineering. Here is a little history copypasta’d from the website:

“The Graduate Music Engineering Technology degree (GMUE) was introduced in 1986 and has consistently placed graduates into high-tech engineering fields that emphasize audio technology, usually in audio software and hardware design engineering and product engineering or development. Our graduates have enjoyed employment at companies specifically aimed at high-tech audio such as Sonos, Amazon Lab126, Avid, Universal Audio, Soundtoys, iZotope, Waves LLC, Smule, Apple, Facebook Reality Labs, Microsoft, Eventide, Bose, Shure, Dolby Laboratories, Roland, Beats by Dr. Dre, Spotify, Harman International, JBL, Analog Devices, Biamp, QSC, Motorola, Texas Instruments, Cirrus Logic, Audio Precision, and many more.

In most cases, applicants to the M.S. in Music Engineering Technology typically hold a bachelor of science degree in electrical engineering, computer engineering, computer science, math, physics, or other hard sciences and are passionate about combining their love of music and engineering. A few hold dual degrees in music and other engineering/technology areas. The Music Engineering Technology program enjoys being part of a world-class, top-ranked School of Music, and students may become licensed to use the new $1.2 million state-of-the-art recording studio if they wish.”

I would rather be blind than deaf.

Handel from “Listening”

In 1987, Oh Dear Reader, i had a “really good job” with GE Medical Systems working in the Magnetic Resonance Imaging and Cat Scan field service organization. Yet i longed to truly understand the science and perception of how we as humans process sound physically, neuro-scientifically, and mentally, and then how we design products to reproduce the creation of sound to its fullest extent. I loved mixing sound and thought it would be the end-all to work at a “mixing desk” manufacturer such as MCI in Fort Lauderdale, whose consoles were used at Criteria Studios by groups such as The Allman Brothers; to me, that was the pinnacle of audio engineering. i was also particularly fascinated with the perception of reverberation and accurate modeling of acoustics. In undergraduate school i did an extracurricular paper on digital audio circa 1985, where I analyzed analog-to-digital and digital-to-analog recording techniques. The paper discussed the Shannon sampling theorem and the science of sampling a sound to reconstruct it in full digital form. i also discussed how in the future most (or so i surmised) sound would eventually be played on a chip or transmitted with no medium. i also created a fiber optic transmission network to transmit and modify my voice. However, the “riff” of the paper compelled me.

Said pedantic paper figure 1.1

One day i was sitting in Little Havana, Miami, FL (where i resided, not far from Crescent Moon Studios), listening to Al Di Meola’s Elegant Gypsy album and reading an article by a human named Professor Ken Pohlmann. The year was 1989. The magazine was Mix Magazine, as i “used to be” a recording engineer, having graduated from Full Sail Of The Recording Arts and then gone on to obtain a BSEET at DeVry Institute of Technology. i still kept up on recording and live sound, and every once in a while i would mix for someone.

As they say, I am a recovering sound engineer now.

Mentoring is a brain to pick, an ear to listen, and a push in the right direction.

John Crosby

At the end of the article, it said something to the effect:

“Professor Ken Pohlmann is the founder of the prestigious program for the Graduate School Of Music Engineering at the University Of Miami, where he teaches Propeller Heads to create world-class digital effects.” Apologies, folks, i’m going off memory here, but i specifically remember reading the article and thinking “ok, i am going to drive down to Coral Gables, all two miles, and walk in and ask for Professor Pohlmann to accept me into the program.”

i walked in and asked for Professor Pohlmann. The nice woman at the desk said, let me see if he is here. She said yes, he is, and he will see you now.

Aw hell, game on.

He sat down with me and asked what he could do for me. i still remember i was “dressed” in a tie with braces (suspenders) and a full button-down shirt with tassel dress shoes (full corporate mode). Yes, tassel loafers.

i said “i want you to accept me into your program and when i get out i am going to work for (this) company and build reverberation algorithms.” i showed him the Mix Magazine where he was mentioned and in the back of Mix Magazine was an advertisement for a “startup” audio company called digidesign. i also showed him my paper on Digital Audio Recording and Editing circa 1985.

(NOTE: If you never ask for the biggest piece of cake you never get it. Worse thing he could say was no.)

He was really cool in his response. He said, well, i appreciate the passion, but you need to go through all of the process, and he gave me all the paperwork: take the GRE, etc.

i was also acutely aware that i was a mutt compared to the other students, as he only accepted two per year out of several high-pedigree applicants. Most of the students were from real engineering schools.

i’ll never forget when i called to see if i was accepted. i called and the woman said: “Theodore Tanner Jr., right? Oh yes, you can start fall of 1990.”

I RESIGNED from GE right after the phone call.

Fast forward to the year 1992. My friend Toby Dunn and i were sitting in MTC 667, the graduate thesis class for Professor Ken Pohlmann.

Toby and i had done all kinds of awesome projects over the two years at UMiami, but now we were sitting in the classroom, breeze coming in, watching the palm trees and chatting about who knows what, waiting for the GOAT.

Professor Pohlmann walks in with a stack of books and sits down and says:

“What do you guys want to talk about? This class is about thinking up brilliant ideas and taking them into execution and also publishing your thesis at a conference.”

“Which conference?” i asked.

He said: “The Audio Engineering Society Conference this coming Fall.”

We both laughed. I specifically remember thinking back to the day when I didn’t even understand most of Stereo Review Magazine in high school, and now it reads like The Cat in the Hat, BUT the AES Conference is THE SUPER BOWL OF AUDIO ENGINEERING?!

He said: “What are you laughing at? If you don’t get the paper accepted and given at the conference, you can’t graduate as it’s most of the grade along with your thesis and discussion here in class.”

“We haven’t even gotten started on our thesis or even selected a subject,” i said.

He then said: “I asked what do you want to talk about and you didn’t say anything.”

He sat there in silence for a while, then picked up his books and said: “i don’t have time for this.”

He got up and left.

Toby and I just sat there (this was before the acronym WTF), but that was the look on our faces. WTF?

We sat there for a while and then i got the courage up to go into his office.

i felt like Charlie walking up to Willy Wonka.

“Professor Pohlmann?” i said tentatively, “i think we are ready to talk ideas.”

He came back in, sat on the desk, and said (and i will never, ever forget this…):

“You two are the people that will change this industry and as such you are expected to come up with the ideas that can be executed upon and that is what i expect from you now as that is what will be expected of you in industry.”

Thus, Spake The GOAT. Amen.

We then had an amazing conversation of thesis topics.

Toby presented his paper on noise reduction, which was amazing. I presented my paper on Subband audio coding methods at the AES in New York in 1992, complete with an AES scholarship stipend. I also got to hang out with Jeff Beck and Les Paul at a Toys R Us BASF party, but that is another story.

We then went on to work for digidesign circa 1992. Toby is one of the most amazing signal-processing audio engineers in the industry. He was at Digidesign for 20 years and is now at Universal Audio. He wrote the original noise reduction plugin for Digidesign on Sound Designer and worked on the digital audio engine as well as several of the stock plugins (dynamics, chorus/flange, etc.).

Excerpt from 1985 Neophyte paper 1.2 and 1.3

Side Note: One cool thing: i got to personally tell Al Di Meola and Steve Vai that i assisted in creating some of the original Pro Tools and Sound Designer plugins and APIs while listening to Elegant Gypsy and Passion Grace and Warfare. One of them is the same album i mentioned at the beginning of this blog. Also, if you’re not familiar, both are GOATs of guitar.

Oh, and one more thing—I worked at Criteria Studios for a while and got to mix on the MCI console in Studio C, which was used to record several famous albums; that was a full-circle moment for me professionally.

Then, later on, in 1993, another mentor, Phil Ramone, called me (yes, that Phil; he called me his 8th child…) while I was working on the Protron Plugin at the amazing company called Crystal River Engineering, founded by Scott Foster. Scott Foster originated interpolated Head Related Transfer Function six-degrees-of-freedom spatial audio for Jaron Lanier’s VPL Research and Dr. Beth Wenzel at NASA Ames Research Lab, and essentially started fully localized spatial audio. Phil called me to come down to Crescent Moon Studios (Gloria Estefan and The Miami Sound Machine) and listen to the Duets album he was mixing. He wanted me to analyze the reverb tails going through the defunct AT&T DISQ system versus a Neve VI console. He used three EMT reverbs (left, center, right) fed back into each other. i knew this technique previously and used it in the original D-Verb.

To anyone reading this, find your passion and execute those brilliant ideas. Find the right mentor who will push you beyond anything you ever thought possible.

i am lucky enough to have had several mentors in my life. However, it all started with someone taking a chance on me.

Toby, if you are out there, i hope you and Sue and the family are well.

To the GOAT, Professor Ken Pohlmann. Thank you for that day. Without it i would not be where i am, and i cannot thank you enough for taking a chance on me when i knew damn good and well i didn’t have the resume or pedigree to compete at the scholastic level. However, I do hope I have made up for the deficiencies since that time.

Be safe.

Until Then,

#iwishyouwater (thunders in the Mentawais with a yacht)

@tctjr

Muzak To Blog By: Bach: Goldberg Variations, BWV 988 (The 1955 & 1981 Recordings). Dear Reader, tread lightly within the aural halls; there are several caves you can go into here with his interpretations. Enjoy. For those that know, you know.

SnakeByte[17] The Metropolis Algorithm

Frame Grab From the movie Metropolis 1927

Who told you to attack the machines, you fools? Without them you’ll all die!!

~ Grot, the Guardian of the Heart Machine

First, as always, Oh Dear Reader, i hope you are safe. There are many unsafe places in and around the world at this current time. Second, this blog is a SnakeByte[] based on something that i knew about but had no idea it was called by this name.

Third, relative to this, i must confess, Oh, Dear Reader, i have a disease of the bibliomaniac kind. i have an obsession with books and reading. “They” say that belief comes first, followed by admission. There is a Japanese word that translates to having so many books you cannot possibly read them all. This word is tsundoku. From the website (if you click on the word):

“Tsundoku dates from the Meiji era, and derives from a combination of tsunde-oku (to let things pile up) and dokusho (to read books). It can also refer to the stacks themselves. Crucially, it doesn’t carry a pejorative connotation, being more akin to bookworm than an irredeemable slob.”

Thus, while perusing a math-related book site, i came across a monograph entitled “The Metropolis Algorithm: Theory and Examples” by C Douglas Howard [1].

i was intrigued, and because it was 5 bucks (Side note: i always try to buy used and loved books), i decided to throw it into the virtual shopping buggy.

Upon receiving said monograph, i sat down to read it, and i was amazed to find it was closely related to something I was very familiar with from decades ago. This finally brings us to the current SnakeByte[].

The Metropolis Algorithm is a method in computational statistics used to sample from complex probability distributions. It is a type of Markov Chain Monte Carlo (MCMC) algorithm (i had no idea), which relies on Markov Chains to generate a sequence of samples that can approximate a desired distribution, even when direct sampling is complex. Yes, let me say that again – i had no idea. Go ahead LazyWebTM laugh!

So let us start with the Metropolis Algorithm and how it relates to Markov Chains. (Caveat Emptor: You will need to dig out those statistics books and a little linear algebra.)

Markov Chains Basics

A Markov Chain is a mathematical system that transitions from one state to another in a state space. It has the property that the next state depends only on the current state, not the sequence of states preceding it. This is called the Markov property. The algorithm was introduced by Metropolis et al. (1953) in a Statistical Physics context and was generalized by Hastings (1970). It was considered in the context of image analysis (Geman and Geman, 1984) and data augmentation (Tanner (I’m not related that i know of…) and Wong, 1987). However, its routine use in statistics (especially for Bayesian inference) did not take place until Gelfand and Smith (1990) popularised it. For modern discussions of MCMC, see e.g. Tierney (1994), Smith and Roberts (1993), Gilks et al. (1996), and Roberts and Rosenthal (1998b).

Ergo, the name Metropolis-Hastings algorithm. Once again, i had no idea.

Anyhow,

A Markov Chain can be described by a set of states S and a transition matrix P , where each element P_{ij} represents the probability of transitioning from state i to state j .
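
To make the transition-matrix idea concrete, here is a minimal numpy sketch (mine, not from the monograph) of a three-state chain: simulate it for a while, then recover its stationary distribution as the left eigenvector of P for eigenvalue 1. The specific matrix is illustrative only.

import numpy as np

# A toy 3-state Markov Chain. Rows of P sum to 1, and P[i, j] is the
# probability of transitioning from state i to state j.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.2, 0.3, 0.5],
])

# Simulate the chain: the next state depends only on the current state.
rng = np.random.default_rng(0)
state = 0
visits = np.zeros(3)
for _ in range(100_000):
    state = rng.choice(3, p=P[state])
    visits[state] += 1
print("Empirical occupancy:", visits / visits.sum())

# The stationary distribution pi satisfies pi = pi P, i.e., it is the left
# eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print("Stationary distribution:", pi)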

The Goal: Sampling from a Probability Distribution \pi(x)

In many applications (e.g., statistical mechanics, Bayesian inference, as mentioned), we are interested in sampling from a complex probability distribution \pi(x). This distribution might be difficult to sample from directly, but we can use a Markov Chain to create a sequence of samples that, after a certain period (called the burn-in period), will approximate \pi(x) .

Ok Now: The Metropolis Algorithm

The Metropolis Algorithm is one of the simplest MCMC algorithms to generate samples from \pi(x). It works by constructing a Markov Chain whose stationary distribution is the desired probability distribution \pi(x) . A stationary distribution is a probability distribution that remains the same over time in a Markov chain. Thus it can describe the long-term behavior of a chain, where the probabilities of being in each state do not change as time passes. (Whatever time is, i digress.)

The key steps of the algorithm are:

Initialization

Start with an initial guess x_0 , a point in the state space. This point can be chosen randomly or based on prior knowledge.

Proposal Step

From the current state x_t , propose a new state x^* using a proposal distribution q(x^*|x_t) , which suggests a candidate for the next state. This proposal distribution can be symmetric (e.g., a normal distribution centered at x_t ) or asymmetric.

Acceptance Probability

Calculate the acceptance probability \alpha for moving from the current state x_t to the proposed state x^* :

    \[\alpha = \min \left(1, \frac{\pi(x^*) q(x_t | x^*)}{\pi(x_t) q(x^* | x_t)} \right)\]

In the case where the proposal distribution is symmetric (i.e., q(x^*|x_t) = q(x_t|x^*) ), the formula simplifies to:

    \[\alpha = \min \left(1, \frac{\pi(x^*)}{\pi(x_t)} \right)\]

Acceptance or Rejection

  • Generate a random number u from a uniform distribution U(0, 1) .
  • If u \leq \alpha , accept the proposed state x^* , i.e., set x_{t+1} = x^* .
  • If u > \alpha , reject the proposed state and remain at the current state, i.e., set x_{t+1} = x_t .

Repeat

Repeat the proposal, acceptance, and rejection steps to generate a Markov Chain of samples.

Convergence and Stationary Distribution:

Over time, as more samples are generated, the Markov Chain converges to a stationary distribution. The stationary distribution is the target distribution \pi(x) , meaning the samples generated by the algorithm will approximate \pi(x) more closely as the number of iterations increases.

Applications:

The Metropolis Algorithm is widely used in various fields such as Bayesian statistics, physics (e.g., in the simulation of physical systems), machine learning, and finance. It is especially useful for high-dimensional problems where direct sampling is computationally expensive or impossible.

Key Features of the Metropolis Algorithm:

  • Simplicity: It’s easy to implement and doesn’t require knowledge of the normalization constant of \pi(x) , which can be difficult to compute.
  • Flexibility: It works with a wide range of proposal distributions, allowing the algorithm to be adapted to different problem contexts.
  • Efficiency: While it can be computationally demanding, the algorithm can provide high-quality approximations to complex distributions with well-chosen proposals and sufficient iterations.

The Metropolis-Hastings Algorithm is a more general version that allows for non-symmetric proposal distributions, expanding the range of problems the algorithm can handle.
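
Before the full example below, here is a minimal sketch of just the Metropolis-Hastings acceptance step with a non-symmetric proposal, so the q-correction term is visible. The Exponential(1) target and log-normal proposal are my illustrative choices, not anything from the references above.

import numpy as np

def mh_acceptance(x_t, x_star, log_pi, log_q):
    # log_pi(x): log of the (unnormalized) target density.
    # log_q(a, b): log density of proposing a when the current state is b.
    log_alpha = (log_pi(x_star) + log_q(x_t, x_star)) - (log_pi(x_t) + log_q(x_star, x_t))
    return 1.0 if log_alpha >= 0 else np.exp(log_alpha)

# Illustrative target: Exponential(1) on x > 0.
log_pi = lambda x: -x if x > 0 else -np.inf

# Asymmetric proposal: log-normal centered (in log space) at the current state.
sigma = 0.5
log_q = lambda a, b: -((np.log(a) - np.log(b)) ** 2) / (2 * sigma ** 2) - np.log(a)

rng = np.random.default_rng(1)
x, samples = 1.0, []
for _ in range(5000):
    x_star = np.exp(np.log(x) + sigma * rng.normal())
    if rng.random() < mh_acceptance(x, x_star, log_pi, log_q):
        x = x_star
    samples.append(x)

print("Sample mean (should be near 1 for Exponential(1)):", np.mean(samples))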

Now let us code it up:

i am going to assume the underlying distribution is Gaussian with a time-dependent mean \mu_t, which changes slowly over time. We’ll use a simple time-series analytics setup to sample this distribution using the Metropolis Algorithm and plot the results. Note: When the target distribution is Gaussian (or close to Gaussian), the algorithm can converge more quickly to the true distribution because of the symmetric smooth nature of the normal distribution.

import numpy as np
import matplotlib.pyplot as plt

# Time-dependent mean function (example: sinusoidal pattern)
def mu_t(t):
    return 10 * np.sin(0.1 * t)

# Target distribution: Gaussian with time-varying mean mu_t and fixed variance
def target_distribution(x, t):
    mu = mu_t(t)
    sigma = 1.0  # Assume fixed variance for simplicity
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Metropolis Algorithm for time-series sampling
def metropolis_sampling(num_samples, initial_x, proposal_std, time_steps):
    samples = np.zeros(num_samples)
    samples[0] = initial_x

    # Iterate over the time steps
    for t in range(1, num_samples):
        # Propose a new state based on the current state
        x_current = samples[t - 1]
        x_proposed = np.random.normal(x_current, proposal_std)

        # Acceptance probability (Metropolis-Hastings step)
        acceptance_ratio = target_distribution(x_proposed, time_steps[t]) / target_distribution(x_current, time_steps[t])
        acceptance_probability = min(1, acceptance_ratio)

        # Accept or reject the proposed sample
        if np.random.rand() < acceptance_probability:
            samples[t] = x_proposed
        else:
            samples[t] = x_current

    return samples

# Parameters
num_samples = 10000  # Total number of samples to generate
initial_x = 0.0      # Initial state
proposal_std = 0.5   # Standard deviation for proposal distribution
time_steps = np.linspace(0, 1000, num_samples)  # Time steps for temporal evolution

# Run the Metropolis Algorithm
samples = metropolis_sampling(num_samples, initial_x, proposal_std, time_steps)

# Plot the time series of samples and the underlying mean function
plt.figure(figsize=(12, 6))

# Plot the samples over time
plt.plot(time_steps, samples, label='Metropolis Samples', alpha=0.7)

# Plot the underlying time-varying mean (true function)
plt.plot(time_steps, mu_t(time_steps), label='True Mean \\mu_t', color='red', linewidth=2)

plt.title("Metropolis Algorithm Sampling with Time-Varying Gaussian Distribution")
plt.xlabel("Time")
plt.ylabel("Sample Value")
plt.legend()
plt.grid(True)
plt.show()

Output of Python Script Figure 1.0

Ok, What’s going on here?

For the Target Distribution:

The function mu_t(t) defines a time-varying mean for the distribution. In this example, it follows a sinusoidal pattern.
The function target_distribution(x, t) models a Gaussian distribution with mean \mu_t and a fixed variance (set to 1.0).


Metropolis Algorithm:

The metropolis_sampling function implements the Metropolis algorithm. It iterates over time, generating samples from the time-varying distribution. The acceptance probability is calculated using the target distribution at each time step.


Proposal Distribution:

A normal distribution centered around the current state with standard deviation proposal_std is used to propose new states.


Temporal Evolution:

The time steps are generated using np.linspace to simulate temporal evolution, which can be used in time-series analytics.


Plot The Results:

The results are plotted, showing the samples generated by the Metropolis algorithm as well as the true underlying mean function \mu_t (in red).

The plot shows the Metropolis samples over time, which should cluster around the time-varying mean \mu_t of the distribution. As time progresses, the samples follow the red curve (the true mean), with time moving on like an arrow in this case.

Now you are probably asking, “Hey, is there a more Pythonic library way to do this?” Oh Dear Reader, i am glad you asked! Yes There Is A Python Library! AFAIC, PyMC started it all. Most probably know it as PyMC3 (formerly known as…). There is a great writeup here: History of PyMC.

We are in the golden age of probabilistic programming.

~ Chris Fonnesbeck (creator of PyMC) 

Let’s convert it using PyMC. Steps to Conversion:

  1. Define the probabilistic model using PyMC’s modeling syntax.
  2. Specify the Gaussian likelihood with the time-varying mean \mu_t .
  3. Use PyMC’s built-in Metropolis sampler.
  4. Visualize the results similarly to how we did earlier.
import pymc as pm
import numpy as np
import matplotlib.pyplot as plt

# Time-dependent mean function (example: sinusoidal pattern)
def mu_t(t):
    return 10 * np.sin(0.1 * t)

# Set random seed for reproducibility
np.random.seed(42)

# Number of time points and samples
num_samples = 10000
time_steps = np.linspace(0, 1000, num_samples)

# PyMC model definition
with pm.Model() as model:
    # Prior for the time-varying parameter (mean of Gaussian)
    mu_t_values = mu_t(time_steps)

    # Observational model: Normally distributed samples with time-varying mean and fixed variance
    sigma = 1.0  # Fixed variance
    x = pm.Normal('x', mu=mu_t_values, sigma=sigma, shape=num_samples)

    # Use the Metropolis sampler explicitly
    step = pm.Metropolis()

    # Run MCMC sampling with the Metropolis step
    samples_all = pm.sample(num_samples, tune=1000, step=step, chains=5, return_inferencedata=False)

# Extract a single draw (one vector over all time steps) for plotting
samples = samples_all['x'][0]  # first draw from the combined trace

# Plot the time series of samples and the underlying mean function
plt.figure(figsize=(12, 6))

# Plot the samples over time
plt.plot(time_steps, samples, label='PyMC Metropolis Samples', alpha=0.7)

# Plot the underlying time-varying mean (true function)
plt.plot(time_steps, mu_t(time_steps), label='True Mean \\mu_t', color='red', linewidth=2)

plt.title("PyMC Metropolis Sampling with Time-Varying Gaussian Distribution")
plt.xlabel("Time")
plt.ylabel("Sample Value")
plt.legend()
plt.grid(True)
plt.show()

When you execute this code you will see the following status bar:

It will be a while. Go grab your favorite beverage and take a walk…..

Output of Python Script Figure 1.1

Key Differences from the Previous Code:

PyMC Model Definition:
In PyMC, the model is defined using the pm.Model() context. The x variable is defined as a Normal distribution with the time-varying mean \mu_t . Instead of manually implementing the acceptance probability, PyMC handles this automatically with the specified sampler.

Metropolis Sampler:
PyMC allows us to specify the sampling method. Here, we explicitly use the Metropolis algorithm with pm.Metropolis().

Samples Parameter:
We specify shape=num_samples in the pm.Normal() distribution to indicate that we want a series of samples for each time step.

Plotting:
The resulting plot will show the sampled values using the PyMC Metropolis algorithm compared with the true underlying mean, similar to the earlier approach. Now, samples has the same shape as time_steps (in this case, both with 10,000 elements), allowing you to plot the sample values correctly against the time points; otherwise, the x and y axes would not align.

NOTE: We used this library at one of our previous health startups with great success.

Several optimizations are available here. The default sampler in PyMC is called NUTS (the No-U-Turn Sampler).
With NUTS there is no need to manually set the number of leapfrog steps; it automatically determines the optimal number of steps for each iteration, preventing inefficient or divergent sampling. NUTS stops a trajectory when it detects that the particle is about to turn back on itself (i.e., when the trajectory “U-turns”). A U-turn means that continuing in the same direction would result in redundant exploration of the space and inefficient sampling, so when NUTS detects this, it terminates the trajectory early, preventing unnecessary steps. Acceptance rates at convergence are also higher.
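
As a hedged sketch of what that looks like in practice, the same model as above can simply omit the step argument and let PyMC fall back to NUTS. The grid size and draw counts below are my own smaller, illustrative choices so it finishes quickly.

import numpy as np
import pymc as pm

# Same sinusoidal time-varying mean as above, on a smaller grid for speed.
n_time = 500
time_steps = np.linspace(0, 1000, n_time)
mu_t_values = 10 * np.sin(0.1 * time_steps)

with pm.Model() as model:
    x = pm.Normal("x", mu=mu_t_values, sigma=1.0, shape=n_time)
    # No step argument: PyMC assigns NUTS to continuous variables by default,
    # so there is no hand-tuning of leapfrog steps or trajectory lengths.
    idata = pm.sample(draws=1000, tune=1000, chains=2)

# posterior["x"] has shape (chains, draws, n_time)
print(idata.posterior["x"].shape)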

There are several references to this set of algorithms. It is truly a case of both mathematical and computational elegance.

Of course you have to know what the name means. They say words have meanings. Then again one cannot know everything.

Until Then,

#iwishyouwater <- Of all places Alabama getting the memo From Helene 2024

𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr) / X

Music To Blog By: View From The Magicians Window, The Psychic Circle

References:

[1] The Metropolis Algorithm: Theory and Examples by C Douglas Howard

[2] The Metropolis-Hastings Algorithm: A note by Danielle Navarro

[3] Github code for Sample Based Inference by bashhwu

Entire Metropolis Movie For Your Viewing Pleasure. (AFAIC the most amazing sci-fi movie besides Blade Runner)

What Would Nash,Shannon,Turing, Wiener and von Neumann Think?

An image of the folks mentioned above via the GAN du jour

First, as usual, i trust everyone is safe. Second, I’ve been “thoughting” a good deal about how the world is being eaten by software and, recently, machine learning. i personally have a tough time using the words artificial intelligence.

What Would Nash, Shannon, Turing, Wiener, and von Neumann Think of Today’s World?

The modern world is a product of the mathematical and scientific brilliance of a handful of intellectual pioneers, whom i call the Horsemen of The Digital Future. i consider these humans my heroes and persons i aspire to be like, though most of us have not accomplished one-quarter of the work product these humans created for humanity. Among these giants are Dr. John Nash, Dr. Claude Shannon, Dr. Alan Turing, Dr. Norbert Wiener, and Dr. John von Neumann. Each of them, in their own way, laid the groundwork for concepts that now define our digital and technological age: game theory, information theory, artificial intelligence, cybernetics, and computing. But what would they think if they could see how their ideas, theories, and creations have shaped the 21st century?

A little context.

John Nash: The Game Theorist

John Nash revolutionized economics, mathematics, and strategic decision-making through his groundbreaking work in game theory. His Nash Equilibrium describes how parties, whether they be countries, companies, or individuals, can find optimal strategies in competitive situations. Today, his work influences fields as diverse as economics, politics, and evolutionary biology. NOTE: Computational Consensus Not So Hard; Carbon (Human) Consensus Nigh Impossible.

The Nash equilibrium is the set of strategies

    \[(E_i^*, E_j^*)\]

such that, if both players adopt it, neither player can achieve a higher payoff by unilaterally changing strategies. Therefore, two rational agents should be expected to pick the Nash equilibrium as their strategy.

If Nash were alive today, he would be amazed at how game theory has permeated decision-making in technology, particularly in algorithms used for machine learning, cryptocurrency trading, and even optimizing social networks. His equilibrium models are at the heart of competitive strategies used by businesses and governments alike. With the rise of AI systems, Nash might ponder the implications of intelligent agents learning to “outplay” human actors and question what ethical boundaries should be set when AI is used in geopolitical or financial arenas.
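
As a toy illustration (my example, not Nash’s), here is a brute-force best-response check for pure-strategy equilibria in a 2×2 game, using the classic Prisoner’s Dilemma payoffs:

import numpy as np

# Prisoner's Dilemma payoffs. Rows are player A's strategies, columns are
# player B's. Strategy 0 = cooperate, 1 = defect.
A = np.array([[-1, -3],
              [ 0, -2]])   # payoffs to player A
B = np.array([[-1,  0],
              [-3, -2]])   # payoffs to player B

def is_nash(i, j):
    # (i, j) is a Nash equilibrium if neither player gains by a unilateral deviation.
    a_best = A[i, j] >= A[:, j].max()   # A cannot do better by switching rows
    b_best = B[i, j] >= B[i, :].max()   # B cannot do better by switching columns
    return a_best and b_best

equilibria = [(i, j) for i in range(2) for j in range(2) if is_nash(i, j)]
print("Pure-strategy Nash equilibria:", equilibria)   # -> [(1, 1)]: defect/defect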

Claude Shannon: The Father of Information Theory

Claude Shannon’s work on information theory is perhaps the most essential building block of the digital age. His concept of representing and transmitting data efficiently set the stage for everything from telecommunications to the Internet as we know it. Shannon predicted the rise of digital communication and laid the foundations for the compression and encryption algorithms protecting our data. He also is the father of my favorite equation mapping the original entropy equation from thermodynamics to channel capacity:

    \[H = -\sum_{i=1}^{N} P_i \log_2 P_i\]

The sheer elegance and magnitude are unprecedented. If he were here, Shannon would witness the unprecedented explosion of data, quantities, and speeds far beyond what was conceivable in his era. The Internet of Things (IoT), big data analytics, 5G/6G networks, and quantum computing are evolutions directly related to his early ideas. He might also be interested in cybersecurity challenges, where information theory is critical in protecting global communications. Shannon would likely marvel at the sheer volume of information we produce yet be cautious of the potential misuse and the ethical quandaries regarding privacy, surveillance, and data ownership.
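
As a quick illustrative sketch, here is that entropy computed in a few lines of Python (the example distributions are mine):

import numpy as np

def shannon_entropy(p):
    # H = -sum_i p_i log2 p_i, in bits; zero-probability terms contribute nothing.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0.5, 0.5]))     # 1.0 bit   (fair coin)
print(shannon_entropy([0.9, 0.1]))     # ~0.47 bits (a biased coin carries less information)
print(shannon_entropy([0.25] * 4))     # 2.0 bits  (fair four-sided die)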

Alan Turing: The Architect of Artificial Intelligence

Alan Turing’s vision of machines capable of performing any conceivable task laid the foundation for modern computing and artificial intelligence. His Turing Machine is still a core concept in the theory of computation, and his famous Turing Test continues to be a benchmark in determining machine intelligence.

In today’s world, Turing would see his dream of intelligent machines realized—and then some. From self-driving cars to voice assistants like Siri and Alexa, AI systems are increasingly mimicking human capabilities in specific tasks like data analysis, pattern recognition, and simple problem-solving. While Turing would likely be excited by this progress, he might also wrestle with the ethical dilemmas arising from AI, such as autonomy, job displacement, and the dangers of highly autonomous AI systems, as well as calling bluff on claims that LLM systems reason in the same manner as human cognition, given that they base their results on probabilistic optimization. His work on breaking the Enigma code might inspire him to delve into modern cryptography and cybersecurity challenges as well. His reaction-diffusion model, Turing’s morphogenesis equations, is foundational in explaining pattern formation in biological systems:

Turing’s reaction-diffusion system is typically written as a system of partial differential equations (PDEs):

    \[\frac{\partial u}{\partial t} = D_u \nabla^2 u + f(u, v),\]

    \[\frac{\partial v}{\partial t} = D_v \nabla^2 v + g(u, v),\]

where:

  • u and v are concentrations of two chemical substances (morphogens),
  • D_u and D_v are diffusion coefficients for u and v ,
  • \nabla^2 is the Laplacian operator, representing spatial diffusion,
  • f(u, v) and g(u, v) are reaction terms representing the interaction between u and v .
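
To see the morphogenesis idea in action, here is a minimal finite-difference sketch in one dimension. The reaction terms follow the Gray-Scott model, one common choice for f(u, v) and g(u, v); the parameters and grid are my own illustrative picks, not tuned to a particular pattern:

import numpy as np

# 1-D Gray-Scott reaction-diffusion: f(u, v) = -u v^2 + F (1 - u),
# g(u, v) = u v^2 - (F + k) v. Parameters are illustrative only.
n, steps, dt = 200, 5000, 1.0
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060

u = np.ones(n)
v = np.zeros(n)
u[n // 2 - 10 : n // 2 + 10] = 0.50   # perturb a small patch in the middle
v[n // 2 - 10 : n // 2 + 10] = 0.25

def laplacian(a):
    # Discrete nabla^2 with periodic boundaries.
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)

print("u range:", u.min(), u.max())
print("v range:", v.min(), v.max())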

In addition to this, his contributions to cryptography and game theory alone are unfathomable.
In his famous paper, “Computing Machinery and Intelligence,” Turing posed the question, “Can machines think?” He proposed the Turing Test as a way to assess whether a machine can exhibit intelligent behavior indistinguishable from a human. This test has been a benchmark in AI for evaluating a machine’s ability to imitate human intelligence.

Given the recent advances made with large language models, I believe he would find them amusing, though he would not grant that they think or reason.

Norbert Wiener: The Father of Cybernetics

Norbert Wiener’s theory of cybernetics explored the interplay between humans, machines, and systems, particularly how systems could regulate themselves through feedback loops. His ideas greatly influenced robotics, automation, and artificial intelligence. He wrote the books “Cybernetics” and “The Human Use of Human Beings”. During World War II, his work on the automatic aiming and firing of anti-aircraft guns caused Wiener to investigate information theory independently of Claude Shannon and to invent the Wiener filter. (The now-standard practice of modeling an information source as a random process—in other words, as a variety of noise—is due to Wiener.) Initially, his anti-aircraft work led him to write, with Arturo Rosenblueth and Julian Bigelow, the 1943 article “Behavior, Purpose and Teleology.” He was also a complete pacifist. What was said about those who can hold two opposing views?

If Wiener were alive today, he would be fascinated by the rise of autonomous systems, from drones to self-regulated automated software, and the increasing role of cybernetic organisms (cyborgs) through advancements in bioengineering and robotic prosthetics. He, I would think, would also be amazed that we could do real-time frequency domain filtering based on his theories. However, Wiener’s warnings about unchecked automation and the need for human control over machines would likely be louder today. He might be deeply concerned about the potential for AI-driven systems to exacerbate inequalities or even spiral out of control without sufficient ethical oversight. The interaction between humans and machines in fields like healthcare, where cybernetics merges with biotechnology, would also be a keen point of interest for him.
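
As a small nod to that filtering legacy, here is an illustrative sketch using SciPy’s wiener function to denoise a synthetic signal (the signal and noise level are my own choices):

import numpy as np
from scipy.signal import wiener

# Synthetic "clean" signal plus additive Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * rng.normal(size=t.size)

# scipy.signal.wiener applies an adaptive, local-statistics Wiener filter.
filtered = wiener(noisy, mysize=29)

print("noisy  RMSE:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("wiener RMSE:", np.sqrt(np.mean((filtered - clean) ** 2)))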

John von Neumann: The Architect of Modern Computing

John von Neumann’s contributions span so many disciplines that it’s difficult to pinpoint just one. He’s perhaps most famous for his von Neumann architecture, the foundation of most modern computer systems, and his contributions to quantum mechanics and game theory. His visionary thinking on self-replicating machines even predated discussions of nanotechnology.

Von Neumann would likely be astounded by the ubiquity and power of modern computers. His architectural design is the backbone of nearly every device we use today, from smartphones to supercomputers. He would also take note of the significant developments in quantum computing, which align with his work in quantum mechanics. As someone who worked on the Manhattan Project (alongside Oppenheimer), von Neumann might also reflect on the dual-use nature of technology—the incredible potential of AI, nuclear power, and autonomous weapons to both benefit and harm humanity. His early concerns about the potential for mutual destruction could be echoed in today’s discussions on AI governance and existential risks.

What Would They Think Overall?

Together, these visionaries would undoubtedly marvel at how their individual contributions have woven into the very fabric of today’s society. The rapid advancements in AI, data transmission, computing power, and autonomous systems would be thrilling, but they might also feel a collective sense of responsibility to ask:

Where do we go from here?

Once again Oh Dear Reader You pre-empt me….

A colleague sent me this paper, which was the impetus for this blog:

My synopsis of said paper:


“The Tensor as an Informational Resource” discusses the mathematical and computational importance of tensors as resources, particularly in quantum mechanics, AI, and computational complexity. The authors propose new preorders for comparing tensors and explore the notion of tensor rank and transformations, which generalize key problems in these fields. This paper is vital for understanding how the foundational work of Nash, Shannon, Turing, Wiener, and von Neumann has evolved into modern AI and quantum computing. Tensors offer a new frontier in scientific discovery, building on their theories and pushing the boundaries of computational efficiency, information processing, and artificial intelligence. It is an extension of their legacy, providing a mathematical framework that could revolutionize our interaction with quantum information and complex systems. It is fundamental to systems that appear to learn, where information-theoretic transforms are the very Rosetta Stone of how we perceive the world through perceptual filters of reality.

This shows the continuing relevance of ALL their ideas in today’s rapidly advancing AI and fluid computing technological landscape.

They might question whether today’s technology has outpaced ethical considerations and whether the systems they helped build are being used for the betterment of all humanity. Surveillance, privacy, inequality, and autonomous warfare would likely weigh heavily on their minds. Yet, their boundless curiosity and intellectual rigor would inspire them to continue pushing the boundaries of what’s possible, always seeking new answers to the timeless question of how to create the future we want and live better, more enlightened lives through science and technology.

Their legacy lives on, but so does their challenge to us: to use the tools they gave us wisely for the greater good of all.

Or would they be dismayed that we use all of this technology to make a PowerPoint to save time so we can watch TikTok all day?

Until Then,

#iwishyouwater <- click and see folks who got the memo

𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr) / X

Music To Blog By: Bach: Mass in B Minor, BWV 232. By far my favorite composer. The John Eliot Gardiner and Monteverdi Choir version circa 1985 is astounding.

SnakeByte[16]: Enhancing Your Code Analysis with pyastgrep

Dalle 3’s idea of an Abstract Syntax Tree in R^3 space

If you would know strength and patience, welcome the company of trees.

~ Hal Borland

First, I hope everyone is safe. Second, I am changing my usual SnakeByte[] process: I am pulling this from a website I ran across. I saw the library mentioned, so I decided to pull from the LazyWebTM instead of the usual snake-based tomes I have in my library.

As a Python developer, understanding and navigating your codebase efficiently is crucial, especially as it grows in size and complexity. Trust me, it will, as does Entropy. Traditional search tools like grep or IDE-based search functionalities can be helpful, but they often cannot “understand” the structure of Python code – sans some of the Co-Pilot developments. (I’m using understand here *very* loosely, Oh Dear Reader).

This is where pyastgrep comes into play, offering a powerful way to search and analyze your Python codebase using Abstract Syntax Trees (ASTs). While going into the theory of ASTs is tl;dr for a SnakeByte[], and there appears to be some ambiguity on the history and definition of who actually invented ASTs, i have placed some references at the end of the blog for your reading pleasure, Oh Dear Reader. In parlance, if you have ever worked on compilers or core embedded systems, Abstract Syntax Trees are data structures widely used in compilers and the like to represent the structure of program code. An AST is usually the result of the syntax analysis phase of a compiler. It often serves as an intermediate representation of the program through several stages that the compiler requires and has a strong impact on the final output of the compiler.
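
To make the AST idea concrete before we get to pyastgrep, here is a tiny sketch using Python’s standard-library ast module (not pyastgrep itself) to parse a snippet and walk its tree:

import ast

source = """
def greet(name):
    print(f"hello {name}")
"""

tree = ast.parse(source)

# Dump the tree: Module -> FunctionDef -> ... -> Call whose func is Name('print').
print(ast.dump(tree, indent=2))   # the indent argument requires Python 3.9+

# Walk the tree and list every node type encountered.
for node in ast.walk(tree):
    print(type(node).__name__)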

So what is the Python Library that you speak of? i’m Glad you asked.

What is pyastgrep?

pyastgrep is a command-line tool designed to search Python codebases with an understanding of Python’s syntax and structure. Unlike traditional text-based search tools, pyastgrep leverages the AST (queried via XPath expressions over the tree), allowing you to search for specific syntactic constructs rather than just raw text. This makes it an invaluable tool for code refactoring, auditing, and general code analysis.

Why Use pyastgrep?

Here are a few scenarios where pyastgrep excels:

  1. Refactoring: Identify all instances of a particular pattern, such as function definitions, class instantiations, or specific argument names.
  2. Code Auditing: Find usages of deprecated functions, unsafe code patterns, or adherence to coding standards.
  3. Learning: Explore and understand unfamiliar codebases by searching for specific constructs.

I have a mantra: Reduce, Refactor, and Reuse. Please raise your hand if y’all need to refactor your code. (C’mon now, no one is watching… tell the truth…). See if it is possible to reduce the code footprint, refactor the code into more optimized transforms, and then let others reuse it across the enterprise.

Getting Started with pyastgrep

Let’s explore some practical examples of using pyastgrep to enhance your code analysis workflow.

Installing pyastgrep

Before we dive into how to use pyastgrep, let’s get it installed. You can install pyastgrep via pip:

(base)tcjr% pip install pyastgrep  # don't actually type the (base)tcjr% part, that is my shell prompt / virtualenv

Example 1: Finding Function Definitions

Suppose you want to find all function definitions in your codebase. With pyastgrep, this is straightforward:

pyastgrep './/FunctionDef'

This command searches for all function definition (FunctionDef) nodes in your codebase, providing a list of files and line numbers where these definitions occur. Ok, a pretty basic structural search.

Example 2: Searching for Specific Argument Names

Imagine you need to find all functions that take an argument named config. This is how you can do it:

pyastgrep './/arg[@arg="config"]'

This query searches for function arguments named config, helping you quickly locate where configuration arguments are being used.

Example 3: Finding Class Instantiations

To find all instances where a particular class, say MyClass, is instantiated, you can use:

pyastgrep './/Call/func/Name[@id="MyClass"]'

This command searches for instantiations of MyClass, making it easier to track how and where specific classes are utilized in your project.

Advanced Usage of pyastgrep

For more complex queries, you can combine multiple AST nodes in a single XPath expression. For instance, to find all calls to the print function in your code, you might use:

pyastgrep './/Call/func/Name[@id="print"]'

This command finds all calls to the print function. You can also use more detailed queries to find nested structures or specific code patterns.

Integrating pyastgrep into Your Workflow

Integrating pyastgrep into your development workflow can greatly enhance your ability to analyze and maintain your code. Here are a few tips:

  1. Pre-commit Hooks: Use pyastgrep in pre-commit hooks to enforce coding standards or check for deprecated patterns.
  2. Code Reviews: Employ pyastgrep during code reviews to quickly identify and discuss specific code constructs.
  3. Documentation: Generate documentation or code summaries by extracting specific patterns or structures from your codebase.

Example Script

To get you started, here’s a simple Python script using pyastgrep to search for all function definitions in a directory:

from subprocess import run

def search_function_definitions(directory):
    # Run pyastgrep as a subprocess and print whatever it finds.
    result = run(['pyastgrep', './/FunctionDef', directory], capture_output=True, text=True)
    print(result.stdout)

if __name__ == "__main__":
    directory = "path/to/your/codebase"  # yes, this is not optimal folks, just an example.
    search_function_definitions(directory)

Replace "path/to/your/codebase" with the actual path to your Python codebase, and run the script to see pyastgrep in action.

Conclusion

pyastgrep is a powerful tool that brings the capabilities of AST-based searching to your fingertips. By understanding and leveraging the syntax and structure of your Python code, pyastgrep allows for more precise and meaningful code searches. Whether you’re refactoring, auditing, or simply exploring code, pyastgrep can significantly enhance your productivity and code quality. This is a great addition to your arsenal. Hope it helps, and i hope you found this interesting.

Until Then,

#iwishyouwater <- The best of the best at Day1 Tahiti Pro presented by Outerknown 2024

𝕋𝕖𝕕 ℂ. 𝕋𝕒𝕟𝕟𝕖𝕣 𝕁𝕣. (@tctjr) / X

MUZAK to Blog By: SweetLeaf: A Stoner Rock Salute to Black Sabbath. While i do not really like bands that do covers, this is very well done. For other references to the Best Band In Existence (Black Sabbath) i also refer you to Nativity in Black Volumes 1 & 2.

References:

[1] Basics Of AST

[2] The person who made pyastgrep

[3] Wikipedia page on AST