Wednesday, April 8, 2026

The Box That Changed Everything

Standards That Move the World

How a steel rectangle reshaped global trade, and why a JSON object is about to do the same thing to conversations.

Here's a story you already know, even if you've never really thought about it.

In 1956, a trucking entrepreneur from North Carolina named Malcom McLean loaded 58 steel boxes onto a refitted oil tanker in Newark, New Jersey. The ship sailed to Houston. Nobody threw a parade. But that voyage did something extraordinary: it proved that you could skip the misery of break-bulk shipping (the sacks, the barrels, the longshoremen wrestling odd-shaped cargo on and off boats for days at a stretch) and just move the box.

The same box. Truck to crane to ship to crane to train. Never opened. Never repacked. Just… moved.

That one idea was not a breakthrough in materials science, not a revolutionary engine, just a standard-sized steel rectangle. But it dropped the cost of loading cargo from almost six dollars a ton to about sixteen cents. And then it quietly rewired the entire global economy.

I keep thinking about this story because I believe we're watching it happen again. Not with steel and ships, but with conversations and data. And the box this time is called a vCon.

Let's talk about what the shipping container actually did

People tend to remember the container as a logistics improvement. That undersells it by about three orders of magnitude.

Before containerization, a cargo ship could sit in port for weeks. Longshoremen worked in gangs of twenty or more, hauling goods out of the hold with pulleys and muscle. Stuff got stolen. Stuff got broken. Stuff got rained on. The port was, frankly, a mess—a tremendously expensive, accident-prone, unreliable mess. And because it was so expensive and unreliable, trade was mostly regional. You made things near where you'd sell them, because shipping was a nightmare.

The container changed all of that. Not overnight: it took decades of fights with dock workers' unions, arguments over standard sizes, massive port rebuilds, and new ship designs before the thing really took hold. But once it did, the impact was staggering. World exports grew from $384 billion to over $25 trillion. The fraction of goods and services traded internationally doubled. Manufacturing moved to wherever it made the most economic sense, not wherever it was closest to the customer.

The container didn't just make shipping cheaper. It made an entirely different kind of economy possible.

China went from producing about 2% of the world's manufactured goods in the early 1980s to more than 30%. Just-in-time manufacturing, the backbone of companies like Toyota, and later every major manufacturer on earth, became viable because you could rely on a container showing up when it was supposed to. Walmart, Amazon, IKEA: none of these business models works without containerization. The $5 T-shirt at Target? That's a container story.

And here's the part that always gets me: the container itself isn't impressive. It's corrugated steel. It has twist-lock fittings on the corners. It comes in two standard sizes (20 feet and 40 feet). A child could explain it. Its genius is not in what it is, but in what it standardizes.

Now let's talk about conversations

Think about every meaningful business conversation happening right now. Phone calls in call centers. Zoom meetings with clients. Slack threads about support tickets. Email chains negotiating deals. SMS messages confirming appointments. WhatsApp groups coordinating field teams.

Every one of those conversations generates data. Recordings, transcripts, metadata, who talked to whom, when, for how long, about what. And almost all of that data is trapped. It's locked in whatever platform hosted the conversation, stored in proprietary formats, and siloed away from everything else.

Sound familiar? It should. This is the fragmented shipping problem all over again.

Before the container, cargo existed in a thousand different shapes and packages, and getting it from point A to point B meant repacking it at every step. Before vCon, conversational data exists in a thousand different formats, and getting insights from it means building custom integrations for every platform, every analytics tool, every compliance system.

A vCon, short for "virtualized conversation," is a standardized container for conversational data. It's a JSON-based format being developed as an open standard through the IETF (the same body that gave us the protocols that run the internet). A single vCon packages up everything about a conversation: the participants, the actual dialogue (audio, video, or text), metadata like timestamps and call detail records, any analysis that's been performed (transcription, sentiment, summaries), attachments, consent records, and identity verification—all in one portable, signable, encryptable object.

It's a box. For conversations.
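To make that concrete, here is a minimal sketch of what such an object might look like. The field names follow the general shape described in the IETF vCon drafts (parties, dialog, analysis, attachments), but the exact keys and values below are illustrative, not the normative schema.

```python
import json
import uuid
from datetime import datetime, timezone

# A minimal vCon-like object. Field names follow the general shape of
# the IETF vCon drafts, but this is an illustrative sketch, not the
# normative schema.
vcon = {
    "vcon": "0.0.1",                      # format version
    "uuid": str(uuid.uuid4()),            # unique id for this conversation
    "created_at": datetime.now(timezone.utc).isoformat(),
    "parties": [                          # who took part
        {"tel": "+12025550123", "name": "Alice (agent)"},
        {"tel": "+12025550199", "name": "Bob (customer)"},
    ],
    "dialog": [                           # the conversation itself
        {
            "type": "recording",
            "start": "2026-04-08T14:00:00Z",
            "duration": 312,              # seconds
            "parties": [0, 1],            # indexes into "parties"
            "mimetype": "audio/x-wav",
            "url": "https://example.com/recordings/call-1234.wav",
        }
    ],
    "analysis": [                         # derived artifacts
        {"type": "transcript", "dialog": 0, "vendor": "example",
         "body": "Hi, thanks for calling..."},
        {"type": "sentiment", "dialog": 0, "body": {"score": 0.7}},
    ],
    "attachments": [],                    # e.g. consent records, documents
}

# Because it's plain JSON, the whole conversation travels as one object.
serialized = json.dumps(vcon, indent=2)
```

Because it serializes to plain JSON, the same object can be signed, encrypted, archived, or handed to an analytics pipeline without any platform-specific translation, which is the whole point of a container.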

Why this matters right now

The timing is not accidental. We're living through an explosion in AI capabilities that can do remarkable things with conversational data if that data is clean, structured, and accessible. Large language models can analyze customer calls for sentiment. They can pull out action items from meetings. They can identify patterns across thousands of support interactions. They can flag compliance risks in real time.

But here's the catch: AI is only as good as its inputs. And right now, feeding conversational data into AI tools is a bespoke, fragile, expensive process. Every company builds its own pipelines. Every integration is custom. Every time you switch vendors, you start over.

This is the pre-container world. It "works," in the sense that break-bulk shipping "worked." Goods got moved. But it was slow, costly, and limited what was possible.

The parallels run deeper than you'd think

Here's what keeps striking me as I dig into the topic. The similarities aren't surface-level; they follow the same structural pattern.

Standardization unlocks interoperability. The shipping container didn't succeed because it was the best possible box. It succeeded because everyone agreed on the same dimensions. Once ISO standardized the sizes in the late 1960s, every port, every crane, every truck chassis, every railcar could handle the same unit. The entire ecosystem aligned. vCon is doing the same thing for conversations. When every communication platform can output a vCon, and every analytics tool can ingest one, you get an ecosystem. You get a marketplace. You get competition on quality, not lock-in.

The "boring" standard enables the exciting stuff. Nobody gets excited about the twist-lock fitting on the corner of a container. But without it, you don't get global supply chains. Nobody is going to get excited about the JSON schema of a vCon. But without it, you don't get frictionless AI-powered conversation intelligence across every communication channel. The boring part is the important part.

Proprietary silos lose to open interchange. Before the container, every port, every shipping line, every railroad had its own way of doing things. That fragmentation was a moat for incumbents but a tax on everyone else. Sound like the communications industry today? Every platform locks in your data. Switching costs are enormous. vCon, like the ISO container, breaks those silos by making the cargo (conversation data) portable.

The most transformative standards don't create new capabilities. They make existing capabilities interchangeable.

The second-order effects dwarf the first. McLean was trying to save money on trucking. He did not set out to enable the rise of China as a manufacturing superpower or create the conditions for fast fashion. The container's consequences went wildly beyond anyone's imagination. I'd bet the same happens with vCon. Yes, it will simplify contact center analytics and ease compliance. Those are the obvious first-order effects. But what happens when every conversation a business has is structured, portable, and analyzable? When AI can work across all of your communications without custom plumbing? When you can move your entire conversation history from one vendor to another as easily as moving a contact list? The second-order effects could be enormous, and I don't think we can fully predict them yet.

The economic angle

The cargo container didn't just change logistics. It changed economics. It restructured labor markets (for better and worse). It shifted manufacturing geography. It made some ports obsolete and turned others into boomtowns. It made consumer goods radically cheaper. It was, in the words of economists, a general-purpose enabler, a thing that doesn't just improve one process but transforms the conditions under which many processes operate.

vCon has the same structural potential for the conversation economy, and the conversation economy is bigger than most people realize. Think about every contact center, every sales team, every support desk, every healthcare provider documenting patient interactions, every financial services firm recording advisory calls, every legal team preserving conversations of record. Conversations are the raw material of the service economy, and the service economy is most of the economy.

Right now, working with that raw material is expensive and clunky, like loading cargo by hand. vCon is the standard that could make it cheap and fluid. And when you make a fundamental input to economic activity cheap and fluid, you tend to get a lot more of the activity that depends on it.

The container analogy in one sentence: Shipping containers standardized the movement of physical goods, unlocking globalization. vCons standardize the movement of conversational data and could unlock the next generation of AI-powered business intelligence.

What's different this time

The analogy does, however, span two very different worlds.

The shipping container operated in the physical world, where the benefits were visible and measurable: fewer broken crates, faster turnaround, and lower costs per ton. 

vCon operates in the digital world, where the benefits are more abstract: better data portability, easier compliance, richer AI training inputs. It's harder to photograph a JSON object on a flatbed truck.

The container also had a more straightforward adoption incentive. Ship owners could see the cost savings immediately. With vCon, the value proposition requires a bit more imagination; it's about what becomes possible when your conversation data is structured and portable, not just about what gets cheaper.

And there's the privacy dimension, which also has a parallel in the container world. Conversations contain personal information. They contain things people said under an expectation of confidentiality. The vCon standard has to navigate consent management, data governance, and regulatory compliance, much as steel boxes handle chain of custody in the physical world with locks, breakable seals, and passive temperature and humidity trackers. Still, privacy is arguably a harder design problem in the digital world.

But these differences don't weaken the analogy. They strengthen it. The fact that vCon is tackling harder problems (privacy, trust, consent, security) while still aiming for the same radical simplification suggests that the people building it understand the stakes.

We've seen this movie before

There's a pattern in the history of technology and commerce. A fragmented, expensive, friction-heavy system persists because nobody can agree on a standard. Then someone proposes a standard. It's resisted by incumbents who benefit from the fragmentation. Slowly, a few adopters prove it works. A critical mass forms. And then, suddenly, the old way looks insane. Who would go back to loading cargo by hand?

We're somewhere in the early chapters of that story with vCon. The IETF working group is active. Companies like VCONIC are building real products on the standard. Service providers are starting to see the commercial opportunity. Jeff Pulver, who knows a thing or two about what happens when you standardize communications infrastructure, having been around with Alon Cohen and Lior Haramaty at the birth of VoIP, is calling it a new industry in formation.

Maybe it takes off. Maybe it doesn't. Malcom McLean didn't know in 1956 that his steel boxes would reshape the world. He just knew that the current way of doing things was dumb and that a standard box was obviously better.

In my opinion, the people building vCon are in the same position. The current way of handling conversational data is dumb. A standard container is obviously better. The rest is a matter of time, adoption, and the kind of second-order effects that nobody can predict until they're already happening.

If history repeats, and it usually does, this box is worth paying attention to.

Wednesday, June 25, 2025

AI Ideas to fix development inefficiency problems

Date: June 25th, 2025

By: Alon Cohen + Grok


Now that we have AI to help us make things more efficient, I asked Grok the following question:


  • I am sure you are familiar with Fibonacci, Sprints, Kanban, Scrum, and prioritization. I noticed that when organizations work this way, development takes longer, and bugs take longer to fix (on the next sprint). Time buffers between tasks are out of control, as developers do not want to look bad for not finishing on time. Teams tend to get stuck when they depend on another team from time to time because the other team does not want to spend its own time helping others, which can lead to a deadlock. Can you suggest a better, more efficient methodology to manage development tasks?


Here is the summary of the answer I got from Grok.


Agile methodologies like Scrum (with sprints and Fibonacci estimation) and Kanban are widely used, but the issues you describe—slow development, delayed bug fixes, excessive time buffers, and inter-team dependencies—point to common pitfalls in their implementation.

These often stem from rigid adherence to frameworks, misaligned incentives, or poor coordination. Below, I propose a hybrid system to manage development tasks that addresses these pain points while maintaining flexibility and fostering collaboration. The system draws from Lean, DevOps principles, and modern workflow practices, tailored to reduce delays and improve responsiveness.


Proposed System: Flow-Based Collaborative Development

This system emphasizes continuous flow, cross-functional collaboration, and dynamic prioritization to minimize bottlenecks, reduce delays, and improve responsiveness to bugs and dependencies. It blends the strengths of Kanban, Lean, and team autonomy while avoiding the rigidity of sprints or over-reliance on estimation.

Core Principles

  • Continuous Flow Over Fixed Sprints:

    • Replace time-boxed sprints with a continuous flow model (inspired by Kanban) to allow tasks to move as soon as they’re ready, reducing delays caused by waiting for the next sprint.

    • Use Work-in-Progress (WIP) limits to prevent overloading teams and ensure focus on completing tasks before starting new ones.

  • Dynamic Prioritization:

    • Implement a weighted prioritization framework (e.g., Weighted Shortest Job First, WSJF) to rank tasks based on business value, urgency, and effort.

    • Bugs, especially critical ones, can be prioritized immediately rather than deferred to future sprints.

  • Cross-Functional Swarming:

    • Encourage teams to “swarm” on high-priority tasks or blockers, including bugs or dependency issues. This means temporarily pulling in expertise from other teams to resolve issues quickly, rather than waiting for knowledge transfer.

    • Foster a culture of shared ownership, where developers from different teams can contribute to critical fixes or features without bureaucratic handoffs.

  • Minimized Estimation, Maximized Predictability:

    • Move away from Fibonacci or story-point estimation, which can lead to inflated buffers as developers pad estimates to avoid looking bad. Instead, use cycle time metrics (time from start to completion) to forecast delivery based on historical data.

  • Dependency Mapping and Proactive Coordination:

    • Hold short, daily sync meetings (not full standups) for teams with dependencies to align on blockers and share knowledge in real-time.

  • Fast Feedback Loops:

    • Integrate automated testing and continuous integration/continuous deployment (CI/CD) to catch bugs early and deploy fixes rapidly.

    • Use real-time monitoring and alerting to prioritize production issues, ensuring critical bugs are addressed immediately rather than queued.

  • Team Autonomy with Guardrails:

    • Grant teams autonomy to decide how to tackle tasks, but enforce lightweight guardrails like WIP limits, clear acceptance criteria, and standardized tooling to maintain alignment.

    • Rotate team members periodically across squads to spread domain knowledge and reduce dependency on specific individuals or teams.
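The dynamic-prioritization principle above is simple enough to sketch in a few lines. In Weighted Shortest Job First, the cost of delay (business value + time criticality + risk reduction) is divided by job size, so small, urgent work naturally jumps the queue. The backlog items and 1-10 scores below are illustrative.

```python
# Weighted Shortest Job First (WSJF): rank work by cost of delay
# divided by job size. All scores here are illustrative 1-10 ratings.
def wsjf(business_value, time_criticality, risk_reduction, job_size):
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

backlog = [
    {"task": "critical payment bug", "bv": 9, "tc": 10, "rr": 6, "size": 2},
    {"task": "new dashboard widget", "bv": 5, "tc": 2, "rr": 1, "size": 5},
    {"task": "refactor auth module", "bv": 3, "tc": 1, "rr": 7, "size": 8},
]

for item in backlog:
    item["wsjf"] = wsjf(item["bv"], item["tc"], item["rr"], item["size"])

# Highest WSJF first: the small, urgent bug jumps the queue without
# waiting for a sprint boundary.
backlog.sort(key=lambda i: i["wsjf"], reverse=True)
```

Note the denominator: because job size divides the score, a quick critical fix outranks a large feature even when the feature has high business value.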

Key Practices

  • Visual Workflow Management:

    • Use a shared kanban-style board with explicit columns and visible WIP limits, so the state of every task is obvious at a glance.

    • Example: A critical bug is tagged and moved to the top of the “In Progress” column, with a WIP limit ensuring it gets immediate attention.

  • Daily Prioritization and Swarming:

    • Start each day with a 10-minute prioritization huddle (not a status update) to review the backlog, reprioritize tasks, and assign swarming resources for blockers.

    • Example: If Team A is stuck on a database issue, a database expert from Team B joins for a few hours to unblock them.

  • Dependency Resolution Framework:

    • Create a “Dependency Coordinator” role (rotating or dedicated) to facilitate knowledge sharing and unblock teams. This person ensures that dependent teams align quickly.

    • Example: If Team C needs an API from Team D, the coordinator arranges a 30-minute pairing session to clarify requirements and kickstart work.

  • Bug Triage and Response:

    • Critical bugs bypass the backlog and are assigned immediately.

    • Maintain a small “emergency lane” in the workflow for urgent fixes, with a strict WIP limit to prevent overloading.
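A toy model of the board described above, with per-lane WIP limits and a small emergency lane that bypasses the normal backlog. The lane names and limits are illustrative.

```python
# A toy kanban board with per-lane WIP limits, including an "emergency
# lane" for critical bugs. Lane names and limits are illustrative.
class Board:
    def __init__(self, limits):
        self.limits = limits                      # lane -> WIP limit
        self.lanes = {lane: [] for lane in limits}

    def pull(self, lane, task):
        """Pull a task into a lane only if its WIP limit allows it."""
        if len(self.lanes[lane]) >= self.limits[lane]:
            return False                          # lane full: finish work first
        self.lanes[lane].append(task)
        return True

board = Board({"in_progress": 3, "emergency": 1})

board.pull("in_progress", "feature A")
board.pull("emergency", "critical login bug")     # bypasses the backlog
blocked = board.pull("emergency", "second emergency")  # strict limit of 1
```

The strict limit on the emergency lane is the point: it keeps "urgent" from becoming the default label for everything.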

Cultural Shifts

  • Incentivize Collaboration: Reward teams for unblocking others or contributing to shared goals, not just completing their own tasks.

  • Normalize Fast Failure: Encourage rapid iteration and small releases to catch issues early, reducing the impact of bugs.


How This Addresses Your Pain Points

  • Slow Development:

    • Continuous flow and WIP limits ensure tasks move quickly without waiting for sprint boundaries. Smaller task sizes reduce complexity and make progress predictable.

  • Delayed Bug Fixes:

    • The emergency lane prioritizes critical bugs for immediate resolution. 


Potential Challenges and Mitigations

  • Resistance to Change: Developers accustomed to sprints may resist continuous flow. Mitigate by starting with a pilot and showcasing early wins (e.g., faster bug fixes).

  • Stakeholder Pushback: Stakeholders may demand estimates. Educate them on cycle time metrics and provide regular updates to build trust.
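For stakeholders who want dates, cycle-time percentiles can stand in for story-point estimates. A sketch of the idea, using a made-up history of per-task cycle times in days:

```python
# Forecast delivery from historical cycle times instead of up-front
# estimates. Percentiles give stakeholders an answer like
# "85% of tasks finish within N days". The history below is made up.
from statistics import quantiles

cycle_times = [2, 3, 3, 4, 4, 5, 5, 6, 8, 13]  # days per completed task

# quantiles with n=100 returns the 1st..99th percentile cut points
percentiles = quantiles(cycle_times, n=100)
p50, p85 = percentiles[49], percentiles[84]

print(f"50% of tasks finish within {p50:.1f} days")
print(f"85% of tasks finish within {p85:.1f} days")
```

Unlike padded estimates, these numbers come from what actually happened, so they improve automatically as the team's flow improves.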


Why This Is Better Than Scrum or Kanban Alone

  • Scrum: Fixed sprints cause delays for bugs and dependencies, and planning overhead slows teams down. The proposed system eliminates sprints for faster flow and responsiveness.

  • Kanban: While Kanban supports flow, it often lacks structure for prioritization and dependency management. This system adds dynamic prioritization, swarming, and dependency coordination.

  • Lean Influence: By focusing on waste reduction (e.g., waiting, overestimation), the system ensures value is delivered quickly.


Real-World Examples

  • Spotify’s Squad Model: Spotify uses autonomous squads with loose alignment, similar to the proposed team autonomy with guardrails. They emphasize cross-squad collaboration for dependencies, akin to swarming.

  • Toyota’s Lean Flow: Toyota’s production system inspired the continuous flow and WIP limits, ensuring work moves smoothly without bottlenecks.



On one hand, I was surprised, because AI tends to side with the status quo over potentially controversial or visionary ideas. On the other hand, the answer aligns with my past ideas and postings on the topic, as well as with my actual experience in the field.

What do you think?

Thursday, June 12, 2025

Saving Humanity and Social Media?

By: Alon Cohen, Jan 21st, 2025

Updated: June 12, 2025

Aggression on Social Media.


Humans often exhibit more aggressive or hostile behavior on social media compared to face-to-face interactions. We recently witnessed a public demonstration of this when one of the two most influential individuals on earth decided to poke the other over social media. This phenomenon of out-of-character over-aggressiveness is happening for several reasons:


Anonymity and Disinhibition: Social media platforms offer a layer of full or pseudo-anonymity, allowing users to feel less accountable for their actions. The reduced accountability phenomenon, known as the "online disinhibition effect," diminishes the social cues and immediate feedback that are present in in-person conversations, such as facial expressions or tone of voice, which ordinarily moderate behavior. Without these cues, people may feel freer to express harsh or critical thoughts without immediately seeing the impact on others.


Distance and Lack of Consequence: The physical and emotional distance provided by online communication means no immediate personal consequence to hurtful words. In face-to-face conversations, you might see someone's reaction, feel empathy, or face social repercussions, such as losing respect or damaging relationships. On social media, the immediate impact is often not visible, and the consequences can seem less tangible or delayed, which can embolden people to express themselves more harshly.


Deindividuation: When individuals are part of a large online community or mob, they may experience deindividuation, losing their sense of personal identity and feeling less responsible for their actions. This can lead to behavior that is more in line with the group's norms, which might be more aggressive or dismissive on platforms where such behavior is typical.


Echo Chambers and Group Polarization: Social media algorithms often create echo chambers, where users are primarily exposed to similar viewpoints, which can lead to group polarization. In these environments, individuals might feel validated in their harsher opinions because they see others expressing similar or even more extreme views. This can escalate the tone of discourse as users attempt to stand out or gain approval within their echo chamber by being more confrontational or provocative.


The Need for Attention: Social media thrives on engagement, and controversially harsh comments often garner more attention or reactions than polite or moderate ones. For some, pursuing likes, shares, or even notoriety can drive them to write in a way that's more likely to provoke a reaction, even if it's negative. The platform's reward system can inadvertently encourage this behavior by highlighting contentious content, thereby reinforcing it.


Instant Gratification and Impulse Control: Social media allows for immediate expression of thoughts without the time for reflection that might occur in person. This can lead to impulsive comments in the heat of the moment, which might not reflect one's true character or usual manner of discourse. The immediacy of posting can bypass the typical social filters that would moderate speech in real-life interactions.


These factors, when combined, create an environment where harsh comments are common.

Is that bad for humanity?

While there might be some scenarios in which harsh comments could lead to positive outcomes, the overwhelming evidence suggests the opposite. The potential benefits are often overshadowed by the damage they can cause to individuals' mental health, societal cohesion, and the quality of public discourse.  The consensus suggests that harsh comments are not helping humanity, particularly when they contribute to a climate of fear, misunderstanding, or hate. The challenge lies in maintaining open, honest communication while promoting kindness and respect.


Addressing the challenge


Social media platforms can implement a few measures to address these challenges and reduce the “temperature” of regular social media discourse.


Implementing some or all of the solutions described below could be a good starting point.


  1. THE UPLOADED PICTURE:
    If possible, require people to upload a verified picture to reduce anonymity.


  2. SHOWING THE FACE OF THE OTHER SIDE:
    Show the commenter, on the same screen, the picture of the poster they are commenting on. This base-level feature can help signal commenters that they are responding to a REAL person on the other side.

  3. SHOW HOW THE OTHER SIDE FEELS:
    Let's take it to the next level. We can take the verified image (as the user uploaded it) and use AI to derive other images that reflect that person's range of emotions. Users should also be able to upload their own emotional photos if they like.




Emotional faces

As the commenter writes a response to a post, the AI analyzes the dialog and the comment's sentiment (text or emojis; see sample UI below) and shows the commenter a realistic image of the poster that reflects how the poster would most likely feel reading that comment.

Given the set of ready-to-use “emotional images” created at image upload, there is no need for any real-time processing besides the text sentiment analysis and selecting the corresponding image.


Using this method, the commenter will see the poster's image change to angry, sad, crying, etc., based on the sentiment of the comment they are writing, in real time. The assumption is that most people (both commenters and posters) would be more aware and less harsh if that instant emotional image feedback showed them they were about to inflict emotional pain on the reader.


This process works on both sides: the Poster’s and the Commenter's sides.
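As a sketch of the mechanism: the emotional images are all generated once at upload time, so the only real-time work is scoring the draft comment and selecting a file. The word lists, thresholds, and image filenames below are crude illustrative stand-ins for a real sentiment model and asset store.

```python
# Sketch of the "emotional image" selection step. The lexicon,
# thresholds, and filenames are illustrative stand-ins; a real system
# would use a trained sentiment model and the user's uploaded images.
NEGATIVE = {"hate", "stupid", "awful", "idiot", "wrong"}
POSITIVE = {"love", "great", "thanks", "agree", "helpful"}

def sentiment_score(text):
    """Crude word-count sentiment, clamped to [-1, 1]."""
    words = text.lower().split()
    raw = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, raw / max(len(words), 1) * 5))

def emotion_image(score):
    """Map a sentiment score to one of the pre-generated poster images."""
    if score <= -0.6:
        return "poster_crying.png"
    if score <= -0.2:
        return "poster_sad.png"
    if score < 0.2:
        return "poster_neutral.png"
    return "poster_happy.png"

# As the commenter types, the UI swaps in the matching image.
draft = "this is a stupid awful take"
img = emotion_image(sentiment_score(draft))
```

Because the expensive image generation happens once, at upload, the per-keystroke cost is just a sentiment score and a file lookup, which is why on-device processing is plausible.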


  4. PREDICT THE OUTCOME BY SIMULATING THE BACK-AND-FORTH TEXT:
    The third level involves having the AI display possible back-and-forth sentences a few steps ahead and assess whether the result, after a potential escalation, is beneficial or detrimental to either side, i.e., the poster or the commenter.



Sample UI


Sample UI to cool down social media.

Conclusion


The solution outlined above aims to cool social media without censorship. It is designed to address and alleviate the problem of harsh comments and harmful escalation on social media platforms.

The solution aims to address the psychological aspects of online interactions. We can perfect the concept by taking into account the following elements:


Privacy and Consent: The above solutions must be implemented with robust privacy policies and user-consent mechanisms. Users should have some control over their image set and must opt in before images are uploaded and displayed, although similar results might also be achieved by displaying emojis instead.

Cultural Sensitivity: Responses to these features may vary widely across cultures. Deploying the feature differently in different regions can prevent negative reactions in certain cultures.

Technical Feasibility: The AI systems can run locally on the device to handle sentiment analysis and behavior prediction accurately across diverse human emotions and expressions. The pace of AI progress and growing on-device capability might make this task more straightforward than it seems.

User Adoption: For these features to be compelling, they must be user-friendly and not intrusive or manipulative.


In conclusion, while these solutions offer promising avenues to mitigate toxicity on social media, they require careful implementation, ongoing evaluation, and possibly adjustments to strike a balance between effectiveness, ethical considerations, and user rights.


What do you think?