The Facebook Oversight Board (FOB) is already feeling frustrated by the binary choices it’s expected to make as it reviews Facebook’s content moderation decisions, according to one of its members, who was giving evidence today to a UK House of Lords committee that is running an enquiry into freedom of expression online.
The FOB is currently considering whether to overturn Facebook’s ban on former US president, Donald Trump. The tech giant banned Trump “indefinitely” earlier this year after his supporters stormed the US Capitol.
The chaotic insurrection on January 6 led to a number of deaths and widespread condemnation of how mainstream tech platforms had stood back and allowed Trump to use their tools as megaphones to whip up division and hate rather than enforcing their rules in his case.
Yet, after finally banning Trump, Facebook almost immediately referred the case to its self-appointed and self-styled Oversight Board for review — opening up the prospect that its Trump ban could be reversed in short order via an exceptional review process that Facebook has fashioned, funded and staffed.
Alan Rusbridger, a former editor of the British newspaper The Guardian — and one of 20 FOB members selected as an initial cohort (the Board’s full headcount will be double that) — avoided making a direct reference to the Trump case today, given the review is ongoing, but he implied that the binary choices it has at its disposal at this early stage aren’t as nuanced as he’d like.
“What happens if — without commenting on any high profile current cases — you didn’t want to ban somebody for life but you wanted to have a ‘sin bin’ so that if they misbehaved you could chuck them back off again?” he said, suggesting he’d like to be able to issue a soccer-style “yellow card” instead.
“I think the Board will want to expand in its scope. I think we’re already a bit frustrated by just saying take it down or leave it up,” he went on. “What happens if you want to… make something less viral? What happens if you want to put an interstitial?
“So I think all these things are things that the Board may ask Facebook for in time. But we have to get our feet under the table first — we can do what we want.”
“At some point we’re going to ask to see the algorithm, I feel sure — whatever that means,” Rusbridger also told the committee. “Whether we can understand it when we see it is a different matter.”
To many people, Facebook’s Trump ban is uncontroversial — given the risk of further violence posed by letting Trump continue to use its megaphone to foment insurrection. There were also clear and repeated breaches of Facebook’s community standards, if you want to be a stickler for its rules.
Among supporters of the ban is Facebook’s former chief security officer, Alex Stamos, who has since been working on wider trust and safety issues for online platforms via the Stanford Internet Observatory.
Stamos was urging both Twitter and Facebook to cut Trump off before everything kicked off, writing in early January: “There are no legitimate equities left and labeling won’t do it.”
But in the wake of big tech moving almost as a unit to finally put Trump on mute, a number of world leaders and lawmakers were quick to express misgivings at the big tech power flex.
Germany’s chancellor called Twitter’s ban on Trump “problematic”, saying it raised troubling questions about the power of the platforms to interfere with speech. Other lawmakers in Europe, meanwhile, seized on the unilateral action — saying it underlined the need for proper democratic regulation of tech giants.
The sight of the world’s most powerful social media platforms being able to mute a democratically elected president (even one as divisive and unpopular as Trump) made politicians of all stripes feel queasy.
Facebook’s entirely predictable response was, of course, to outsource this two-sided conundrum to the FOB. After all, that was its whole plan for the Board. The Board would be there to deal with the most headachey and controversial content moderation stuff.
And on that level Facebook’s Oversight Board is doing exactly the job Facebook intended for it.
But it’s interesting that this unofficial ‘supreme court’ is already feeling frustrated by the limited binary choices Facebook asks it to make. (In the Trump case, either reversing the ban entirely or continuing it indefinitely.)
The FOB’s unofficial message seems to be that the tools are simply far too blunt. Although Facebook has never said it will be bound by any wider policy suggestions the Board might make — only that it will abide by the specific individual review decisions. (Which is why a common critique of the Board is that it’s toothless where it matters.)
How aggressive the Board will be in pushing Facebook to be less frustrating very much remains to be seen.
“None of this is going to be solved quickly,” Rusbridger went on to tell the committee in more general remarks on the challenges of moderating speech in the digital era. Getting to grips with the Internet’s publishing revolution could in fact, he implied, take the work of generations — making the customary reference to the long tail of societal disruption that flowed from Gutenberg inventing the printing press.
If Facebook was hoping the FOB would kick hard (and thorny-in-its-side) questions around content moderation into long and intellectual grasses, it’s surely delighted with the level of beard stroking which Rusbridger’s evidence implies is now going on inside the Board. (If, possibly, slightly less enchanted by the prospect of its appointees asking to poke around its algorithmic black boxes.)
Kate Klonick, an assistant professor at St John’s University Law School, was also giving evidence to the committee — having written an article on the inner workings of the FOB, published recently in the New Yorker, after she was given wide-ranging access by Facebook to observe the process of the body being set up.
The Lords committee was keen to learn more on the workings of the FOB and pressed the witnesses several times on the question of the Board’s independence from Facebook.
Rusbridger batted away concerns on that front — saying “we don’t feel we work for Facebook at all”. Though Board members are paid by Facebook, via a trust it set up to put the FOB at arm’s length from the corporate mothership. And the committee didn’t shy away from raising the payment point to query how genuinely independent Board members can really be.
“I feel highly independent,” Rusbridger said. “I don’t think there’s any obligation at all to be nice to Facebook or to be horrible to Facebook.”
“One of the nice things about this Board is occasionally people will say but if we did that that will scupper Facebook’s economic model in such and such a country. To which we answer well that’s not our problem. Which is a very liberating thing,” he added.
Of course it’s hard to imagine a sitting member of the FOB being able to answer the independence question any other way — unless they were simultaneously resigning their commission (which, to be clear, Rusbridger wasn’t).
He confirmed that Board members can serve three terms of three years apiece — so he could have almost a decade of beard-stroking on Facebook’s behalf ahead of him.
Klonick, meanwhile, emphasized the scale of the challenge it had been for Facebook to try to build from scratch a quasi-independent oversight body and create distance between itself and its claimed watchdog.
“Building an institution to be a watchdog institution — it is incredibly hard to transition to institution-building and to break those bonds [between the Board and Facebook] and set up these new people with frankly this huge set of problems and a new technology and a new back end and a content management system and everything,” she said.
Rusbridger had said the Board went through an extensive training process which involved participation from Facebook representatives during the ‘onboarding’. But he went on to describe a moment when the training had finished and the FOB realized some Facebook reps were still joining their calls — saying that at that point the Board felt empowered to tell Facebook to leave.
“This was exactly the type of moment — having watched this — that I knew had to happen,” added Klonick. “There had to be some type of formal break — and it was told to me that this was a natural moment: they had done their training and this was going to be a moment of push back and breaking away from the nest. And this was it.”
However, if your measure of independence is not having Facebook literally listening in on the Board’s calls, you do have to query how much Kool Aid Facebook may have successfully doled out to its chosen and willing participants over the long and intricate process of programming its own watchdog — including to the outside observers it allowed in to watch the set up.
The committee was also interested in the fact the FOB has so far mostly ordered Facebook to reinstate content its moderators had previously taken down.
In January, when the Board issued its first decisions, it overturned four out of five Facebook takedowns — including in relation to a number of hate speech cases. The move quickly attracted criticism over the direction of travel. After all, the wider critique of Facebook’s business is that it’s far too reluctant to remove toxic content (it only banned Holocaust denial last year, for example). And lo! Here’s its self-styled ‘Oversight Board’ taking decisions to reverse hate speech takedowns…
The unofficial and oppositional ‘Real Facebook Board’ — which is truly independent and heavily critical of Facebook — pounced and decried the decisions as “shocking”, saying the FOB had “bent over backwards to excuse hate”.
Klonick said the reality is that the FOB is not Facebook’s supreme court — but rather it’s essentially just “a dispute resolution mechanism for users”.
If that assessment is true — and it sounds spot on, so long as you recall the fantastically tiny number of users who get to use it — the amount of PR Facebook has been able to generate off of something that should really just be a standard feature of its platform is truly incredible.
Klonick argued that the Board’s early reversals were the result of it hearing from users objecting to content takedowns — which had made it “sympathetic” to their complaints.
“Absolute frustration at not knowing specifically what rule was broken or how to avoid breaking the rule again or what they did to be able to get there or to be able to tell their side of the story,” she said, listing the kinds of things Board members had told her they were hearing from users who had petitioned for a review of a takedown decision against them.
“I think that what you’re seeing in the Board’s decision is, first and foremost, to try to build some of that back in,” she suggested. “That’s the signal that they’re sending back to Facebook — that it’s pretty low hanging fruit to be honest. Which is let people know the exact rule, give them a fact to fact type of analysis or application of the rule to the facts and give them that kind of read in to what they’re seeing and people will be happier with what’s going on.
“Or at least just feel a little bit more like there is a process and it’s not just this black box that’s censoring them.”
In his response to the committee’s query, Rusbridger discussed how he approaches review decision-making.
“In most judgements I begin by thinking well why would we restrict freedom of speech in this particular case — and that does get you into interesting questions,” he said, having earlier summed up his school of thought on speech as akin to the ‘fight bad speech with more speech’ Justice Brandeis type view.
“The right not to be offended has been engaged by one of the cases — as opposed to the borderline between being offended and being harmed,” he went on. “That issue has been argued about by political philosophers for a long time and it certainly will never be settled absolutely.
“But if you went along with establishing a right not to be offended that would have huge implications for the ability to discuss almost anything in the end. And yet there have been one or two cases where essentially Facebook, in taking something down, has invoked something like that.”
“Harm as opposed to offence is clearly something you would treat differently,” he added. “And we’re in the fortunate position of being able to hire in experts and seek advisors on the harm here.”
While Rusbridger didn’t sound troubled about the challenges and pitfalls facing the Board when it may have to set the “borderline” between offensive speech and harmful speech itself — being able to (further) outsource expertise presumably helps — he did raise a number of other operational concerns during the session. Including over the lack of technical expertise among current board members (who were purely Facebook’s picks).
Without technical expertise, how can the Board ‘examine the algorithm’, as he suggested it would want to? It won’t be able to understand Facebook’s content distribution machine in any meaningful way.
The Board’s current lack of technical expertise also raises wider questions about its function — and whether its first learned cohort might not be played as useful idiots from Facebook’s self-interested perspective — by helping it gloss over and deflect deeper scrutiny of its algorithmic, money-minting choices.
If you don’t really understand how the Facebook machine functions, technically and economically, how can you conduct any kind of meaningful oversight at all? (Rusbridger evidently gets that — but is also content to wait and see how the process plays out. No doubt the intellectual exercise and insider view is fascinating. “So far I’m finding it highly absorbing,” as he admitted in his evidence opener.)
“People say to me you’re on that Board but it’s well known that the algorithms reward emotional content that polarises communities because that makes it more addictive. Well I don’t know if that’s true or not — and I think as a board we’re going to have to get to grips with that,” he went on to say. “Even if that takes many sessions with coders speaking very slowly so that we can understand what they’re saying.”
“I do think our responsibility will be to understand what these machines are — the machines that are going in rather than the machines that are moderating,” he added. “What their metrics are.”
Both witnesses raised another concern: That the kind of complex, nuanced moderation decisions the Board is making won’t be able to scale — suggesting they’re too specific to be able to generally inform AI-based moderation. Nor will they necessarily be able to be acted on by the staffed moderation system that Facebook currently operates (which gives its thousands of human moderators a fantastically tiny amount of thinking time per content decision).
Despite that, the issue of Facebook’s vast scale vs the Board’s limited and Facebook-defined function — to fiddle at the margins of its content empire — was one overarching point that hung uneasily over the session, without being properly grappled with.
“I think your question about ‘is this easily communicated’ is a really good one that we’re wrestling with a bit,” Rusbridger said, conceding that he’d had to brain up on a whole bunch of unfamiliar “human rights protocols and norms from around the world” to feel qualified to rise to the demands of the review job.
Scaling that level of training to the tens of thousands of moderators Facebook currently employs to carry out content moderation would of course be eye-wateringly expensive. Nor is it on offer from Facebook. Instead it’s hand-picked a crack team of 40 very expensive and learned experts to tackle an infinitesimally smaller number of content decisions.
“I think it’s important that the decisions we come to are understandable by human moderators,” Rusbridger added. “Ideally they’re understandable by machines as well — and there is a tension there because sometimes you look at the facts of a case and you decide it in a particular way with reference to those three standards [Facebook’s community standard, Facebook’s values and “a human rights filter”]. But in the knowledge that that’s going to be quite a tall order for a machine to understand the nuance between that case and another case.
“But, you know, these are early days.”
How Duolingo became a $2.4B language unicorn – TechCrunch
At the heart of Duolingo is its mission: to scale free education and increase income potential through language learning. However, the same mission that has helped it grow into a business valued at $2.4 billion with over 500 million registered learners has also led to tensions that continue to define the business.
How do you survive as a startup if you don’t want to charge users? How do you design a product that isn’t so hard it loses people, but isn’t so easy it compromises the education? How do you balance monetization goals while also keeping education free?
For my first EC-1, I spent months with Duolingo executives, investors, and of course, competitors, to answer some of these questions.
One of my favorite details in the story that got left on the cutting room floor was Duolingo co-founder and CEO Luis von Ahn comparing his company to the elliptical. I was pressing him on the efficacy of Duolingo, and the long-standing critique that it still can’t teach a user how to speak a language fluently.
“Now, there’s a difference between whether you know you’re doing the elliptical or yoga or running, but by far, the most important thing is that you’re doing something [other than] just walking around,” he said.
What von Ahn is getting at is that Duolingo’s biggest value proposition is that it helps people get motivated to learn a language, even if it’s just five minutes — or an elliptical workout — a day. He thinks motivation is harder than the learning itself. Do you agree?
If you enjoyed my series, make sure to check out other EC-1s and subscribe to ExtraCrunch to support me, this newsletter and the rest of the team. I’d also love it if you followed me on Twitter @nmasc_.
In the rest of this newsletter, we’ll talk about Tesla, the morality of going public and verticalized telehealth.
There’s always a Tesla angle
When I was working in Boston, the newsroom saying was “there’s always a Boston Angle.” In a remote, tech-dominated world, I’ll tweak it: There’s always a Tesla angle. While we all prepare for Elon Musk to grace the SNL stage, there’s a story you might want to check out.
Here’s what to know: Tesla tapped a small Canadian startup to build cleaner and cheaper batteries. The price tag will shock you, but the story tells a bigger narrative about patented technology, and the outsized impact that a tiny startup has on Tesla’s route to batteries.
The clash of the CFOs
While Equity usually keeps it light and punny, we chewed into a deeper topic this week: the morality of going public. Startups are staying private longer than ever before, but one CFO argues that it’s a moral obligation to leave the nest and provide returns to the general public. We had that CFO on the show, along with another CFO at a company pursuing a SPAC. It ended up being the most interesting clash of the CFOs I’ve been a part of.
Here’s what to know: The growth of venture capital as an asset class has a role to play in this whole mess and has kept the nest warm for many startups. We talk about whether the tides are turning, or whether we’re saying goodbye to a world in which a company like Salesforce would debut at $11 per share.
Where telehealth goes from here
As I start to cover digital health, one of the biggest questions I ask and get asked is where telehealth goes from here. Virtual care saw an uptick in usage because of the pandemic but is now starting to slow as the world reopens and vaccinations are on the rise. For telehealth startups, that means crafting a pitch that explains why virtual care makes sense for the conditions they serve.
Here’s what to know: I talked about how to become pandemic-proof in healthcare with Expressable, a virtual speech therapy startup that just raised millions in venture capital money. Part of the startup’s product differentiation is an edtech platform that motivates consumers to asynchronously practice speech exercises with the help of parents and friends.
And that’s that. Thank you for reading along and supporting me. I’ll never get over it.
When the Earth is gone, at least the internet will still be working – TechCrunch
The internet is now our nervous system. We are constantly streaming and buying and watching and liking, our brains locked into the global information matrix as one universal and coruscating emanation of thought and emotion.
What happens when the machine stops though?
It’s a question that E.M. Forster was intensely focused on more than a century ago in a short story called, rightly enough, “The Machine Stops,” about a human civilization connected entirely through machines that one day just turn off.
Those fears of downtime are not just science fiction anymore. Outages aren’t just missing a must-watch TikTok clip. Hospitals, law enforcement, the government, every corporation — the entire spectrum of human institutions that constitute civilization now deeply rely on connectivity to function.
So when it comes to disaster response, the world has dramatically changed. In decades past, the singular focus could be roughly summarized as rescue and mitigation — save who you can while trying to limit the scale of destruction. Today though, the highest priority is by necessity internet access, not just for citizens, but increasingly for the on-the-ground first responders who need bandwidth to protect themselves, keep abreast of their mission objectives, and have real-time ground truth on where dangers lurk and where help is needed.
While the sales cycles might be arduous, as we learned in part one, and the data trickles have finally turned to streams, as we saw in part two, the reality is that none of that matters if there isn’t connectivity to begin with. So in part three of this series on the future of technology and disaster response, we’re going to analyze the changing nature of bandwidth and connectivity and how they intersect with emergencies: how telcos are creating resilience in their networks while defending against climate change, how first responders are integrating connectivity into their operations, and how new technologies like 5G and satellite internet will affect these critical activities.
Wireless resilience as the world burns
Climate change is inducing more intense weather patterns all around the world, creating second- and third-order effects for industries that rely on environmental stability for operations. Few industries have to be as dynamic to the changing context as telecom companies, whose wired and wireless infrastructure is regularly buffeted by severe storms. Resiliency of these networks isn’t just needed for consumers — it’s absolutely necessary for the very responders trying to mitigate disasters and get the network back up in the first place.
Unsurprisingly, no issue looms larger for telcos than access to power — no juice, no bars. So all three of America’s major telcos — Verizon (which owns TechCrunch’s parent company Verizon Media, although not for much longer), AT&T and T-Mobile — have had to dramatically scale up their resiliency efforts in recent years to compensate both for the demand for wireless and the growing damage wrought by weather.
Jay Naillon, senior director of national technology service operations strategy at T-Mobile, said that the company has made resilience a key part of its network buildout in recent years, with investments in generators at cell towers that can be relied upon when the grid cannot. In “areas that have been hit by hurricanes or places that have fragile grids … that is where we have invested most of our fixed assets,” he said.
Like all three telcos, T-Mobile pre-deploys equipment in anticipation of disruptions. So when a hurricane begins to swirl in the Atlantic Ocean, the company will strategically fly in portable generators and mobile cell towers in anticipation of potential outages. “We look at storm forecasts for the year,” Naillon explained, and do “lots of preventative planning.” They also work with emergency managers and “run through various drills with them and respond and collaborate effectively with them” to determine which parts of the network are most at risk for damage in an emergency. Last year, the company partnered with StormGeo to accurately predict weather events.
Predictive AI for disasters is also a critical need for AT&T. Jason Porter, who leads public sector and the company’s FirstNet first-responder network, said that AT&T teamed up with Argonne National Laboratory to create a climate-change analysis tool to evaluate the siting of its cell towers and how they will weather the next 30 years of “floods, hurricanes, droughts and wildfires.” “We redesigned our buildout … based on what our algorithms told us would come,” he said, and the company has been elevating vulnerable cell towers four to eight feet high on “stilts” to improve their resiliency to at least some weather events. That “gave ourselves some additional buffer.”
AT&T has also had to manage the growing complexity of creating reliability amid the chaos of a climate-change-induced world. In recent years, “we quickly realized that many of our deployments were due to weather-related events,” and the company has been “very focused on expanding our generator coverage over the past few years,” Porter said. It’s also been very focused on building out its portable infrastructure. “We essentially deploy entire data centers on trucks so that we can stand up essentially a central office,” he said, emphasizing that the company’s national disaster recovery team responded to thousands of events last year.
Particularly on its FirstNet service, AT&T has pioneered two new technologies to try to get bandwidth to disaster-hit regions faster. First, it has invested in drones to offer wireless services from the sky. After Hurricane Laura hit Louisiana last year with record-setting winds, our “cell towers were twisted up like recycled aluminum cans … so we needed to deploy a sustainable solution,” Porter described. So the company deployed what it dubs the FirstNet One — a “dirigible” that “can cover twice the cell coverage range of a cell tower on a truck, and it can stay up for literally weeks, refuel in less than an hour and go back up — so long-term, sustainable coverage,” he said.
Secondly, the company has been building out what it calls FirstNet MegaRange — a set of high-powered wireless equipment that it announced earlier this year that can deploy signals from miles away, say from a ship moored off a coast, to deliver reliable connectivity to first responders in the hardest-hit disaster zones.
As the internet has absorbed more of daily life, the norms for network resilience have become ever more exacting. Small outages can disrupt not just a first responder, but a child taking virtual classes and a doctor conducting remote surgery. From fixed and portable generators to rapid-deployment mobile cell towers and dirigibles, telcos are investing major resources to keep their networks running continuously.
Yet, these initiatives are ultimately costs borne by telcos increasingly confronting a world burning up. Across conversations with all three telcos and others in the disaster response space, there was a general sense that utilities increasingly have to insulate themselves in a climate-changed world. For instance, cell towers need their own generators because — as we saw with Texas earlier this year — even the power grid itself can’t be guaranteed to be there. Critical applications need to have offline capabilities, since internet outages can’t always be prevented. The machine runs, but the machine stops, too.
The trend lines on the frontlines are data lines
While we may rely on connectivity in our daily lives as consumers, disaster responders have been much more hesitant to fully transition to connected services. It is precisely when you’re in the middle of a tornado and the cell tower is down that you realize a printed map might have been nice to have. Paper, pens, compasses — the old staples of survival flicks remain just as important in the field today as they were decades ago.
Yet, the power of software and connectivity to improve emergency response has forced a rethinking of field communications and how deeply technology is integrated on the ground. Data from the frontlines is extremely useful, and if it can be transmitted, dramatically improves the ability of operations planners to respond safely and efficiently.
Both AT&T and Verizon have made large investments in directly servicing the unique needs of the first responder community, with AT&T in particular gaining prominence with its FirstNet network, which it exclusively operates through a public-private partnership with the Department of Commerce’s First Responder Network Authority. The government offered a special spectrum license to the FirstNet authority in Band 14 in exchange for the buildout of a responder-exclusive network, a key recommendation of the 9/11 Commission, which found that first responders couldn’t communicate with each other on the day of those deadly terrorist attacks. Now, Porter of AT&T says that the company’s buildout is “90% complete” and is approaching 3 million square miles of coverage.
Why so much attention on first responders? The telcos are investing here because in many ways, the first responders are on the frontiers of technology. They need edge computing, AI/ML rapid decision-making, the bandwidth and latency of 5G (which we will get to in a bit), high reliability, and in general, are fairly profitable customers to boot. In other words, what first responders need today are what consumers in general are going to want tomorrow.
Cory Davis, director of public safety strategy and crisis response at Verizon, explained that “more than ever, first responders are relying on technology to go out there and save lives.” His counterpart, Nick Nilan, who leads product management for the public sector, said that “when we became Verizon, it was really about voice [and] what’s changed over the last five [years] is the importance of data.” He brings attention to tools for situational awareness, mapping, and more that are becoming standard in the field. Everything first responders do “comes back to the network — do you have the coverage where you need it, do you have the network access when something happens?”
The challenge for the telcos is that we all want access to that network when catastrophe strikes, which is precisely when network resources are most scarce. The first responder trying to communicate with their team on the ground or their operations center is inevitably competing with a citizen letting friends know they are safe — or perhaps just watching the latest episode of a TV show in their vehicle as they are fleeing the evacuation zone.
That competition is the argument for a completely segmented network like FirstNet, which has its own dedicated spectrum with devices that can only be used by first responders. “With remote learning, remote work and general congestion,” Porter said, telcos and other bandwidth providers were overwhelmed with consumer demand. “Thankfully we saw through FirstNet … clearing that 20 MHz of spectrum for first responders” helped keep the lines clear for high-priority communications.
FirstNet’s big emphasis is on its dedicated spectrum, but that’s just one component of a larger strategy to give first responders always-on and ready access to wireless services. AT&T and Verizon have made prioritization and preemption key operational components of their networks in recent years. Prioritization gives public safety users better access to the network, while preemption can include actively kicking off lower-priority consumers from the network to ensure first responders have immediate access.
Nilan of Verizon said, “The network is built for everybody … but once we start thinking about who absolutely needs access to the network at a period of time, we prioritize our first responders.” Verizon has prioritization, preemption, and now virtual segmentation — “we separate their traffic from consumer traffic” so that first responders don’t have to compete if bandwidth is limited in the middle of a disaster. He noted that all three approaches have been enabled since 2018, and Verizon’s suite of bandwidth and software for first responders comes under the newly christened Verizon Frontline brand that launched in March.
With increased bandwidth reliability, first responders are increasingly connected in ways that even a decade ago would have been unfathomable. Tablets, sensors, connected devices and tools: equipment that once would have been manual is now increasingly digital.
That opens up a wealth of possibilities now that the infrastructure is established. My interview subjects suggested applications as diverse as the decentralized coordination of response team movements through GPS and 5G; real-time maps that offer up-to-date risk analysis of how a disaster might progress; pathfinding for evacuees that’s updated as routes fluctuate; AI damage assessments even before the recovery process begins; and much, much more. Many of those possibilities, which in the past were only marketing-speak and technical promises, may finally be realized in the coming years.
We’ve been hearing about 5G for years now, and even 6G every once in a while just to cause reporters heart attacks, but what does 5G even mean in the context of disaster response? After years of speculation, we are finally starting to get answers.
Naillon of T-Mobile noted that the biggest benefit of 5G is that it “allows us to have greater coverage” particularly given the low-band spectrum that the standard partially uses. That said, “As far as applications — we are not really there at that point from an emergency response perspective,” he said.
Meanwhile, Porter of AT&T said that “the beauty of 5G that we have seen there is less about the speed and more about the latency.” Consumers have often seen marketing around voluminous bandwidth, but in the first-responder world, latency and edge computing tend to be the most desirable features. For instance, devices can relay video to each other on the frontlines, without necessarily needing a backhaul to the main wireless network. On-board processing of image data could allow for rapid decision-making in environments where seconds can be vital to the success of a mission.
That flexibility is allowing for many new applications in disaster response, and “we are seeing some amazing use cases coming out of our 5G deployments [and] we have launched some of our pilots with the [Department of Defense],” Porter said. He offered an example of “robotic dogs to go and do bomb dismantling or inspecting and recovery.”
Verizon has made innovating on new applications a strategic goal, launching a 5G First Responders Lab dedicated to guiding a new generation of startups to build at this crossroads. Nilan of Verizon said that the incubator has had more than 20 companies across four different cohorts, working on everything from virtual reality training environments to AR applications that allow firefighters to “see through walls.” His colleague Davis said that “artificial intelligence is going to continue to get better and better and better.”
Blueforce is a company that went through the first cohort of the Lab. The company uses 5G to connect sensors and devices together to allow first responders to make the best decisions they can with the most up-to-date data. Michael Helfrich, founder and CEO, said that “because of these new networks … commanders are able to leave the vehicle and go into the field and get the same fidelity” of information that they normally would have to be in a command center to receive. He noted that in addition to classic user interfaces, the company is exploring other ways of presenting information to responders. “They don’t have to look at a screen anymore, and [we’re] exploring different cognitive models like audio, vibration and heads-up displays.”
5G will offer many new ways to improve emergency responses, but that doesn’t mean that our current 4G networks will just disappear. Davis said that many sensors in the field don’t need the kind of latency or bandwidth that 5G offers. “LTE is going to be around for many, many more years,” he said, pointing to the hardware and applications taking advantage of LTE-M standards for Internet of Things (IoT) devices as a key development for the future here.
Link me to the stars, Elon Musk
Michael Martin of emergency response data platform RapidSOS said that “it does feel like there is renewed energy to solve real problems,” in the disaster response market, which he dubbed the “Elon Musk effect.” And that effect definitely does exist when it comes to connectivity, where SpaceX’s satellite bandwidth project Starlink comes into play.
Satellite uplinks have historically had horrific latency and bandwidth constraints, making them difficult to use in disaster contexts. Furthermore, depending on the particular type of disaster, satellite uplinks can be astonishingly challenging to set up given the ground environment. Starlink promises to shatter all of those barriers — easier connections, fat pipes, low latencies and a global footprint that would be the envy of any first responder globally. Its network is still under active development, so it is difficult to foresee today precisely what its impact will be on the disaster response market, but it’s an offering to watch closely in the years ahead, because it has the potential to completely upend the way we respond to disasters this century if its promises pan out.
Yet, even if we discount Starlink, the change coming this decade in emergency response represents a complete revolution. The depth and resilience of connectivity is changing the equation for first responders from complete reliance on antiquated tools to an embrace of the future of digital computing. The machine is no longer stoppable.
Betting on upcoming startup markets – TechCrunch
Welcome back to The TechCrunch Exchange, a weekly startups-and-markets newsletter. It’s broadly based on the daily column that appears on Extra Crunch, but free, and made for your weekend reading. Want it in your inbox every Saturday? Sign up here.
Ready? Let’s talk money, startups and spicy IPO rumors.
Betting on upcoming startup markets
This week M25, a venture capital concern focused on investing in the Midwest of the United States, announced a new fund worth $31.8 million. As the firm noted in a release that The Exchange reviewed, its new fund is about three times the size of its preceding investment vehicle.
I caught up with M25 partner Mike Asem to chat about the round. Asem joined M25 in 2016 after partner Victor Gutwein spearheaded the effort with a small $1 million fund. Asem and Gutwein have led the firm since its first material fund, though technically its second.
Asem said that his team had targeted a $25 million to $30 million fund three, meaning that they came in a bit higher than anticipated in fundraising terms. That’s not a surprise in today’s venture capital market, given the pace at which capital is both invested into VC funds and startups.
The investor told The Exchange that M25 has been investing out of its third fund for some time, including a deal with CASHDROP, a startup that I’ve heard good things about regarding its growth rate. (More here on the CASHDROP round that M25 put capital into.)
All that’s fine, but what makes M25 an interesting bet is that the firm only invests in Midwest-headquartered startups. Often when I chat with a fund that has a unique geographical focus, it’s merely that: a focus, rather than M25’s more hard-and-fast rule. Now with more capital and plans to take part in 12-15 deals per year, the group can double down on its thesis.
Per Asem, M25 has done about a third of its deals in Chicago, where it’s based, but has put capital into startups in 24 cities thus far. TechCrunch covered one of those companies, Metafy, earlier this week when it closed more than $5 million in new capital.
Why does M25 think that the Midwest is the place to deploy capital and generate outsize returns? Asem listed a number of perspectives that underpin his team’s thesis: the Midwest’s economic might, the network that he and his partner developed in the area before founding M25, and the fact that valuations can prove to be more attractive in the region at the stage that his firm invests. They are sufficiently different, he said, that his firm can generate material returns even with exits at around the $100 million mark, a lower threshold than most VCs with larger capital vehicles might find palatable.
M25 is not alone in its bets on alternative regions. The Exchange also chatted with Somak Chattopadhyay of Armory Square Ventures on Friday, a firm that is based in upstate New York and invests in B2B software companies in what we might call post-manufacturing cities. One of its investments has gone public, and the group’s latest fund is a multiple of the size of its first. Armory now has around $60 million in AUM.
All that’s to say that the venture capital boom is not merely helping firms like a16z raise another billion here or another billion there; the generally hot market for startups and private capital is also helping smaller firms raise more capital to take on less traditional spaces. It’s heartening.
On-demand pricing, and grokking the insurance game
This week The Exchange chatted with Twilio CFO Khozema Shipchandler about his company’s earnings report. You can read more on the hard numbers here. The short gist is that it was a good quarter. But what mattered most in our chat was Shipchandler riffing on where the center of gravity at Twilio will remain in revenue terms.
Briefly, Twilio is best known for building APIs that allow developers to leverage telecom services. Those developers and their employers pay for as much Twilio as they use. But over time Twilio has bought more and more companies, building out a diverse product set after its 2016-era IPO.
So we were curious: Where does the company stand on the on-demand versus SaaS pricing debate that is currently raging in the software world? Staunchly in the first camp, still, despite buying Segment, which is a SaaS service. Per Shipchandler, Twilio revenue is still more than 70% on-demand, and the company wants to make sure that its customers only buy more of its services as they sell more of their own.
Startups, then, probably don’t have to give up on on-demand pricing as they scale. Twilio is huge and is sticking to it!
Then there was Root’s earnings report. Again, here are the core numbers. The Exchange is keeping tabs on Root’s post-IPO performance not only because it was a company we tracked extensively during its late private life, but also because it is a bellwether of sorts for the yet-private neoinsurance companies, which matters for fellow neoinsurance player Hippo as it goes public via a SPAC.
Alex Timm, Root’s CEO, said that his firm performed well in the first quarter, generating more direct written premium than anticipated, and at better loss-rates to boot. The company also remains very cash-rich post IPO, and Timm is confident that his company’s data science work has lots more room to improve Root’s underwriting models.
So, faster-than-expected growth, lots of cash, improving economics and a bullish technology take — Root’s stock is flying, right? No, it is not. Instead Root has taken a bit of a public-market pounding in recent months. The Exchange asked Timm about the disparity between how he views his company’s performance and future, and how it is being valued. He said that the insurance folks don’t always get its technology work and that tech folks don’t always grok Root’s insurance business.
That’s tough. But with years and years of cash at its current burn rate, Root has more than enough space to prove its critics wrong, provided that its modeling holds up over the next dozen quarters or so. Its share price can’t be great for the yet-private neoinsurance companies, however. Even if Next Insurance did just raise another grip of cash at another new, higher valuation.
Corporate spend’s big week
As you’ve read by now, Bill.com is buying corporate-spend unicorn Divvy for $2.5 billion. I dug into the numbers behind the deal here, if that’s your sort of thing.
But after collecting notes from the CEOs of Divvy competitors Ramp and Brex here, another bit of commentary came in that I wanted to share. Thejo Kote, the corporate spend startup Airbase’s CEO and founder, did some math on Divvy’s results that Bill.com shared with its own investors, arguing that the company’s March payment volume and active customer count imply that the company’s “average spend volume per customer was $44,400 per month.”
Is that good or bad? Kote is not impressed, saying that Airbase’s “average spend volume per customer is almost 10 [times] that of Divvy,” or around “$375,000 per month.” What’s driving that difference? A focus on larger customers, and the fact that Airbase covers more ground, in Kote’s view, than Divvy by encompassing software work that Bill.com itself and Expensify manage.
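Kote’s back-of-the-envelope metric is simply payment volume divided by active customers. A minimal sketch of the arithmetic follows; the volume and customer counts below are hypothetical, chosen only to land on the per-customer figures quoted above, since neither company’s actual inputs are disclosed here.

```python
def avg_spend_per_customer(monthly_volume, active_customers):
    # Kote's metric: total payment volume spread across paying customers.
    return monthly_volume / active_customers

# Hypothetical inputs, chosen only to reproduce the quoted per-customer figures.
divvy = avg_spend_per_customer(monthly_volume=444_000_000, active_customers=10_000)
airbase = avg_spend_per_customer(monthly_volume=375_000_000, active_customers=1_000)

print(divvy)                      # 44400.0 per month
print(airbase)                    # 375000.0 per month
print(round(airbase / divvy, 1))  # the gap Kote calls "almost 10x"
```

The same total volume spread across ten times as many customers yields a tenth of the per-customer spend, which is why customer mix matters so much in this comparison.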
I bring you all of this as the war in managing spend for companies large and small is heating up in software terms. With Divvy off the table, Ramp is now perhaps the largest player in the space not charging for the software it wraps around corporate cards. Brex recently launched a software product that it charges for on a recurring basis. (More on Brex at this link, if you are into it.)
Various and sundry
Two final notes for you, things that should make you laugh, grimace or howl:
- The Wall Street Journal’s Eliot Brown tweeted some data this week from the Financial Times, namely that amongst the roughly 40 SPACs that completed deals last year, a dozen and a half have lost more than half their value. And that the average drop amongst the combined entities is 38%. Woof.
- And, finally, welcome to peak everything.
More to come next week, including notes on the return of the Kaltura and Procore IPOs, and whatever it is we can suss out from the Krispy Kreme S-1 filing, as donuts are life.
A huge fintech exit as the week ends – TechCrunch
To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.
Our thanks to everyone who wrote in this week about the format changes to the newsletter! Feedback largely sorted into two themes: Some people really like the more narrative format, and some folks really want a more link-list styled missive. What follows is an attempt to balance both perspectives.
Starting today we’ll bold company names, so that you can more quickly pick out startups, add more bulleted points to sections, and, per a different piece of feedback, include more regular descriptors of companies that are not household names.
That said, we’re not going to abandon chatting with you every day, as TechCrunch is nothing if not full of things to say. So here’s a blend of what the new, updated Daily Crunch team had in mind, and your notes. A big thanks to everyone who wrote in!
A mega-exit for American fintech
The news that public fintech company Bill.com will buy Divvy, a Utah-based startup that helps small and midsized businesses manage their spend, was perhaps the biggest startup story of the week. Breaking late Thursday, the $2.5 billion transaction was long expected. Divvy had raised more than $400 million from PayPal Ventures, New Enterprise Associates, Insight Partners and Pelion Venture Partners.
TechCrunch covered the impending sale, rumors of which sprang up before Bill.com reported its Q1 earnings, so seeing the company drop the news at the same time as its earnings was not a surprise. It’s a notable moment for the burgeoning corporate payment space (more here on startups in the space like Ramp, Airbase and Brex).
I got to noodle on the financial results that Bill.com detailed regarding Divvy — they are pretty key metrics to help us value the startups that are competing to go public or find a similarly feathered corporate nest. In short, the corporate spend startup cohort is doing great. It’s even spawning new startups like Latin American-focused Clara, which raised $3.5 million earlier this year.
Startups and venture capital
- Startup employees should pay attention to Biden’s capital gains tax plans — Vieje Piauwasdy, a director at Secfi, a company working to help startup employees manage equity, has notes on the current political climate in a key startup market, the United States.
- Tiger Global is betting that more schools are going to share future student earnings — Tiger Global invested in Blair, a startup that wants to help universities offer income share agreements, or ISAs, to students. Natasha has the latest on the trend, and, of course, the recently ubiquitous Tiger investing group.
- SoftBank leads $15M round for China’s industrial robot maker Youibot — Well-known Japanese conglomerate SoftBank’s Asian venture group is putting $15 million into Youibot, a Chinese startup that builds “autonomous mobile robots,” Rita reports.
- GajiGesa, a fintech focused on Indonesian workers, adds strategic investors and launches new app for micro-SMEs — GajiGesa, a startup that provides “earned wage access,” or EWA, in the Indonesian market, has raised an undisclosed amount of new capital, following its February venture round worth $2.5 million that was backed by Defy.vc and Quest Ventures.
5 investors discuss the future of RPA after UiPath’s IPO
Much ink (erm, pixels) has been spilled about robotic process automation (RPA) recently, particularly in the wake of UiPath’s IPO last month.
But while some of the individuals Ron interviewed about the future of RPA believe the technology is in its “early infancy,” the pandemic increased attention toward things we can let robots handle for us. And it’s hard to argue that repetitive tasks like billing and spreadsheeting and paper-pushing should not be outsourced to robots.
“RPA allows companies to automate a group of highly mundane tasks and have a machine do the work instead of a human,” Ron writes. “Think of finding an invoice amount in an email, placing the figure in a spreadsheet and sending a Slack message to accounts payable. You could have humans do that, or you could do it more quickly and efficiently with a machine. We’re talking mind-numbing work that is well suited to automation.”
Although RPA is the fastest-growing category in enterprise software, the market remains surprisingly small. Ron spoke to five investors about where the sector is headed, where there are opportunities and the biggest threats to the RPA startup ecosystem.
(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)
The tech giants
It was a quieter day from the tech giants, who made plenty of news earlier in the week. The good news is that their relative calm means we can take a look at news from other Big Tech companies, those that don’t quite crack the $1 trillion market cap threshold yet:
- Walmart’s Flipkart to cover insurance for all sellers in India and waive additional fees — Recall that American commerce giant Walmart owns Indian e-commerce giant Flipkart, which is “exempting storage and cancellation fees for sellers on its marketplace and also providing them with insurance coverage” in light of the COVID-19 surge in the country. A good move.
- Credit Karma reinvents cash-back rewards with instant payback — American consumer credit fintech Credit Karma, which sold to Intuit for more than $7 billion last year, is trying to reinvent the cash-back reward system popular among credit cards for its debit-card-using users, Matt reports.
- A conversation with Bison Trails, the AWS-like service inside of Coinbase — Now a public company, Coinbase, a cryptocurrency exchange with easy on-ramps to the more mainstream fiat banking world, has a secret little company helping power it from the inside called Bison Trails that it bought some time back. Connie digs in.
- Twitch UX teardown: The Anchor Effect and de-risking decisions — Finally, UX guru Peter Ramsey of Built For Mars tucks into Twitch, the popular streaming platform that Amazon bought years ago.
Tesla refutes Elon Musk’s timeline on ‘full self-driving’ – TechCrunch
What Tesla CEO Elon Musk says publicly about the company’s progress on a fully autonomous driving system doesn’t match up with “engineering reality,” according to a memo that summarizes a meeting between California regulators and employees at the automaker.
The memo, which transparency site Plainsite obtained via a Freedom of Information Act request and subsequently released, shows that Musk has inflated the capabilities of the Autopilot advanced driver assistance system in Tesla vehicles, as well as the company’s ability to deliver fully autonomous features by the end of the year.
Tesla vehicles come standard with a driver assistance system branded as Autopilot. For an additional $10,000, owners can buy “full self-driving,” or FSD — a feature that Musk promises will one day deliver full autonomous driving capabilities. FSD, which has steadily increased in price and capability, has been available as an option for years. However, Tesla vehicles are not self-driving. FSD includes the parking feature Summon as well as Navigate on Autopilot, an active guidance system that navigates a car from a highway on-ramp to off-ramp, including interchanges and making lane changes. Once drivers enter a destination into the navigation system, they can enable “Navigate on Autopilot” for that trip.
Tesla vehicles are far from reaching that level of autonomy, a fact confirmed by statements made by the company’s director of Autopilot software CJ Moore to California regulators, the memo shows.
“Elon’s tweet does not match engineering reality per CJ,” according to the memo summarizing the conversation between regulators with the California Department of Motor Vehicles’ autonomous vehicles branch and four Tesla employees, including Moore.
The memo, which was written by California DMV’s Miguel Acosta, states that Moore described Autopilot — and the new features being tested — as a Level 2 system. That description matters in the world of automated driving.
There are five levels of automation under standards created by SAE International. Level 2 means two primary functions — like adaptive cruise and lane keeping — are automated and still have a human driver in the loop at all times. Level 2 is an advanced driver assistance system, and has become increasingly available in new vehicles, including those produced by Tesla, GM, Volvo and Mercedes. Tesla’s Autopilot and its more capable FSD were considered the most advanced systems available to consumers. However, other automakers have started to catch up.
Level 4 means the vehicle can handle all aspects of driving in certain conditions without human intervention and is what companies like Argo AI, Aurora, Cruise, Motional, Waymo and Zoox are working on. Level 5, which is widely viewed as a distant goal, would handle all driving in all environments and conditions.
Here is an important bit via Acosta’s summarization:
DMV asked CJ to address from an engineering perspective, Elon’s messaging about L5 capability by the end of the year. Elon’s tweet does not match engineering reality per CJ. Tesla is at Level 2 currently. The ratio of driver interaction would need to be in the magnitude of 1 or 2 million miles per driver interaction to move into higher levels of automation. Tesla indicated that Elon is extrapolating on the rates of improvement when speaking about L5 capabilities. Tesla couldn’t say if the rate of improvement would make it to L5 by end of calendar year.
Portions of this commentary were redacted. However, Plainsite was able to copy and paste the redacted part, which shows up as white space on a PDF, into another document.
The comments in the memo are contrary to what Musk has said repeatedly in the public sphere.
Musk is frequently asked on Twitter and in quarterly earnings calls for progress reports on FSD, including questions about when it will be rolled out via software updates to owners who have purchased the option. In a January earnings call, Musk said he was “highly confident the car will be able to drive itself with reliability in excess of a human this year.” In April 2021, during the company’s first quarter earnings call, Musk said “it’s really quite, quite tricky. But I am highly confident that we will get this done.”
The memo released this week provided other insights into Tesla’s push to test and eventually unlock greater levels of autonomy, including the number of vehicles testing a beta version of “Navigate on Autopilot on City Streets,” a feature that is meant to handle driving in urban areas and not just highways. Regulators also asked the Tesla employees if and how participants were being trained to test this feature, and how the sales team ensures that messaging about the vehicle’s capabilities and limitations is communicated.
As of the March meeting, there were 824 vehicles in a pilot program testing a beta version of “city streets.” About 750 of those vehicles were being driven by employees and 71 by non-employees. Pilot participants are located across 37 states, with the majority of participants in California. As of March 2021, pilot participants have driven more than 153,000 miles using the City Streets feature, the memo states. The memo noted that Tesla planned to expand this pool of participants to approximately 1,600 later that month.
Tesla told the DMV that it is working on developing a video for the participants and that the next group of participants will include referrals from existing participants. “The new participants will be vetted by Tesla by looking at insurance telematics based on the VINs registered to that participant,” according to the memo.
Tesla also told the DMV that it is able to track when there are failures or when the feature is deactivated. Moore described these as “disengagements,” a term also used by companies testing and developing autonomous vehicle technology. The primary difference worth noting here is that these companies only use employees who are trained safety drivers, not the public.
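The memo’s key threshold is just a ratio: miles driven per driver interaction. A minimal sketch of that arithmetic, using the pilot’s reported 153,000 miles but an invented disengagement count (the memo does not disclose Tesla’s actual disengagement numbers):

```python
def miles_per_disengagement(total_miles, disengagements):
    # The memo suggests this ratio would need to reach roughly
    # 1-2 million miles per driver interaction before Tesla could
    # credibly claim a higher level of automation.
    if disengagements == 0:
        return float("inf")
    return total_miles / disengagements

TARGET = 1_000_000  # lower end of the range cited in the memo

# 153,000 miles is the pilot figure reported above; the disengagement
# count here is purely hypothetical.
rate = miles_per_disengagement(total_miles=153_000, disengagements=1_000)
print(rate)           # miles per disengagement
print(rate >= TARGET) # whether the hypothetical fleet clears the memo's bar
```

Framed this way, the memo’s point is stark: the bar for “higher levels of automation” sits orders of magnitude above what a modest pilot fleet could demonstrate.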
Toyota AI Ventures and May Mobility will talk the future of the transportation industry on Extra Crunch Live – TechCrunch
Besides a passion for progress in the mobility space, what do Toyota AI Ventures’ Jim Adler, May Mobility’s Nina Grooms Lee and May’s Edwin Olson have in common?
All three of them are joining us on an upcoming episode of Extra Crunch Live. The show goes down on May 12 at 3pm ET/noon PT. Register here for free!
May Mobility is one of the most exciting companies to enter the transportation space in the past decade. The autonomous shuttle company has a fleet of autonomous low-speed shuttles spread out between Detroit, Grand Rapids and Providence. Recently, May launched a Lexus-based autonomous shuttle. The company has raised $83.6 million in funding, including a $50 million Series B led by Toyota Motor Corp.
Which brings us to this episode of Extra Crunch Live.
Toyota AI Ventures Founding Managing Director Jim Adler will sit down with May Mobility Chief Product Officer Nina Grooms Lee and May co-founder and CEO Edwin Olson to discuss how that Series B deal came about. We’ll talk about what made May stand out to Toyota, and vice versa, and how the teams have worked together since.
We’ll also talk about what to expect out of the ever-changing and growing mobility industry.
Following the interview, Grooms Lee, Olson and Adler will weigh in on startup pitches from the audience. Yup, that’s right. Our ECL audience will once again have the chance to pitch our seasoned tech professionals. Attendees can virtually “raise their hand” as soon as our virtual doors open and throw their hat in the ring for an opportunity to make a 2-minute elevator pitch. The idea is to mimic running into a VC or potential customer at a tech conference like Disrupt, or bumping into them at a park; as such, no visual aids are allowed, including decks, videos, demos, etc. Excited? Smash this link to register for free!
Extra Crunch Live goes down every Wednesday at 3pm ET/noon PT and is accessible to anyone and everyone. However, on-demand access to the content is reserved exclusively for Extra Crunch members. If you’re not yet a member, what are you waiting for?
SpaceX might try to fly the first Starship prototype to successfully land a second time – TechCrunch
SpaceX is fresh off a high for its Starship spacecraft development program, but according to CEO Elon Musk, it’s already looking ahead to potentially repeating its latest success with an unplanned early reusability experiment. Earlier this week, SpaceX flew the SN15 (i.e., 15th prototype) of its Starship from its development site near Brownsville, Texas, and succeeded in landing it upright for the first time. Now, Musk says they could fly the same prototype a second time, a first for the Starship test and development effort.
The successful launch and landing on Wednesday included an ascent to around 30,000 feet, where the 150-foot tall spacecraft flipped onto its ‘belly’ and then descended back to Earth, returning vertical and firing its engines to slow its descent and touch down softly standing upright. This atmospheric testing is a key step meant to help prove out the technologies and systems that will later help Starship return to Earth after its orbital launches. The full Starship launch system is intended to be completely reusable, including this vehicle (which will eventually serve as the upper stage) and the Super Heavy booster that the company is also in the process of developing.
A second test flight of SN15 is an interesting possibility among the options for the prototype. SpaceX will obviously be conducting a number of other check-outs and gathering as much data as it can from the vehicle, in addition to whatever it collected from onboard sensors, but the options for the craft after that basically amount to stress testing it to failure, or dismantling it and studying the pieces. A second flight attempt is an additional option that could provide SpaceX with a lot of invaluable data about its planned reuse of the production version of Starship.
Whether or not SpaceX actually does re-fly SN15 is still up in the air, but if it does end up being technically possible, it seems like a great learning opportunity for SpaceX that could help fast-track the overall development program.