
AI Should Be Decentralized, But How?


The case for greater transparency and verifiability in AI. But is decentralization the best way to achieve that, and is it feasible in practice? Jesus Rodriguez says the technical challenges are tremendous.

Source link

Bybit to halt operations in the UK amid regulatory changes



  • Bybit will stop new account registrations on October 1 and pause deposits and other services for existing customers on October 8.
  • The exchange’s announcement comes ahead of the effective date for new FCA rules for crypto firms.

Bybit has notified its customers that it will suspend its services in the UK next month as it looks to align operations with new regulatory requirements.

According to the crypto exchange, the decision is “in light of the UK Financial Conduct Authority’s introduction of new rules regarding marketing and communications by crypto businesses as outlined in the June 2023 Policy Statement (PS23/6), entitled ‘Financial Promotion Rules for Crypto assets’.”

The suspension will allow the exchange to focus resources on meeting the new regulation, Bybit said in the announcement.

No new customer registrations beginning October 1

No new registrations will be allowed from October 1, while services to existing customers will be halted on October 8.

For current Bybit customers, the effective date will see them unable to make deposits, create new contracts, or increase positions. However, users will still be able to reduce or close positions as well as withdraw funds.

All UK customers have been asked to take the necessary steps to manage and wind down positions. The timeline for such action will run until January 8, 2024, 8:00 am UTC.

“After the stipulated deadline, their open positions will be liquidated, and the liquidation funds will be available for withdrawal,” the exchange noted.

Bybit’s move comes ahead of the October 8 deadline for companies to comply with the new UK marketing rules on ads and promotions. Under the rules, a crypto company must have regulatory approval from the Financial Conduct Authority (FCA) before it can run ads and other promotional materials.

At the moment, the FCA’s crypto register does not include Dubai-based Bybit. The company recently scored a crypto exchange and custody licence in Cyprus.



Source link

NFTs and Real Estate Tokenization: A Game-Changer



Are you curious about the next major development in the world of non-fungible tokens (NFTs)? Well, it seems that real estate tokenization is poised to become the next big thing in NFTs.

Lately, the concept of tokenizing real estate assets has been making waves in the blockchain community, and for good reason. By tokenizing real estate, individuals can invest in property without having to buy the entire building or land outright. This approach allows investors to break down their investments into smaller, more manageable portions while still ensuring secure ownership.

Let’s dive deeper into the mechanics of real estate tokenization and explore how it could revolutionize real estate investment.

Understanding Real Estate Tokenization

For those interested in real estate investment or simply keeping an eye on industry trends, you may have encountered the term “tokenization.” In essence, tokenization involves converting tangible assets, like real estate properties, into smaller digital tokens. Each token represents a fraction of the original asset, and these tokens can be securely traded or sold on blockchain networks.

So, what does this mean for real estate? With blockchain technology and smart contracts, property owners can divide their assets into smaller, more accessible pieces for global investors. This process simplifies real estate investments that might otherwise be out of reach for many.

Smart contracts, those self-executing lines of code on the blockchain, are crucial here. They encode the terms and conditions of the fractionalized property, specifying who earns what portion of the profits and who holds specific rights to the property. Interestingly, non-fungible tokens (NFTs), often associated with art or collectibles, are also finding a role in real estate tokenization. ERC-721 tokens on the Ethereum blockchain are commonly used for this purpose.

How Does Real Estate Tokenization Operate?

Real estate tokenization is a groundbreaking concept that has captivated the real estate industry. It entails dividing a property into smaller fractions using blockchain-based smart contracts. This means that anyone who buys or owns a token possesses a share of the real estate and can claim a proportionate share of profits and losses.

For instance, imagine you own a property worth $100,000 but urgently need cash. In this scenario, you can tokenize your property by creating digital tokens representing it and offering these tokens for sale at affordable prices. Buyers can then invest in your property by purchasing as many tokens as they wish, securing a stake in your real estate, as the sketch below illustrates.
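To make the arithmetic concrete, here is a minimal Python sketch of the fractional-ownership bookkeeping described above. It is purely illustrative: the class name and the 100,000-token split are hypothetical choices rather than any particular token standard, and a real deployment would encode this logic in an on-chain smart contract rather than in off-chain Python.

```python
class TokenizedProperty:
    """Toy model of a fractionalized property (illustrative only)."""

    def __init__(self, value_usd: float, total_tokens: int):
        self.value_usd = value_usd
        self.total_tokens = total_tokens
        self.holdings = {}  # investor -> number of tokens held

    @property
    def token_price(self) -> float:
        return self.value_usd / self.total_tokens

    def buy(self, investor: str, tokens: int) -> float:
        """Record a purchase and return its cost in USD."""
        self.holdings[investor] = self.holdings.get(investor, 0) + tokens
        return tokens * self.token_price

    def ownership_share(self, investor: str) -> float:
        """Fraction of the property (and of its profits or losses) owned."""
        return self.holdings.get(investor, 0) / self.total_tokens


# The $100,000 property from the example, split into 100,000 tokens of $1 each.
prop = TokenizedProperty(value_usd=100_000, total_tokens=100_000)
cost = prop.buy("alice", 2_500)                 # buy 2,500 tokens for $2,500
print(cost, prop.ownership_share("alice"))      # 2500.0 and 0.025, i.e. a 2.5% stake
```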

Tokenization enhances transparency because it operates on a decentralized network. Every transaction is publicly recorded on the blockchain, ensuring complete openness. This approach can be applied to various property types, from commercial and residential to prestigious trophy assets. Even high-value commercial properties can be fractionalized, opening doors for smaller investors while enabling property owners to quickly liquidate their assets.

Furthermore, tokenization is not restricted to property alone; it can extend to digitized shares of deeds, equity interests in legal entities, or ownership of collateralized debt. The advent of NFTs has expanded the possibilities for investors and property owners alike.

Advantages of Real Estate Tokenization

1. Transparent and Efficient Transactions:

The automated process cuts down traditional paperwork, reducing manual intervention and the potential for unfair deals. Furthermore, transactions are not confined to typical office hours, a convenience afforded by blockchain technology.

2. Enhanced Liquidity:

Tokenization addresses the traditional liquidity challenge in real estate by enabling fractional ownership units to be bought and sold digitally. This ease of access allows investors to enter the market more seamlessly, and property owners can swiftly sell their tokens, leading to an increase in market liquidity.

3. Opportunities for Small Investors:

By reducing the cost of and entry barriers to real estate investment, tokenization enables investors to purchase small amounts of equity in multiple properties. This allows for portfolio diversification and reduced risk. Unlike traditional investment vehicles such as REITs, tokenization offers transparency and direct control over investments.

4. A Free-Flowing Market:

Real estate tokenization fosters a decentralized financial system, providing equal opportunities to all stakeholders and eliminating biases and over-regulation. Fractional ownership simplifies decision-making, establishing a transparent and trustworthy market structure. Moreover, tokenization enhances accessibility, inviting individuals with limited resources to participate in the sector.

5. Reduced Counter-Party Risk:

Finally, tokenization significantly reduces counter-party risk. By streamlining transactions and minimizing the number of involved parties, counter-party risk is greatly reduced. Digital assets on blockchain platforms are easily bought and sold, and the enforcement of smart contracts ensures smooth, fraud-resistant transactions.

Challenges of Real Estate Tokenization

1. Smart Contract Security:

Despite blockchain being renowned for its security, the associated smart contracts may contain vulnerabilities that hackers can exploit. Such weaknesses could lead to significant breaches, potentially resulting in irrecoverable asset loss. Therefore, regular audits are non-negotiable to ensure the robustness of the system against such incidents.

2. Complex Licensing:

Complex Licensing represents another substantial obstacle in the path of implementing tokenized real estate assets. Acquiring the necessary licenses for security token offerings (STOs) can be a complex and daunting process. The intricate web of regulations and requirements often proves to be a hindrance to the smooth execution of tokenization in the real estate sector.

3. Regulatory Issues in Tokenization:

Regulatory Issues can also pose challenges in the advancement of real estate tokenization. There exists a considerable gap in understanding blockchain technology among many regulators and real estate professionals. This lack of knowledge often translates into regulatory ambiguity, which can inhibit the growth and widespread acceptance of this technology in real estate.

4. Tax Complexities:

Tax complexities associated with cryptocurrencies add to the uncertainty for stakeholders. Global tax regimes have yet to establish clear rules for cryptocurrency taxation, and this ambiguity can discourage stakeholders from participating in real estate tokenization.

Conclusion

Real estate tokenization holds the potential to transform real estate investment and transactions. By reducing counter-party risk, establishing a transparent market, and enabling fractional ownership, it promises to make real estate more accessible than ever.

However, challenges such as smart contract security, licensing complexities, regulatory issues, and tax uncertainties must be addressed for tokenized real estate to thrive. If these obstacles can be overcome, tokenization may become an appealing option for investors worldwide.

Explore the benefits of NFTICALLY, a user-friendly platform to launch your NFT marketplace and list tokenized assets alongside non-fungible tokens. Visit the platform today and discover its advantages.

Source link

Pancakeswap integrates with Transak to ease crypto purchases with fiat



Key takeaways

  • Pancakeswap has integrated with Transak for fiat onboarding on multiple chains.

  • Users can now purchase cryptocurrencies with debit cards, Google Pay, Apple Pay and other methods via Transak.

Pancakeswap integrates with Transak

Decentralised exchange PancakeSwap has integrated with Transak, a developer integration toolkit that allows users to buy/sell crypto in any app, website or web plugin.

Transak is now part of the Pancakeswap “buy crypto” tab, enabling Pancakeswap users to purchase cryptocurrencies using a wide range of payment options.

According to the announcement, Pancakeswap users can now purchase cryptocurrencies with fiat currencies using numerous options such as credit cards, bank transfers, Google Pay, and Apple Pay.

This isn’t the first fiat purchase option integrated by Pancakeswap. The decentralised exchange had previously integrated Mercuryo and MoonPay into its platform, with Transak now the third payment option available to users. 

Pancakeswap is one of the leading decentralised exchanges in the world. It is currently available on multiple blockchain networks, including BNB Smart Chain, Ethereum, Base and Polygon zkEVM.

The DEX currently has more than $1.3 billion worth of cryptocurrencies in its contracts and processes more than $150 million in daily trading volume. 

Transak is available on seven blockchains

In the announcement, PancakeSwap said Transak is available for nine cryptocurrencies across seven blockchains at launch. The supported chains include Ethereum, Polygon zkEVM, zkSync Era, Linea, Base, BNB Chain, and Arbitrum.

While commenting on this latest cryptocurrency news, PancakeSwap’s pseudonymous leader, head chef Mochi, said,

“It’s imperative that entry points remain simple yet robust. Transak’s expertise in fiat on-ramping, combined with PancakeSwap’s platform capabilities, promises an era where diving into decentralised finance is intuitive and barrier-free for all.”

Pancakeswap has been building despite the ongoing bear market. In August, Pancakeswap V3 launched on the Ethereum Layer 2 Linea mainnet. Linea is an EVM-compatible zero-knowledge proofs network developed by ConsenSys.

Source link

Foundational models at the edge



Foundational models (FMs) are marking the beginning of a new era in machine learning (ML) and artificial intelligence (AI), which is leading to faster development of AI that can be adapted to a wide range of downstream tasks and fine-tuned for an array of applications. 

With the increasing importance of processing data where work is being performed, serving AI models at the enterprise edge enables near-real-time predictions, while abiding by data sovereignty and privacy requirements. By combining the IBM watsonx data and AI platform capabilities for FMs with edge computing, enterprises can run AI workloads for FM fine-tuning and inferencing at the operational edge.  This enables enterprises to scale AI deployments at the edge, reducing the time and cost to deploy with faster response times.

Please make sure to check out all the installments in this series of blog posts on edge computing.

What are foundational models?

Foundational models (FMs), which are trained on a broad set of unlabeled data at scale, are driving state-of-the-art artificial intelligence (AI) applications. They can be adapted to a wide range of downstream tasks and fine-tuned for an array of applications. Modern AI models, which execute specific tasks in a single domain, are giving way to FMs because they learn more generally and work across domains and problems. As the name suggests, an FM can be the foundation for many applications of the AI model.

FMs address two key challenges that have kept enterprises from scaling AI adoption. First, enterprises produce a vast amount of unlabeled data, only a fraction of which is labeled for AI model training. Second, this labeling and annotation task is extremely human-intensive, often requiring several hundred hours of a subject matter expert’s (SME) time. This makes it cost-prohibitive to scale across use cases since it would require armies of SMEs and data experts. By ingesting vast amounts of unlabeled data and using self-supervised techniques for model training, FMs have removed these bottlenecks and opened the avenue for widescale adoption of AI across the enterprise. These massive amounts of data that exist in every business are waiting to be unleashed to drive insights.

What are large language models?

Large language models (LLMs) are a class of foundational models (FM) that consist of layers of neural networks that have been trained on these massive amounts of unlabeled data. They use self-supervised learning algorithms to perform a variety of natural language processing (NLP) tasks in ways that are similar to how humans use language (see Figure 1).

Figure 1. Large language models (LLMs) have taken the field of AI by storm.

Scale and accelerate the impact of AI

There are several steps to building and deploying a foundational model (FM). These include data ingestion, data selection, data pre-processing, FM pre-training, model tuning to one or more downstream tasks, inference serving, and data and AI model governance and lifecycle management—all of which can be described as FMOps.

To help with all this, IBM is offering enterprises the necessary tools and capabilities to leverage the power of these FMs via IBM watsonx, an enterprise-ready AI and data platform designed to multiply the impact of AI across an enterprise. IBM watsonx consists of the following:

  1. IBM watsonx.ai brings new generative AI capabilities—powered by FMs and traditional machine learning (ML)—into a powerful studio spanning the AI lifecycle.
  2. IBM watsonx.data is a fit-for-purpose data store built on an open lakehouse architecture to scale AI workloads for all of your data, anywhere.
  3. IBM watsonx.governance is an end-to-end automated AI lifecycle governance toolkit that is built to enable responsible, transparent and explainable AI workflows.

Another key vector is the increasing importance of computing at the enterprise edge, such as industrial locations, manufacturing floors, retail stores, telco edge sites, etc. More specifically, AI at the enterprise edge enables the processing of data where work is being performed for near real-time analysis. The enterprise edge is where vast amounts of enterprise data are being generated and where AI can provide valuable, timely and actionable business insights.

Serving AI models at the edge enables near-real-time predictions while abiding by data sovereignty and privacy requirements. This significantly reduces the latency often associated with the acquisition, transmission, transformation and processing of inspection data. Working at the edge allows us to safeguard sensitive enterprise data and reduce data transfer costs with faster response times.

Scaling AI deployments at the edge, however, is not an easy task amid challenges related to data (heterogeneity, volume and regulation) and constrained resources (compute, network connectivity, storage and even IT skills). These can broadly be described in two categories:

  • Time/cost to deploy: Each deployment consists of several layers of hardware and software that need to be installed, configured and tested prior to deployment. Today, a service professional can take up to a week or two for installation at each location, severely limiting how fast and cost-effectively enterprises can scale up deployments across their organization.                                  
  • Day-2 management: The vast number of deployed edges and the geographical location of each deployment could often make it prohibitively expensive to provide local IT support at each location to monitor, maintain and update these deployments.

Edge AI deployments

IBM developed an edge architecture that addresses these challenges by bringing an integrated hardware/software (HW/SW) appliance model to edge AI deployments. It consists of several key paradigms that aid the scalability of AI deployments:

  • Policy-based, zero-touch provisioning of the full software stack.
  • Continuous monitoring of edge system health.
  • Capabilities to manage and push software/security/configuration updates to numerous edge locations—all from a central cloud-based location for day-2 management.

A distributed hub-and-spoke architecture can be utilized to scale enterprise AI deployments at the edge, wherein a central cloud or enterprise data center acts as a hub and the edge-in-a-box appliance acts as a spoke at an edge location. This hub and spoke model, extending across hybrid cloud and edge environments, best illustrates the balance necessary to optimally utilize resources needed for FM operations (see Figure 2).

Figure 2. A hub-and-spoke deployment configuration for enterprise AI at edge locations.

Pre-training of these base large language models (LLMs) and other types of foundation models using self-supervised techniques on vast unlabeled datasets often needs significant compute (GPU) resources and is best performed at a hub. The virtually limitless compute resources and large data piles often stored in the cloud allow for pre-training of large parameter models and continual improvement in the accuracy of these base foundation models.

On the other hand, tuning of these base FMs for downstream tasks—which only require a few tens or hundreds of labeled data samples and inference serving—can be accomplished with only a few GPUs at the enterprise edge. This allows for sensitive labeled data (or enterprise crown-jewel data) to safely stay within the enterprise operational environment while also reducing data transfer costs.
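As a rough illustration of what that edge-side tuning can look like, the sketch below fine-tunes only the classification head of a publicly available vision transformer on a small labeled image folder, the kind of few-GPU workload described above. The checkpoint name, folder layout and six-class setup are assumptions standing in for an enterprise FM, not IBM's actual stack.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from transformers import ViTForImageClassification

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",  # public checkpoint as a stand-in
    num_labels=6,                          # e.g. six defect classes
)
for p in model.vit.parameters():           # freeze the pre-trained backbone
    p.requires_grad = False

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
train_ds = datasets.ImageFolder("labeled_edge_samples/", transform=preprocess)
loader = DataLoader(train_ds, batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.classifier.parameters(), lr=1e-3)
model.train()
for epoch in range(3):                     # a few passes is often enough for a head
    for images, labels in loader:
        outputs = model(pixel_values=images, labels=labels)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("finetuned-defect-model")  # hand off to inference serving
```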

Using a full-stack approach for deploying applications to the edge, a data scientist can perform fine-tuning, testing and deployment of the models. This can be accomplished in a single environment while shrinking the development lifecycle for serving new AI models to the end users. Platforms like the Red Hat OpenShift Data Science (RHODS) and the recently announced Red Hat OpenShift AI provide tools to rapidly develop and deploy production-ready AI models in distributed cloud and edge environments.

Finally, serving the fine-tuned AI model at the enterprise edge significantly reduces the latency often associated with the acquisition, transmission, transformation and processing of data. Decoupling the pre-training in the cloud from fine-tuning and inferencing on the edge lowers the overall operational cost by reducing the time required and data movement costs associated with any inference task (see Figure 3).

Figure 3. Value proposition for FM fine-tuning and inference at the operational edge with an edge-in-a-box. An exemplar use-case with a civil engineer deploying such an FM model for near-real-time defect-detection insights using drone imagery inputs.

To demonstrate this value proposition end-to-end, an exemplar vision-transformer-based foundation model for civil infrastructure (pre-trained using public and custom industry-specific datasets) was fine-tuned and deployed for inference on a three-node edge (spoke) cluster. The software stack included the Red Hat OpenShift Container Platform and Red Hat OpenShift Data Science. This edge cluster was also connected to an instance of Red Hat Advanced Cluster Management for Kubernetes (RHACM) hub running in the cloud.

Zero-touch provisioning

Policy-based, zero-touch provisioning was done with Red Hat Advanced Cluster Management for Kubernetes (RHACM) via policies and placement tags, which bind specific edge clusters to a set of software components and configurations. These software components—extending across the full stack and covering compute, storage, network and the AI workload—were installed using various OpenShift operators, along with the provisioning of requisite application services and S3 bucket storage.

The pre-trained foundational model (FM) for civil infrastructure was fine-tuned via a Jupyter Notebook within Red Hat OpenShift Data Science (RHODS) using labeled data to classify six types of defects found on concrete bridges. Inference serving of this fine-tuned FM was also demonstrated using a Triton server. Furthermore, monitoring of the health of this edge system was made possible by aggregating observability metrics from the hardware and software components via Prometheus to the central RHACM dashboard in the cloud. Civil infrastructure enterprises can deploy these FMs at their edge locations and use drone imagery to detect defects in near real-time—accelerating the time-to-insight and reducing the cost of moving large volumes of high-definition data to and from the Cloud.
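For the inference-serving step, an edge application can query the Triton server with the standard Python client along these lines. This is a minimal sketch: the model name and the input/output tensor names are hypothetical placeholders, since the post does not publish the deployed model's configuration.

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# One preprocessed 224x224 RGB drone-image crop, NCHW float32.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)

inp = httpclient.InferInput("pixel_values", list(image.shape), "FP32")
inp.set_data_from_numpy(image)
out = httpclient.InferRequestedOutput("logits")

result = client.infer(model_name="defect_classifier", inputs=[inp], outputs=[out])
logits = result.as_numpy("logits")
print("predicted defect class:", int(np.argmax(logits, axis=-1)[0]))
```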

Summary

Combining IBM watsonx data and AI platform capabilities for foundation models (FMs) with an edge-in-a-box appliance allows enterprises to run AI workloads for FM fine-tuning and inferencing at the operational edge. This appliance can handle complex use cases out of the box, and it builds the hub-and-spoke framework for centralized management, automation and self-service. Edge FM deployment times can be cut from weeks to hours, with repeatable success, higher resiliency and stronger security.

Learn more about foundational models

Please make sure to check out all the installments in this series of blog posts on edge computing.

Source link

VeChain (VET) officially launches its self-custody wallet



  • VeChain has officially launched VeWorld, its self-custody wallet.
  • VeWorld supports features such as VIP-180 tokens, currency conversion and fee delegation.

VeChain has announced the official launch of VeWorld, the enterprise-focused L1 blockchain’s new self-custody wallet.

VeWorld integrates with WalletConnect, an open-source blockchain standard that allows users’ wallets to connect and interact with decentralised applications (dApps) and other wallets. It’s a bridge that connects the Web3 wallet to the dApps ecosystem, bringing the benefits of interoperability to developers and the broader VeChain user base.

As such, the mobile wallet’s unveiling means the VeChain ecosystem has added a key piece of infrastructure for crypto. It also marks a milestone achievement for the developer team.

“Our developers have been working diligently, day and night, to build and deliver a superior VeChain wallet. Today marks the culmination of those efforts,” reads part of a blog post announcing VeWorld.

Features include currency conversion and fee delegation

VeWorld’s first iteration includes features such as support for VIP-180 tokens (VeChain’s native token standard), currency conversion (either in Euro or USD), and fee delegation. The wallet supports both iOS and Android.

In the future, VeWorld will be upgraded to bring numerous features to users, including VeChain dApp store integration, fiat on/off ramp, DEX functionality, support for asset-bridging and carbon footprint tracking.

VeChain (VET) price

VeChain (VET) traded around $0.01720062, roughly 2.6% down in the past 24 hours. The declines for VET came as the broader crypto market dropped 1.8% amid price slips for Bitcoin (BTC) and Ethereum (ETH). 

BTC was down to $26,600 and ETH to below $1,600 as markets reacted to central bank monetary policies. Continued FUD across the crypto market was also weighing on sentiment.



Source link

Artificial Intelligence and the Metaverse (And a Look at an AI-Assisted Social VR Platform, Riff XR) – Ryan Schultz



I created this image using OpenAI’s DALL-E generative AI art generation tool, using the text prompt “artificial intelligence in the metaverse” (source)

Housekeeping Note: I first started writing this editorial back in April, and from time to time I have picked up the draft, tinkered with it a bit more, added a bit more to it—and then promptly filed it away again as a draft, because I still wasn’t satisfied with it, and I always felt that I had something more to say.

Enough. I finally decided that the perfect was the enemy of the good, and I decided today to just go ahead and publish what I already had, and then write follow-up blogposts on the topic of AI in general, and AI in the metaverse in particular. And I do expect that I will return to this topic often! So please stay tuned.

I have written before on this blog about artificial intelligence (AI) applications, such as the image manipulation and animation tools WOMBO and Reface, the text-to-art creation programs DALL-E 2, Midjourney, and Stable Diffusion, and most recently, the AI-powered chatbot Replika and the text-generation app ChatGPT. Most people, myself included, treated them as toys, mere curiosities (I entertained myself for hours making my Second Life and Sansar avatars “come alive” using WOMBO). John Hermann, in a recent article for New York magazine titled The AI Magic Show (original; archived version), wrote:

In 2022, artificial-intelligence firms produced an overwhelming spectacle, a rolling carnival of new demonstrations. Curious people outside the tech industry could line up to interact with a variety of alluring and mysterious machine interfaces, and what they saw was dazzling.

The first major attraction was the image generators, which converted written commands into images, including illustrations mimicking specific styles, photorealistic renderings of described scenarios, as well as objects, characters, textures, or moods. Similar generators for video, music, and 3-D models are in development, and demos trickled out.

Soon, millions of people encountered ChatGPT, a conversational bot built on top of a large language model. It was by far the most convincing chatbot ever released to the public. It felt, in some contexts, and especially upon first contact, as though it could actually participate in something like conversation. What many users suggested felt truly magical, however, were the hints at the underlying model’s broader capabilities. You could ask it to explain things to you, and it would try — with confident and frequently persuasive results. You could ask it to write things for you — silly things, serious things, things that you might pass off as work product or school assignments — and it would.

As new users prompted these machines to show us what they could do, they repeatedly prompted us to do a little dirty extrapolation of our own: If AI can do this already, what will it be able to do next year?

As Charlie Warzel writes in The Atlantic, in a recent article titled What Have We Just Unleashed? (original; archived version), not even the AI experts know exactly what will come next:

Over the past few weeks, I’ve put questions like these to AI researchers, academics, entrepreneurs, and people who are currently building AI applications. I’ve become obsessive about trying to wrap my head around this moment, because I’ve rarely felt less oriented toward a piece of technology than I do toward generative AI. When reading headlines and academic papers or simply stumbling into discussions between researchers or boosters on Twitter, even the near future of an AI-infused world feels like a mirage or an optical illusion. Conversations about AI quickly veer into unfocused territory and become kaleidoscopic, broad, and vague. How could they not?

The more people I talked with, the more it became clear that there aren’t great answers to the big questions. Perhaps the best phrase I’ve heard to capture this feeling comes from Nathan Labenz, an entrepreneur who builds AI video technology at his company, Waymark: “Pretty radical uncertainty.”


You might be surprised to learn that we librarians are already preparing for the AI onslaught (perhaps not surprising, as some of the users we serve have already come to us with fake, AI-generated citations to information resources which do not exist!). I have seen no fewer than three events for Canadian librarians on this subject in the past couple of weeks, and at my employer, the University of Manitoba in Winnipeg, a fellow librarian has launched an AI Community of Practice (CoP), which will meet regularly to discuss ChatGPT and other AI apps and topics.

Librarians meeting via Zoom at the inaugural meeting of the brand-new University of Manitoba Libraries’ AI Community of Practice (I’m in the top row, second from the left!)

You.com is one of the newer entrants in the AI-powered search engine marketplace, and it also offers AI-generated text and images. You can actually try out YouWrite (their AI-generated text generator) for free ten times before you have to shell out for the service, so I decided to give it a whirl, asking the site to create a professional-sounding blogpost about the impact of artificial intelligence on the metaverse:

And here’s the blogpost YouWrite spit back, exactly as generated, with zero edits:

The advent of artificial intelligence (AI) has revolutionized the way we interact with technology. AI is now being used to create virtual worlds, or “metaverses”, that are populated by digital avatars and objects that can be manipulated in real-time. As AI continues to evolve, it will have a profound impact on the metaverse and its inhabitants.

One of the most significant impacts of AI on the metaverse will be increased realism for users interacting within these virtual worlds. With advanced machine learning algorithms and natural language processing capabilities, computer-generated characters can become more lifelike than ever before – responding naturally to user input in real time while exhibiting realistic emotions such as joy or sadness depending on their environment. This could potentially lead to an unprecedented level of immersion for those engaging with these digital realms as they become indistinguishable from reality itself!

Another major advantage brought about by artificial intelligence is improved efficiency when it comes to managing large numbers of players at once – something which becomes particularly important when dealing with massive multiplayer online games (MMOs). By utilizing intelligent agents capable of autonomous decision making based off data collected from other players’ actions, developers can ensure smooth gameplay even during peak hours without sacrificing any quality due performance issues caused by too many people playing at once!

Finally, one area where AI could really shine in regards to improving user experience within a metaverse would be through automated content creation tools powered by deep learning models trained using vast amounts existing assets found across different platforms such as video games or movies etcetera… Such systems would allow game designers/developers quick access powerful resources allowing them craft unique experiences much faster than was previously possible – thus providing gamers exciting new adventures every time they log into their favourite world!

In conclusion then; It’s clear see how Artificial Intelligence stands revolutionize our concept what constitutes an immersive experience both inside out Metaverses – giving us greater control over our interactive environments alongside richer storylines filled dynamic characters all thanks advances this cutting edge technology brings us closer achieving true Virtual Reality today tomorrow alike…

That last paragraph is absolutely dreadful (almost as if the AI had a stroke!), and while the rest reads well and contains a few somewhat plausible, if borderline-fanciful, ideas, if I could sum it all up in one word, that word would be: bullshit. This is computer-generated bullshit, produced by a large language model trained to sound like an actual human expert, but it’s just parroting human writing, without any grasp of the knowledge it is talking about! (I can’t remember who said it first, but somebody once memorably and hilariously referred to AI-generated text apps like ChatGPT as “mansplaining as a service.” In fact, I would go so far as to state that generative AI tools like ChatGPT offer white, cisgender, mansplaining as a service! All the biases in the mountains of data—scraped off the greater unwashed internet—used to train these tools sometimes come out in their responses, despite the best efforts of the companies building them to eradicate these biases.)

Despite appearances, Chat GPT doesn’t really understand the world the way a human brain, with all of its lived experiences, does; it only understands how to generate plausible-sounding sentences and assemble them in coherent paragraphs! It’s a narrowly-defined problem, not general AI that is good at a variety of tasks, and certainly not a rival to humans.


Hermann, in his New York magazine article, paints a somewhat disquieting picture of what could happen in the future, as the AI wave accelerates:

Models trained on flawed, biased, and often secret sets of data will be used to attempt to perform an assuredly ambitious range of tasks, jobs, and vital economic and social processes that affect the lives of regular people. They will depend on access to massive amounts of computing power, meaning expensive computer hardware, meaning rare minerals, and meaning unspeakable amounts of electricity. These models will be trained with the assistance of countless low-paid labourers around the world who will correct bogus statistical assumptions until the models produce better, or at least more desirable, outputs. They will then be passed on for use in various other workplaces where their outputs and performances will be corrected and monitored by better-paid workers trying to figure out if the AI models are helping them or automating them out of a job, while their bosses try to figure out something similar about their companies. They will shade our constant submissions to the vast digital commons, intentional or consensual or mandatory, with the knowledge that every selfie or fragment of text is destined to become a piece of general-purpose training data for the attempted automation of everything. They will be used on people in extremely creative ways, with and without their consent.

Charlie Warzel goes even further, likening the potential impact of artificial intelligence to that of nuclear fission and nuclear war:

Trying to find the perfect analogy to contextualize what a true, lasting AI revolution might look like without falling victim to the most overzealous marketers or doomers is futile. In my conversations, the comparisons ranged from the agricultural revolution to the industrial revolution to the advent of the internet or social media. But one comparison never came up, and I can’t stop thinking about it: nuclear fission and the development of nuclear weapons.

As dramatic as this sounds, I don’t lie awake thinking of Skynet murdering me—I don’t even feel like I understand what advancements would need to happen with the technology for killer AGI [Artificial General Intelligence] to become a genuine concern. Nor do I think large language models are going to kill us all. The nuclear comparison isn’t about any version of the technology we have now—it is related to the bluster and hand-wringing from true believers and organizations about what technologists might be building toward. I lack the technical understanding to know what later iterations of this technology could be capable of, and I don’t wish to buy into hype or sell somebody’s lucrative, speculative vision. I am also stuck on the notion, voiced by some of these visionaries, that AI’s future development might potentially be an extinction-level threat.

ChatGPT doesn’t really resemble the Manhattan Project, obviously. But I wonder if the existential feeling that seeps into most of my AI conversations parallels the feelings inside Los Alamos in the 1940s. I’m sure there were questions then. If we don’t build it, won’t someone else? Will this make us safer? Should we take on monumental risk simply because we can? Like everything about our AI moment, what I find calming is also what I find disquieting. At least those people knew what they were building.

The point these authors are making is that, with AI, we are dealing with something which has the potential to dramatically impact (and, in some cases, up-end) our current society, in ways which might not be readily apparent at first.

Amy Castor and David Gerard, who have been busy dissecting and critiquing the ongoing three-ring circus that is blockchain, crypto, and NFTs, have turned their attention to artificial intelligence, in a two-part series (part one; part two). I strongly suggest you read both blogposts, but here’s a sample:

Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by AI winters whenever a particular tech hype fails to work out.

The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.

ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is. ChatGPT can’t think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors’ permission.

The other popular hype right now is AI art generators. Artists widely object to AI art because VC-funded companies are stealing their art and chopping it up for sale without paying the original creators. Not paying creators is the only reason the VCs are funding AI art.

Do AI art and ChatGPT output qualify as art? Can they be used for art? Sure, anything can be used for art. But that’s not a substantive question. The important questions are who’s getting paid, who’s getting ripped off, and who’s just running a grift.

OpenAI’s AI-powered text generators fueled a lot of the hype around AI — but the real-world use case for large language models is overwhelmingly to generate content for spamming. [Vox]

The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it. [MIT Technology Review; The Verge]

The use case for AI is spam books on Amazon Kindle. Most are “free” Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases. [Daily Dot]

The use case for AI is spam news sites for ad revenue. [NewsGuard]

The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices. [CBS]

The use case for AI is spam Amazon reviews and spam tweets. [Vice]

The use case for AI is spam videos that advertise malware. [DigitalTrends]

The use case for AI is spam sales sites on Etsy. [The Atlantic, archive]

The use case for AI is spam science fiction story submissions. Clarkesworld had to close submissions because of the flood of unusable generated garbage. The robot apocalypse in action. [The Register]

You can confidently expect the AI-fueled shenanigans to continue.


Riff XR: Artificial Intelligence in the Metaverse

However, there have been some rather interesting specific applications of AI to the metaverse. A brand-new social VR platform called Riff XR offers a tantalizing (if still somewhat buggy) glimpse of the AI-assisted metaverse of the future.

Among the AI-assisted features of Riff XR are NPCs (non-player characters, i.e. bots) with whom you can have surprisingly open-ended conversations, as well as a “cutting-edge Stable Diffusion-powered Generative Art System.”

Now, I have not visited Riff XR myself (yet), but a good friend of mine, metaverse videographer Carlos Austin, has, and he posted a video of his explorations on this new metaverse platform, including verbal conversations with a number of NPCs using generative AI to “listen” and “respond” to his spoken sentences.

One was a constable droid roaming the night-time central plaza in Riff XR, a scene straight out of Ready Player One; another played the role of Vincent Van Gogh in an exhibition of AI-generated artworks in a museum just off the plaza; a third was a woman, named Molly Millions, working at the back bar in a cyber-disco with pulsating music and gyrating NPCs of various kinds, with whom Carlos had a surprisingly in-depth conversation about cocktails!

Carlos demonstrated that you could even speak to these NPCs in different languages including German, Japanese, and Spanish (although let me just add that the faux Van Gogh’s German accent was absolutely atrocious!). Here’s his full video (please fast-forward through all the technical bugs and mishaps; Riff XR is still quite buggy!). Carlos’ conversation with Molly Millions is nearer the end of this video:

We can expect to see more such applications of artificial intelligence coming soon (and perhaps sooner than we might expect!) to a virtual world or social VR platform near you. And you can expect more blogposts from me on this topic in future, as the technology continues to develop and evolve over time. Stay tuned!


Many thanks to Jim Carnicelli (a.k.a Galen from Sansar), with whom I had a couple of wide-ranging online discussions via Discord on the topic of AI while I was working on this blogpost over the summer! While I did not use many of the ideas we talked about, they did give me much food for thought (and possible topics for future blog posts!). You can visit Jim’s store selling his AI-generated artwork here: Snuggle Hamster Designs.

Liked it? Then please consider supporting Ryan Schultz on Patreon! Even as little as US$1 a month unlocks exclusive patron benefits. Thank you!

Become a patron at Patreon!

Source link

Vote if You Want, but Remember 'Cypherpunks Write Code'



Source link

Uniswap launches an educational platform in conjunction with Do DAO



  • Uniswap’s launch of Uniswap University, in partnership with Do DAO, underscores its commitment to user education and engagement.
  • This initiative offers a structured learning pathway and practical experience opportunities, empowering users to navigate the intricacies of the V3 exchange.
  • With a legacy of innovation and growth, Uniswap continues to be a pivotal player in the world of decentralized exchanges.

Uniswap, the decentralized exchange (DEX) powerhouse, has rolled out an educational initiative called Uniswap University in collaboration with Do DAO, a blockchain education-focused decentralized autonomous organization.

This endeavour marks a pivotal move in enhancing user engagement and understanding of Uniswap’s V3 exchange.

The Uniswap University

Uniswap University is designed as a structured learning platform aimed at facilitating the onboarding process for users diving into the intricacies of the V3 exchange. It offers an array of resources, including courses, simulations, and quick guides, catering to individuals at various proficiency levels.

Through Uniswap University, users can access a comprehensive spectrum of knowledge, ranging from fundamental concepts like “What is a DEX?” to more advanced subjects such as “Strategy Backtesting Tools.” The inclusion of interactive simulations for activities like adding/removing liquidity and exploring advanced position management tools enables users to swiftly gain practical experience.

One notable offering within the educational repertoire is an advanced course that delves into the fundamentals of becoming a liquidity provider on the V3 exchange. This course introduces users to diverse strategies, each accompanied by its own set of advantages and drawbacks. Strategies encompass holding stablecoins, maintaining 50% of two different tokens, having 100% exposure to a single token, providing liquidity across a wide range, offering liquidity within a narrow range, and participating in volatile token pools.
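To give a sense of what a wide versus a narrow range means mechanically for a liquidity provider, the sketch below applies the standard Uniswap V3 whitepaper formulas to show how a position's token composition depends on the chosen price range. It is a back-of-the-envelope illustration, not material from Uniswap University, and the prices used are made up.

```python
from math import sqrt

def v3_amounts(liquidity: float, price: float, p_low: float, p_high: float):
    """Token0/token1 amounts held by a V3 position with the given liquidity
    and price range, at the current price (quoted as token1 per token0)."""
    sp, sa, sb = sqrt(price), sqrt(p_low), sqrt(p_high)
    if price <= p_low:     # price below the range: position is all token0
        return liquidity * (1 / sa - 1 / sb), 0.0
    if price >= p_high:    # price above the range: position is all token1
        return 0.0, liquidity * (sb - sa)
    return liquidity * (1 / sp - 1 / sb), liquidity * (sp - sa)

# Same liquidity and the same (made-up) current price of 1,600 token1 per token0:
print(v3_amounts(1_000, 1_600, 1_000, 2_600))  # wide range: a more balanced mix
print(v3_amounts(1_000, 1_600, 1_500, 1_700))  # narrow range: concentrated, shifts quickly
```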

Uniswap’s Business Source License expiration

Earlier this year, Uniswap’s Business Source License expired, granting developers the freedom to fork the Uniswap V3 protocol and establish their own DEX platforms. Notably, shortly after its May 2021 launch, Uniswap V3 outstripped Bitcoin (BTC) in terms of fee generation.

A staggering $451 million worth of coins and tokens were traded on the Ethereum (ETH) mainnet via Uniswap V3 alone. The V3 protocol boasts an impressive $3.2 billion in total value locked (TVL), comprising liquidity pools, staking mechanisms, and DeFi lending.

According to data from DeFiLlama, the cumulative revenue generated by Uniswap’s V1, V2, and V3 protocols amounts to a substantial $327 million annually. During the height of the 2021 bull market, Uniswap reached an astounding peak TVL of $10 billion.



Source link

IBM TechXchange underscores the importance of AI skilling and partner innovation



Generative AI and large language models are poised to impact how we all access and use information. But as organizations race to adopt these new technologies for business, they need a global ecosystem of partners with industry expertise to identify the right enterprise use cases for AI and the technical skills to implement the technology.

During TechXchange, IBM’s premier technical learning event in Las Vegas last week, IBM Partner Plus members including our Strategic Partners, resellers, software vendors, distributors and service partners showed up in full force, joining us on stage to share how they are embracing watsonx, our enterprise-ready AI and data platform.

Read the blog about skilled partners accelerating AI adoption

Check out how a few of our partners participated in TechXchange and what they had to say about watsonx:

Samsung

In support of Call for Code, Samsung used TechXchange to demonstrate watsonx-powered versions of the Samsung Galaxy Z Fold 5 with new apps running watsonx.ai and watsonx Assistant. They also showcased how watsonx could enhance the power of Samsung’s SDS Zero Touch Mobility solution.

“IBM’s launch of watsonx was an awakening, and it inspired us to explore the immense potential of watsonx.ai’s generative AI capabilities to deliver unprecedented innovations for our clients,” said Sean Im, CEO, Samsung SDS America.

Amazon Web Services (AWS)

AWS joined us at TechXchange, where they illustrated how our generative AI technologies can be complementary and highlighted the availability of watsonx.data on the AWS marketplace.

“Organizations are increasingly adopting data lakehouse solutions to support their growing data needs, especially as we see an industry-wide shift toward AI solutions,” said Soo Lee, Director Worldwide Strategic Alliances at AWS.

“Making watsonx.data available as a service in AWS Marketplace further supports our customers’ increasing needs around hybrid cloud – giving them greater flexibility to run their business processes wherever they are, while providing choice of a wide range of AWS services and IBM cloud native software attuned to their unique requirements.”

Krista Software

“There’s a lot of excitement in the market about the value of large language models (LLMs) and generative AI, but many teams are struggling to get started because of concerns about costs, data privacy, and other aspects of the enterprise readiness of available LLM solutions,” said Luther Birdzell, Chief AI Officer, Krista Software.

“IBM’s watsonx.ai is the first enterprise-grade LLM offering that brings trustworthy, scalable, and transparent AI that can be delivered on timelines and maintained at costs that align with industry leading return on investment (ROI).”

CrushBank

CrushBank demonstrated how they are using Watson Discovery and watsonx.ai to streamline the handling of extensive data in support centers, simplifying complex tickets with large language models.

“We are especially interested in leveraging the summarization features in watsonx.ai to greatly improve service delivery,” said David Tan, CTO, CrushBank Technology, Inc.

“Providing IT agents with a comprehensive and accurate information source quickly and reliably eliminates the need to go through multiple documents manually. Integrating watsonx.ai capabilities with our data information sources, we can correlate and summarize information from various documents, delivering concise natural language answers.”

Arrow Electronics

Arrow Electronics was also onstage leading sessions. “Providing innovative solutions that bolster security and resilience, Arrow is committed to empowering organizations to safeguard their digital assets so they can thrive in an increasingly complex cyber landscape,” said Matthew Brennan, Vice President, Supplier Alliances, Arrow Electronics. “IBM watsonx has garnered industry recognition, reinforcing Arrow’s enthusiasm for generative AI and offering cutting-edge innovations for their customers. By harnessing the power of watsonx.ai, Arrow can help customers transform their core business operations with intelligence, helping ensure accuracy, scalability, and adaptability.”

Supporting multiple industries and an ever-growing number of use cases to solve real business problems, IBM is expanding our ecosystem to help clients infuse their core business operations with intelligence. Through the help of our partners, IBM’s generative AI platform is making it easier for organizations to design and deploy AI that is more accurate, scalable and adaptable than ever before.

Read what other clients and partners have to say about watsonx

Source link

William Mapan explains generative art using a crayon and dice – Cointelegraph Magazine



Generative artist William Mapan’s latest collection, “Distance,” sold out in less than 24 hours despite launching in the middle of a very weak NFT market.

From his early long-form generative series “Dragons” on the Tezos blockchain to the highly sought-after “Anticyclone” ArtBlocks collection that currently commands a 5 ETH floor, Mapan has a unique way of capturing the hearts and minds of collectors.

But many people in the public still don’t understand what generative art even is. Mapan has a unique way of explaining the often misunderstood genre by boiling it down to a piece of paper, a crayon and a die.

“It can be really hard to explain but usually the way I explain is to put away the code, put away the blockchain, put away everything. Just take a piece of paper, a crayon and dice. Imagine drawing two by two boxes on that paper, so four boxes total. You then throw the dice — if the roll shows up as a three or below, you draw a square; if the dice shows four or above, you draw a circle into one of the boxes.

“You just made an algorithm; you just made a set of rules and introduced some randomness in there. That’s basically what generative art is, you build a set of rules, an algorithm and then introduce randomness. Then you try to control that part of the space.

Strands of Solitude #010 by William Mapan (OpenSea)

“With the grid of two by two, the parameter of space is very reduced, but as soon as you expand to different parameters, you can get many different outputs. Imagine a 10 by 10 box and imagine you have multiple shapes like a circle, triangle, square, star or whatever. You just write down your rules and just follow them, and that’s it.”
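Mapan's crayon-and-dice explanation translates almost line for line into code. Here is a minimal Python sketch of that toy algorithm, using the grid size and shapes from his example; swap in a bigger grid, more shapes or other sources of randomness and you have the skeleton of a long-form generative system.

```python
import random

GRID = 2                                    # the two-by-two grid from the example
SHAPES = {"low": "square", "high": "circle"}

def draw_cell() -> str:
    roll = random.randint(1, 6)             # throw the die
    return SHAPES["low"] if roll <= 3 else SHAPES["high"]  # three or below: square

picture = [[draw_cell() for _ in range(GRID)] for _ in range(GRID)]
for row in picture:
    print(" | ".join(row))                  # a different "drawing" every run
```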

Fine line technique

Mapan’s work straddles the line between appearing as if it’s physically or digitally made, a technique other artists such as Tyler Hobbs and Emily Xie have a reputation for.

“I like to activate senses, feelings and memories. My hope is that when you see my work, it sparks curiosity. You might think my art reminds you of something in one way, but in another way, you’re thinking there are so many shapes that it’s impossible that someone made it by hand,” says Mapan.



“I hope that it connects with people in their memories, especially like the last series that I released last week, “Distance.” I want people to see themselves traveling, and they remember, ‘Oh, I was on this plane when I saw this kind of landscape down there.’ I like to trigger emotions and curiosity.”

Distance #22 by William Mapan (OpenSea)

Based in France, Mapan credits Matt Deslauriers, the artist behind Meridians and Subscapes, as his introduction to art on the blockchain. Mapan’s first NFT was minted on 4 March 2021 on Tezos, where he put a lot of his early digital work before launching Anticyclone via ArtBlocks on Ethereum on 23 April 2022.

“Matt helped me navigate early on. He kindly explained it all to me, and it started to make sense over time. I started in the Tezos ecosystem, which was a very community art-driven vibe,” Mapan says.

“It intrigued me that you could put an algorithm on the blockchain, and when people mint it, they buy an iteration that triggers your algorithm on demand. It was a new way to think about your work. Basically, the collector is a triggering point.”


Rapid-fire Q&A

Are there any up-and-coming artists who you think people should be paying attention to?

Anna Lucia: “I definitely love her work. She’s very talented, and I can’t wait to see her progress. You need to look her up.”

What are the influences on your art career to date?

“Abstract expressionism movement and people pushing boundaries in modern-day art.”

Who is a notable collector of yours that makes you smile knowing they own one of your pieces?

AC the collector: “He is one of the most engaging ones. He comes to exhibitions and talks to me. He always tries to reach out to me and to understand the practice behind the work. AC is definitely a great collector.”

What’s your favorite NFT in your wallet that’s not your own NFT?

“Horizon(te)s #5,” a collaboration by Iskra Velitchkova and Zach Lieberman.

“I don’t know why I love this, but I just do. It’s perfect because I love Iskra’s work and I love Zach’s work. It’s the perfect combination. I love the light and abstract shapes, it’s just amazing work.”

Who do you listen to when creating art? 

“Kendrick Lamar and Sofiane Pamart. I really like classical music, especially when I try to be in the flow state. When I need to crush stuff, it’s hip hop.

“Performers are in another light. They need to go up in front of the public. They have to be fragile and sensible, yet you have to let your shell down. I find that very inspiring.

“I try to be more like that. To let my emotions out. Prior, I was basically shutting them down because I wasn’t creating art full-time. Now that art is my job, I want to explore expressing myself more. Performers are very inspiring in that regard.”

“Untitled” by William Mapan (objkt.com)

What’s hot in NFT art markets

Mapan’s aforementioned “Distance,” a collaboration with Cactoid Labs and LACMA, sold out its 250-piece collection at a 2 ETH mint price per piece. The collection has done close to 185 ETH in secondary sales volume since its 13 September mint.


Cool Cats headed to Macy’s Thanksgiving Day Parade

Nothing says mainstream more than the iconic Macy’s Thanksgiving Day Parade in New York City, and Cool Cats is set to become the first NFT collection to be featured.

In its 97th annual edition, the parade ran a contest that featured numerous NFT collections, including SupDucks, Boss Beauties and VeeFriends. Cool Cats eventually won out, which means a massive Blue Cat balloon will grace the skies of Manhattan on 23 November.

The lead artist and founder of Cool Cats, Clon, couldn’t be more excited for his beloved project.

“This is a big moment for me as an artist and as the founder of Cool Cats. Personally, the Macy’s Thanksgiving Day Parade has always been an important event in my family and it holds a lot of memories. Being able to showcase my artwork alongside some of the world’s most recognizable characters is a dream come true,” says Clon.

Nouns DAO fork finalizes

After a bumpy ride over the past few weeks, the Nouns DAO fork has been finalized, with 472 out of 844 Nouns NFT holders in total opting into the fork that was approved in proposal 356.

The Nouns holders that opted into the fork will be able to claim back approximately 35 ETH per Noun, while holders that voted against proposal 356 will carry on under the DAO’s original structure, in which one Noun is auctioned each day and the proceeds fund the Nouns treasury.
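That roughly 35 ETH figure is consistent with a pro-rata split of the DAO’s treasury across the total Noun supply, with each escrowed Noun claiming its share into the fork’s treasury. Below is a back-of-the-envelope sketch, assuming the 472 and 844 figures refer to Noun counts and using an illustrative (not actual) treasury balance:

```typescript
// Back-of-the-envelope: pro-rata treasury claim per forked Noun.
// The treasury balance below is illustrative only; the real figure lives on-chain.
const treasuryEth = 29_500;   // hypothetical main-treasury balance in ETH
const totalNouns = 844;       // total Nouns at fork time (figure from the article)
const forkedNouns = 472;      // Nouns escrowed into the fork (figure from the article)

const claimPerNoun = treasuryEth / totalNouns;       // ≈ 35 ETH per Noun
const forkTreasuryEth = claimPerNoun * forkedNouns;  // ETH that moves to the fork's treasury

console.log(`~${claimPerNoun.toFixed(1)} ETH per Noun, ~${forkTreasuryEth.toFixed(0)} ETH to the fork`);
```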

Greg Oakford

Greg Oakford is the co-founder of NFT Fest Australia. A former marketing and communications specialist in the sports world, Greg now focuses his time on running events, creating content and consulting in web3. He is an avid NFT collector and hosts a weekly podcast covering all things NFTs.



Source link

Bitcoin’s 87% Drop in 2021 Was Caused by Sam Bankman-Fried's Alameda, Ex-Employee Claims


An ex-Alameda employee claims a trader at the firm entered a misplaced decimal, which led to bitcoin’s 87% drop on Binance.US in 2021.

Source link

Bit Trade, the Kraken subsidiary operating in Australia, sued by ASIC



  • Bit Trade allegedly neglected to establish a target market determination before offering its margin trading product to Australian clients.
  • Bit Trade first received notification of its non-compliance with these obligations in June 2022.
  • Bit Trade’s margin trading product has been used by at least 1160 Australian customers.

The Australian Securities and Investments Commission (ASIC) has initiated legal proceedings against Bit Trade, the operator of the Kraken cryptocurrency exchange in Australia. The action stems from Bit Trade’s failure to adhere to design and distribution requirements for one of its trading products.

According to ASIC’s statement released on September 21, the Australian regulator alleges that Bit Trade neglected to establish a target market determination before offering its margin trading product to Australian clients. These design and distribution obligations are a legal mandate for financial product providers operating in Australia: they require financial products to be designed to meet the needs of a predetermined target market and to be distributed through appropriately targeted strategies.

Allegations leveled against Bit Trade

ASIC contends that since the design and distribution obligations took effect in October 2021, Bit Trade’s margin trading product has been used by at least 1,160 Australian customers, who have collectively lost $8.35 million (12.95 million Australian dollars).

According to ASIC, Bit Trade received notification of its non-compliance with these obligations in June 2022 but continued offering the product without fulfilling the necessary determinations.

Bit Trade’s margin trading product functions as a “margin extension” service, granting customers credit extensions of up to five times the value of the assets they employ as collateral. The regulatory authority asserts that this product effectively qualifies as a “credit facility,” offering customers credit for trading certain cryptocurrencies on the Kraken exchange.
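As a rough illustration of the leverage mechanics described above (all figures here are hypothetical and not drawn from the ASIC filing): a 5x margin extension means the credit extended can reach five times the posted collateral, so a 20% adverse move on a fully drawn position is enough to wipe out that collateral.

```typescript
// Hypothetical illustration of a 5x "margin extension" (figures are not from the ASIC filing).
const collateralAud = 10_000;                           // value of the customer's collateral
const maxLeverage = 5;                                  // credit of up to five times the collateral
const extendedCreditAud = collateralAud * maxLeverage;  // 50,000 AUD of extended credit

// On a position the size of the extended credit, a 20% adverse move
// produces a loss equal to the entire collateral.
const adverseMove = 0.20;
const lossAud = extendedCreditAud * adverseMove;        // 10,000 AUD

console.log({ extendedCreditAud, lossAud, collateralWipedOut: lossAud >= collateralAud });
```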

ASIC’s deputy chair, Sarah Court, emphasized that these proceedings should serve as a stark reminder to the crypto industry that financial products will continue to undergo scrutiny by regulators to ensure compliance with consumer protection laws in Australia. She stated, “ASIC’s actions underscore the importance of complying with design and distribution obligations to ensure that financial products are distributed to consumers in a responsible manner.”



Source link

“Teams will get smarter and faster”: A conversation with Eli Manning



For the last three years, IBM has worked with two-time champion Eli Manning to help spread the word about our partnership with ESPN. The nature of that partnership is pretty technical, involving powerful AI models—built with watsonx—that analyze massive data sets to generate insights that help ESPN Fantasy Football team owners manage their teams. Eli has not only helped us promote awareness of these insights, but also to unpack the technology behind them, making it understandable and accessible to millions.

We’ve done this by producing videos and a variety of other social content that combine celebrity, humor, and technology. As you might imagine, we’ve gotten to know each other in the process and had some fun along the way. Which is why I get a lot of questions from friends and colleagues about Eli. What’s he like? (super down to Earth) Is he funny in real life? (sneaky smart, very funny) Does he actually play fantasy football? (yup) And, of course, how much does he really know about technology, IBM, and watsonx? Rather than trying to answer this last question on Eli’s behalf, I thought it would be better to go straight to the source.

Noah Syken: What would you say is your level of sophistication when it comes to technology?

Eli Manning: I would give myself a solid 8 out of 10. Data has always been a big part of the game of football. And that’s especially true for quarterbacks. When I was playing, I sought out statistical data. Anything I could get my hands on. Anything to better understand an opponent. Anything to give our team an edge. What’s changed is the way we collect that data, the way we analyze it and the way we put it to use. This might surprise people to hear, but professional football teams are very technologically advanced. Behind the scenes, these teams are investing heavily in technology. And a lot of that technology is designed to turn data into insight for coaches and players.

NS: What do you know about AI? Did you use it when you were a quarterback?

EM: Not really. There were some guys in the organization who were playing around with it when I retired. But it was early days. I didn’t really learn about AI until I started working with IBM. I spent a whole day at IBM Research last year, learning all about hybrid cloud, cybersecurity, AI and quantum computing from Dr. [Talia] Gershon. I only understood a fraction of what she was talking about, but that’s still probably more than most people know.

NS: From what you know about it, how do you think AI will change the game of football?

EM: Teams will get smarter and faster. When I was playing, we spent a ton of time in the film room, studying “tape.” Play. Rewind. Play. Rewind. Studying opponents. Seeing the way plays develop. And that’s always going to be a part of the game. But with AI, video is data. An entire game can be broken down and analyzed instantly. Even in real time. AI can read through injury reports. It can distill expert opinions from millions of articles. When you put those AI-powered insights together with the expert analysis from coaches and players, that’s pretty powerful stuff.

NS: It is powerful. And we’re starting to bring video analytics into our work in the world of tennis. So lots of opportunity there. What would you say you’ve learned from playing ESPN Fantasy Football with the AI-generated insights from IBM?

EM: Well, I learned that I’m a lot better at playing actual football than fantasy football. That was a hard lesson. I mean, I like to think I’m a pretty good judge of talent… but there are a ton of decisions you have to make in fantasy football every week. So I’ve come to rely a lot on IBM’s AI-generated insights in the app. I’ve seen what can happen when you put a ton of quality data together with a powerful AI. And it’s amazing. Finding the perfect player for your team from the hundreds of guys available on the waiver wire. Or predicting whether a player is going to over-achieve or under-achieve on any given Sunday. It’s not hard to imagine how that’s going to change the way real football teams are managed or coached. It’s even easier to imagine how it’s going to change the way we live and work. So, yeah, I learned a lot from hanging out with IBMers. Do you think they learned anything from me?

NS: Sure they did. Talia learned how to throw a tight spiral.

EM: Yeah. I’m sure that’ll come in handy in the lab.  

Learn how IBM helps ESPN deliver AI-generated insights in fantasy football

Source link

Robert Kiyosaki says Bitcoin is ‘bargain today … but not tomorrow’



  • World-renowned author Robert Kiyosaki shares his view on Bitcoin.
  • Pseudonymous crypto analyst sees BTC at $100,000 in 2025.
  • Bitcoin slid back under $27,000 after the Fed’s rate decision today.

Bitcoin, the world’s largest cryptocurrency, is a “bargain” today, says Robert Kiyosaki, the renowned author of Rich Dad Poor Dad.

Robert Kiyosaki shares his view on Bitcoin

The so-called digital gold has not been a lucrative investment over the past two months.

But the founder of Rich Global LLC recommends seeing the recent weakness as an opportunity to build a position in the “future” at a discount. His recent tweet reads:

Gold, silver, bitcoin are bargains today … but not tomorrow. America is broke. Buy GSBC today before stocks, bond, real estate.

Robert Kiyosaki even called fiat currencies “fake money” this past weekend. Also on Wednesday, Santiment, a crypto analytics platform, reported an increase in Bitcoin’s on-chain activity to levels not seen since April.

What could help propel Bitcoin price?

Note that there are a number of tailwinds that could catalyse the price of Bitcoin in the near to medium term, the pending approval of an exchange-traded fund being one example.

And then, of course, there’s the halving event scheduled for April or May of 2024. In fact, PlanB, a pseudonymous crypto analyst who also goes by 100trillionUSD, reiterated today that BTC could hit $100,000 in about two years.

He even left open the prospect of it eventually being worth $1.0 million, much like Cathie Wood has forecast multiple times this year.

Bitcoin is currently trading at $26,900 versus its year-to-date high of $31,500 in mid-July.



Source link