Live Stream Replay | HyperAI Hosts "2023 Meet TVM": Shanghai Jiao Tong University, Tencent, MachineTime, and Suiyuan Technology Gather in Shenzhen

Overview: On September 16th, the 2023 Meet TVM · Shenzhen event, hosted by the MLC.AI community and HyperAI and co-organized by OpenBayes and Tencent AI Lab, was successfully held at Tencent Building. Five senior speakers from Shanghai Jiao Tong University, Tencent, MachineTime, and Suiyuan Technology shared best practices on TVM & MLIR, drawing on their own business applications. HyperAI also livestreamed the entire event on Bilibili; the replays are below.
Live broadcast time: September 16, 2023
Keywords: TVM, technical salon, online live stream
On September 16, the 2023 Meet TVM · Shenzhen event, hosted by the MLC.AI community and HyperAI and co-organized by OpenBayes and Tencent AI Lab, was officially held. Despite a week of heavy rain, the enthusiasm of community members remained undiminished. Over a hundred participants from universities, major companies, chip manufacturers, and research institutes traveled from all over to join this offline gathering for AI compilers. Meanwhile, many who were unable to attend in person actively joined the technical salon via the online livestream on HyperAI's Bilibili channel.

For this event, we invited five senior speakers from Shanghai Jiao Tong University, Tencent, MachineTime, and Suiyuan Technology to share best practices on TVM & MLIR based on their own business applications.
Event Review
The following is a brief introduction to each talk, along with its video replay.
Follow the WeChat official account "HyperAI" and reply with the keyword "TVM Shenzhen" to get the speakers' complete PPTs.

Topic: TVM-based CPU-side dynamic shape optimization
Contents: Traditional deep learning compilers (including TVM) lack support for dynamic shapes, making them inadequate for language models (dynamic sequence length) and detection models (dynamic width/height). To address this, we designed and implemented a CPU-side dynamic shape operator optimization scheme based on TVM, which outperforms existing static shape schemes and requires almost no search time. A minimal illustrative sketch follows the replay link below.
Live replay: bilibili.com/video/BV18u4y1z7NM/?spm_id_from=333.1387.collection.video_card.click
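The speaker's slides and code are not reproduced in this recap. As a rough illustration of the underlying idea only, the sketch below uses TVM's tensor expression (te) API with a symbolic dimension so that a single compiled CPU kernel can serve inputs of varying sequence length. The operator, shapes, target, and TVM version (one that still ships the classic te schedule API) are illustrative assumptions, not the speaker's implementation.

```python
# Illustrative sketch only (not the speaker's code): a CPU kernel compiled once
# with a symbolic first dimension, then run at two different sequence lengths.
import numpy as np
import tvm
from tvm import te

n = te.var("n")                                    # symbolic (dynamic) sequence length
A = te.placeholder((n, 768), name="A", dtype="float32")
B = te.compute((n, 768), lambda i, j: A[i, j] * 2.0, name="B")

s = te.create_schedule(B.op)                       # classic te schedule API
func = tvm.build(s, [A, B], target="llvm")         # one binary serves any n

for seq_len in (32, 128):                          # no recompilation between runs
    a = tvm.nd.array(np.random.rand(seq_len, 768).astype("float32"))
    b = tvm.nd.array(np.zeros((seq_len, 768), dtype="float32"))
    func(a, b)
    np.testing.assert_allclose(b.numpy(), a.numpy() * 2.0, rtol=1e-5)
```

The talk itself is about how to schedule such dynamic-shape operators well on CPU without the per-shape search that static schemes rely on; the sketch only shows that a compiled artifact need not be tied to a single input shape.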

Topic: Automatically Design an AI Processor: The Compiler Is Dominant
Contents: With the development and popularization of AIGC, represented by large language models, the demand for computing power has grown exponentially, and the design of AI processor chips and the corresponding programming have become more complicated. To make both simpler and more efficient, automated co-design of the compiler and the computing architecture is a potential solution.
Live replay: bilibili.com/video/BV1hj411k7v4/?spm_id_from=333.1387.collection.video_card.click

Topic: MLIR and Its Practice in AI Graph Compilation
Contents: With the rapid development of AI chips and AI frameworks, AI compilers such as XLA and TVM have emerged. MLIR, as a general-purpose and reusable compiler framework, is widely used in AI compilation systems because it helps hardware manufacturers quickly build domain-specific AI compilers.
This talk introduces the fundamentals of MLIR, its Codegen process, and the practical steps for building an AI compiler on top of it, and also discusses how MLIR approaches the key problems faced by AI compilers.
Live replay: https://www.bilibili.com/video/BV1wj411C7kJ/?spm_id_from=333.1387.collection.video_card.click

Topic: Design and Implementation of an AI Compiler Based on MLIR
Contents: The AI and machine learning field has many different software frameworks (such as TensorFlow and PyTorch), and hardware devices are becoming increasingly diverse (CPU, GPU, TPU, etc.). As the bridge connecting the two, AI compilers face many challenges.
As a compiler infrastructure, MLIR provides a series of reusable and easily extensible basic components for building domain-specific compilers. Tencent has built an end-to-end AI compiler based on MLIR that provides compilation optimization for users' AI models, simplifying model deployment across a variety of AI chips while achieving maximum performance.
Live replay: bilibili.com/video/BV1vk4y1F7Ku/?spm_id_from=333.1387.collection.video_card.click

Topic: Opportunities and Challenges for Machine Learning Systems in the Era of Large Models
Contents: Generative artificial intelligence and large language models (LLMs) have made significant progress, demonstrating remarkable capabilities and the potential to fundamentally transform many fields. This presents both new opportunities and challenges for machine learning systems: on one hand, the enormous computational demands increase the need for system optimization; on the other, the reliance on a single model architecture and high-performance hardware is causing the previously open machine learning ecosystem to begin to converge.
Live replay: bilibili.com/video/BV1A34y1N76w/?spm_id_from=333.1387.collection.video_card.click
2023 Meet TVM · Year-End Gathering
From Q1 to Q3 this year, we successfully hosted three offline meetups, bringing together people interested in AI compilers in different cities to learn and exchange ideas.
With Q4 just around the corner, we will be hosting the 2023 Meet TVM Year-End Gathering to bring this year's series of events to a successful close. We sincerely invite businesses and community partners to participate and co-create in various ways, whether by recommending speakers or sponsoring venues and refreshments.
Let's work together to build the most active AI compiler community in China! Finally, here's a group photo from the event ❤️

Get the PPT: Follow the WeChat official account "HyperAI" and reply with the keyword "TVM Shenzhen" to get the speakers' complete PPTs.
Organizers and partners

The MLC.AI community, the organizer of this event, was established in June 2022. Led by Chen Tianqi, the primary inventor of Apache TVM and a well-known young scholar in machine learning, the team launched the MLC online course, which systematically introduces the key elements and core concepts of machine learning compilation.
In November 2022, thanks to the collaborative efforts of MLC.AI community volunteers, the first complete Chinese documentation for TVM was launched and is hosted on the HyperAI website. It gives domestic developers interested in machine learning compilation the infrastructure, namely documentation, they need to access and learn this new technology.
In the fourth quarter of 2023, the "2023 Meet TVM" series of events will be held in Hangzhou, and enterprises and community partners are welcome to join in co-creating the event.
MLC Online Courses:https://mlc.ai/
TVM Chinese Documentation:https://tvm.hyper.ai/

HyperAI is China's leading artificial intelligence and high-performance computing community, committed to providing high-quality public resources in the field of data science to domestic developers. So far, it has provided domestic download nodes for more than 1,200 public datasets, supports queries for more than 300 terms related to artificial intelligence and high-performance computing, hosts the complete TVM Chinese documentation, and will soon launch multiple basic and popular tutorials.
Visit the official website:https://hyper.ai/

OpenBayes Bayesian Computing is a leading high-performance computing service provider in China. By grafting classic software ecosystems and machine learning models onto new-generation heterogeneous chips, it provides industrial enterprises and university research with faster, easier-to-use data science computing products. Its products have been adopted in dozens of large-scale industrial scenarios and by leading scientific research institutes.
Visit the official website:https://openbayes.com/

Tencent AI Lab is Tencent's enterprise-level AI laboratory. Founded in Shenzhen in April 2016, it currently has more than 100 top research scientists and more than 300 application engineers. Leveraging Tencent's long-term accumulation of rich application scenarios, big data, computing power, and top talent, AI Lab is forward-looking and open to collaboration, committed to continuously improving AI's cognition, decision-making, and creativity, and moving toward the vision of "Make AI Everywhere".
Tencent AI Lab emphasizes both research and application. Its basic research focuses on four major directions: machine learning, computer vision, speech technology, and natural language processing. Its technology applications focus on four major areas: games, digital humans, content, and social interaction, while also exploring the research and application of AI in industry, agriculture, healthcare, medicine, life sciences, and other fields.





