AI-Stack
Machine Learning Cloud Platform


GET AI-STACK FOR GPU MANAGEMENT

Accelerate AI Adoption Today!

AI-Stack transforms a single GPU server or a server cluster into a controllable, manageable, shareable, and horizontally scalable machine learning/deep learning resource pool, bringing flexibility, efficient collaboration, and more cost-effective operations to GPU computing resources.

By connecting people with the resources, processes, and tools needed for AI, we help companies adopt AI faster, more easily, and more efficiently.

Project & Departmental Resource Management Workflow


AI Professionals’ Working Desktop, Image Templates & Job/Assignment Workflows


Intuitive, easy-to-use machine learning user interface, similar to AWS

  • Simple and clear dashboard

  • Machine learning service (MLS) function with graphical user interface

Simplified provisioning of ML computing environments and frameworks via a template menu

  • Automatically creates containers and dispatches GPU resources

  • Select an AI framework and development tools (e.g. Jupyter) from preset lists
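The template-menu idea above can be sketched in a few lines of Python. The menu entries, image tags, and the composed `docker run` command below are illustrative assumptions, not AI-Stack's actual API:

```python
# Illustrative sketch of template-menu provisioning (hypothetical names,
# not AI-Stack's real interface): pick a framework and tools from a menu,
# get back the container command that would dispatch GPU resources.

# A small "template menu": framework name -> container image (assumed tags).
TEMPLATE_MENU = {
    "pytorch": "pytorch/pytorch:latest",
    "tensorflow": "tensorflow/tensorflow:latest-gpu",
}

def build_launch_command(framework, gpus=1, tools=("jupyter",)):
    """Compose a `docker run` command for the chosen template."""
    if framework not in TEMPLATE_MENU:
        raise ValueError(f"unknown framework: {framework}")
    cmd = ["docker", "run", "-d", f"--gpus={gpus}"]
    if "jupyter" in tools:
        cmd += ["-p", "8888:8888"]  # expose Jupyter's default port
    cmd.append(TEMPLATE_MENU[framework])
    return " ".join(cmd)

print(build_launch_command("pytorch", gpus=2))
# e.g. docker run -d --gpus=2 -p 8888:8888 pytorch/pytorch:latest
```

In the real platform this step is automated behind the graphical menu; the sketch only shows the shape of the mapping from a template selection to a GPU-enabled container launch.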


Integrated access to most-used development tools for AI/ML researchers

  • Assign one or more physical GPUs via pass-through per user/container to guarantee computing performance

  • Web-based portal access, ready to log in to the server environment for development and training tasks

  • Automatic login to commonly used community development software (e.g. Jupyter Notebook)
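Per-user GPU pass-through like the above is commonly implemented by pinning physical GPU indices to each container. The allocator below is a hypothetical sketch (AI-Stack's internals are not public); `NVIDIA_VISIBLE_DEVICES` is the standard NVIDIA Container Toolkit variable for exposing specific physical GPUs to a container:

```python
# Illustrative GPU pass-through allocator (hypothetical model, not
# AI-Stack's real mechanism). Each user's container gets exclusive
# physical GPU indices via NVIDIA_VISIBLE_DEVICES.

class GpuPool:
    def __init__(self, total_gpus):
        self.free = list(range(total_gpus))   # physical GPU indices
        self.assigned = {}                    # user -> [indices]

    def allocate(self, user, count):
        """Reserve `count` physical GPUs for one user's container."""
        if count > len(self.free):
            raise RuntimeError("not enough free GPUs")
        gpus = [self.free.pop(0) for _ in range(count)]
        self.assigned[user] = gpus
        # The environment a launcher would pass into the container:
        return {"NVIDIA_VISIBLE_DEVICES": ",".join(map(str, gpus))}

    def release(self, user):
        self.free.extend(self.assigned.pop(user))

pool = GpuPool(total_gpus=4)
print(pool.allocate("alice", 2))  # {'NVIDIA_VISIBLE_DEVICES': '0,1'}
```

Because each container sees only its assigned devices, one user's workload cannot contend for another user's GPUs, which is what guarantees computing performance.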

AI-Stack Satisfies IT Managers and ML Practitioners Alike

EASE OF USE

Set up an individual or team ML environment in moments, in a few easy steps

RESOURCE MONITORING

Flexible resource sharing, individual/team limit, job scheduling

AUTONOMY

Tenant Management, SSH Key or password login

ALIGNMENT

Account/storage integration, batch and application workflows

Two Different AI-Stack Solution Packages to Meet Different Needs

AI-Stack lite, with a single GPU server, is easy to use and improves work efficiency
  • Visual, controllable management of system and user resources

  • Multi-user support on a single machine through resource sharing (each user's computing is isolated from the local disk environment)

  • Automatic provisioning and installation from a container-template and resource-specification menu to improve work efficiency

AI-Stack express sets up collaborative pod management across multiple GPU servers
  • Integrates with user identity authentication systems and can mount existing storage resources

  • More secure resource sharing (supports container IP whitelisting and storage isolation)

  • Automation, batch, and scheduling capabilities for more efficient collaboration between individuals and teams
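A minimal sketch of the batch-scheduling idea, assuming a simple first-in-first-out policy over a shared GPU pool (AI-Stack's actual scheduler policies are not public):

```python
# Minimal FIFO batch-scheduling sketch (illustrative only). Jobs wait in
# a queue and are started in order whenever enough GPUs are free; jobs
# that don't fit yet stay queued for the next pass.

from collections import deque

def schedule(jobs, total_gpus):
    """jobs: list of (name, gpus_needed) tuples.
    Returns (started, waiting) for one scheduling pass."""
    queue = deque(jobs)
    free = total_gpus
    started, waiting = [], []
    while queue:
        name, need = queue.popleft()
        if need <= free:
            free -= need
            started.append(name)
        else:
            waiting.append((name, need))
    return started, waiting

started, waiting = schedule(
    [("train-a", 2), ("train-b", 4), ("eval-c", 1)], total_gpus=4)
print(started)  # ['train-a', 'eval-c'] -- train-b (4 GPUs) waits its turn
```

Queuing jobs this way keeps GPUs busy around the clock instead of idle between interactive sessions, which is where the collaboration and utilization gains come from.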

AI-Stack Satisfies the Different Needs of Multiple User Groups

For teams that manage GPU servers as machine learning computing resources, we provide:
  • Effective GPU resource pool management capabilities (hierarchical resource permission management, resource monitoring, management reports, etc.)

  • Safe and controllable GPU resource sharing environment with improved resource utilization (interactive mode and batch job scheduling mode)

  • A single platform for efficient team collaboration that improves the team's productivity (batch job environment creation, distribution, image templates)
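The hierarchical resource permission management mentioned above can be pictured as nested quotas: a request must fit within its project's limit and within every parent level's limit. The model below is a hypothetical sketch, not AI-Stack's real data structures:

```python
# Illustrative hierarchical GPU-quota check (hypothetical model). A GPU
# request is allowed only if it fits at the project level AND at every
# parent level (e.g. the department) above it.

class QuotaNode:
    def __init__(self, name, limit, parent=None):
        self.name, self.limit, self.used, self.parent = name, limit, 0, parent

    def can_allocate(self, gpus):
        node = self
        while node:                       # walk up the hierarchy
            if node.used + gpus > node.limit:
                return False
            node = node.parent
        return True

    def allocate(self, gpus):
        if not self.can_allocate(gpus):
            raise RuntimeError(f"quota exceeded at or above {self.name!r}")
        node = self
        while node:                       # charge usage at every level
            node.used += gpus
            node = node.parent

dept = QuotaNode("ai-dept", limit=8)
proj = QuotaNode("vision-team", limit=6, parent=dept)
proj.allocate(4)
print(proj.can_allocate(4))  # False: would exceed the 6-GPU project limit
```

Charging usage at every level is what lets a manager cap a whole department while still setting tighter limits per project or per user below it.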

DECISION MAKERS AND MANAGERS

For data scientists / engineers:

A working environment that facilitates both independent and collaborative development and research, dramatically reducing environment-preparation time, which originally consumed up to 35% of the workload, so professionals can spend more time on model development, testing, training, and optimization to improve personal and team output.

MACHINE LEARNING PROFESSIONALS

For professors and teaching staff:
  • Efficient teaching environment with batch creation, distribution and recycling of GPU container resources, saving time in preparing and switching classes.

  • Multiple sets of AI Frameworks provided in the platform can be reused for teaching and student exercises, eliminating the need to repeatedly install various AI Frameworks.

PROFESSORS AND TEACHING STAFF

For students and researchers:

Convenient system and automated environment-deployment tools reduce the time spent repeatedly preparing the ML stack, which originally consumed up to 35% of the workload, leaving more time for learning, developing, testing, training, and optimizing models to improve research results and output.

RESEARCHERS AND STUDENTS