Our goal: "Deep learning solutions"
Machine learning, and deep learning in particular, is one of the key technologies driving the current third wave of AI. An essential part of applying AI and machine learning is the preparation and maintenance of data, known as training data (or labeled data).
A data warehouse (DWH) has traditionally been used to retain and utilize data. In this approach, the goals of data utilization are defined in advance as business outputs (statistics, graphs, reports, etc.), and data is stored in the format and with the fields derived by working backwards from those required outputs.
However, this approach stores data in a form optimized for a specific extraction pattern and a specific analysis algorithm; to use the data in a different way, it must be reorganized and stored again. When introducing AI and deep learning, therefore, it is preferable not to store data in a form tied to particular analysis algorithms or statistical tools, but rather in a "data lake": the data is kept in the same form in which it was generated and can be retrieved as needed, even as the algorithms and tools, including AI, change.
Safe, easy, and fast deployment, from data accumulation to the generation of learning results
PowerAI, a platform optimized for deep learning
Proven deep learning frameworks in one package
IBM PowerAI provides well-tested binary software and libraries for deep learning in a single package, making deployment easy and helping you build a deep learning development environment quickly. PowerAI supports major frameworks such as Caffe, TensorFlow, and Chainer. The minimum hardware prerequisite for PowerAI is an IBM Power Systems AC922 (codenamed "Newell") equipped with NVIDIA GPUs.
All the major frameworks can be installed by running the following command (individual frameworks can also be installed separately):
$ sudo apt-get install power-mldl
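To install a single framework rather than the full bundle, the PowerAI repository can be queried for the individual packages it provides. A minimal sketch, assuming the PowerAI apt repository is already configured; the search term and the "tensorflow" package name are assumptions to be verified against the actual repository listing:

```shell
# List deep learning packages available from the configured repositories
# (search term "mldl" is an assumption; adjust as needed)
apt-cache search mldl

# Install one framework instead of the full power-mldl bundle
# (package name is an assumption; verify with the search above)
sudo apt-get install tensorflow
```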
GPU-equipped servers for high-speed computing
- Unprecedented application performance: POWER9 with NVLink 2.0 delivers CPU-GPU bandwidth roughly 5.6 times that of x86-based servers
- A Power machine with the high-end GPUs required for deep learning
- The first system to feature NVLink 2.0 technology, with a clear bandwidth advantage
- A versatile 2U Linux-only machine with two POWER9 CPUs and up to four of the latest NVLink-attached GPUs (Tesla V100 "Volta")
- CPU-GPU NVLink 2.0: NVLink between CPU and GPU is available only on Power systems (not offered on x86 servers)
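As a quick way to see the CPU-GPU NVLink connectivity described above, the interconnect topology can be inspected with NVIDIA's standard `nvidia-smi` utility on the target machine; the exact matrix shown depends on the system configuration:

```shell
# Print the interconnect topology matrix (run on the AC922 itself).
# Entries labelled "NV<n>" indicate NVLink connections; on this system
# they appear between CPUs and GPUs as well as GPU-to-GPU, whereas an
# x86 server shows only PCIe ("PIX"/"PHB") paths between CPU and GPU.
nvidia-smi topo -m
```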
- Product Specifications
- Model Number: 8335-GTG
- Chassis: 2U rack-mount
- CPU: POWER9 32-core or 40-core (2 sockets in total)
- Memory: 256 GB, 512 GB, or 1024 GB (DDR4-2666, 16 slots)
- Memory bandwidth: 340 GB/s (total system, 170 GB/s x 2 sockets)
- x16 Gen4 Low Profile: 2 slots (CAPI-enabled)
- x8 Gen4 Low Profile: 1 slot (CAPI-enabled)
- x4 Gen4 Low Profile: 1 slot
- SFF (2.5"): two SATA bays (HDD up to 4 TB or SSD up to 7.68 TB)
- GPU: two or four NVIDIA Tesla V100 (air-cooled)
- Power supply: Single-phase 200-240 V
- OS: RHEL 7.4 (little endian)