Custom Training for Deep Learning

You can customize the environment for deep learning-based AI training and start training within one minute.

Easy training server setup

You can quickly and easily configure a server suited to your artificial intelligence operating environment.

Customized Training Development Support

You can develop custom artificial intelligence by coding freely in the Jupyter environment.

Model training using Magic Code

You can utilize the automatically generated training code provided by DS2.ai's Magic Code.

Rapid deployment via API integration

Developed AI can be easily deployed by utilizing DS2.ai's auto-generated API.

Real-time server visualization dashboard

You can monitor the training server in real time and respond to issues.

Custom Training

It is easy to configure servers and develop your own models.

CLICK AI's Custom Training makes it easy to set up a cloud training server and develop artificial intelligence in a Jupyter environment.

Customized Training Server Configuration

Even without the expertise to deploy back-end servers, you can easily configure a customized training server with the desired performance.

Code in the Jupyter environment

In the Jupyter environment, you can code your desired algorithm directly and develop artificial intelligence by tuning hyperparameters.

Automatically Generated Magic Code

Magic Code is generated automatically, so even non-experts can start developing artificial intelligence just by copying and pasting it into Jupyter.

Learn more →

Easy Deep Learning Training Server Setup

DS2.ai's CLICK AI supports easy configuration of a custom server environment for custom training.

Setting up your own training server environment

After renting a GPU server, you set up the environment yourself and start model development by linking it with Jupyter Notebook, as sketched after the list below.

  • Set GPU server performance and rent or purchase a server
  • Manually configure the GPU server environment
  • Install Jupyter Notebook and deep learning libraries
  • Manually integrate the Jupyter Notebook environment with the GPU server
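
As a rough illustration of the manual route (assuming Jupyter Notebook is already installed on the rented server; the packages below are examples, not DS2.ai requirements), the remaining library setup from a notebook cell looks roughly like this:

# Illustrative manual library setup from a Jupyter cell on the rented GPU server
!pip install torch torchvision       # deep learning libraries of your choice
!pip install opencv-python numpy     # common extras for vision workloads
!nvidia-smi                          # confirm the server's GPU is visible
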
Environment setup through CLICK AI custom training

With simple performance settings, a custom server is created automatically for both the cloud-based and the server-installed service, so you can start model development in the Jupyter environment right away.

  • Training can be performed immediately after configuring and running the server at the start of the project

Training environment for custom AI

Freely develop custom artificial intelligence in the Jupyter environment running on the generated GPU training server.

Selectable service provision methods

The CLICK AI custom training function supports every training server setup, whether cloud-based or server-installed, and whether you use an existing server or configure a new one.

Public Cloud

  • Cloud server-based service that requires only a cloud provider account and a GPU server
  • The custom training function allows instant configuration and training

Private Cloud

  • A service used by simply installing DS2.ai on a private server with the desired performance
  • Private server lease type with high security

Enterprise

  • Purchase a GPU server, or utilize an existing one, and access the service by installing DS2.ai
  • Leverages on-premises security for high security

Services available anywhere

Any desktop or laptop with Internet access can use the service, regardless of location.

GPU-based deep learning training environment

You can select and configure a GPU-based (rather than CPU-based) dedicated deep learning training server with the desired performance.
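
As a quick sanity check once such a server is running, a minimal sketch (assuming PyTorch is installed in the environment) that confirms the GPU is visible from the notebook:

import torch

# True on a correctly configured GPU training server
print(torch.cuda.is_available())
if torch.cuda.is_available():
    # Name of the GPU attached to this environment
    print(torch.cuda.get_device_name(0))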

Set up your training environment in 1 minute

Just start a custom training project and everything from the deep learning GPU server to the libraries is prepared, giving you an environment where training can begin immediately.

Multi-clustering support

Multiple GPUs can be clustered to process large datasets quickly and efficiently.
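
For illustration only, a minimal PyTorch DataParallel sketch of the kind of multi-GPU training such clustering enables; the model and data here are placeholders, not DS2.ai APIs:

import torch
import torch.nn as nn

# Placeholder model; in practice this is your own network
model = nn.Linear(128, 10)

# Replicate the model across all GPUs visible on the training server
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to("cuda")

# A dummy batch is split across the available GPUs automatically
x = torch.randn(64, 128, device="cuda")
print(model(x).shape)  # torch.Size([64, 10])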

Flexible GPU server performance choices

Depending on your needs and usage, you can add cloud GPUs in a multi-cloud environment or split the GPUs of a physical server.
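
As one common way to dedicate only part of a physical server's GPUs to a given training process (a general CUDA convention rather than a DS2.ai-specific setting):

import os

# Expose only GPUs 0 and 1 to this process (assumes the server has at least two GPUs);
# other processes can use the remaining GPUs
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import torch  # import after setting the variable so CUDA sees only those GPUs
print(torch.cuda.device_count())  # 2 from this process's point of view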

Convenient modeling and management through SDK

You can conveniently access, model, and manage custom training projects through the Python-based SDK.

SDK support for convenient programming development

With the provided Python-based SDK, you can conveniently work from code in the Jupyter environment, and you can use all DS2.ai functions, including the artificial intelligence you have developed, through the SDK.

Learn more →
    
from ds2ai import DS2

# Authenticate with your DS2.ai app token and run a quick prediction
ds2 = DS2(apptoken="s2234k3b4")
ds2.predict(
    "people.jpg",
    quick_model_name="person",
    # model_id=20000  # Or you can use your own customized AI model instead.
)


{
    "images": [
        {
            "id": "60a212aac869a1fea276480d",
            "file_name": "/images/img_labelingExample.jpg",
            "width": 4000,
            "height": 2084
        }
    ],
    "type": "instances",
    "annotations": [
        {
            "segmentation": [
                [
                    1200,
                    907,
                    1200,
                    1882,
                    2903,
                    1882,
                    2903,
                    907
                ]
            ],
            "area": 1660425,
            "iscrowd": 0,
            "ignore": 0,
            "image_id": "60a212aac869a1fea276480d",
            "bbox": [
                1200,
                907,
                1703,
                975
            ],
            "category_id": 2621,
            "id": "60a216ae2cd9eb1bbde44e2b"
        }
    ],
    "categories": [
        {
            "supercategory": "none",
            "id": 2620,
            "name": "person"
        },
        {
            "supercategory": "none",
            "id": 2621,
            "name": "person"
        },
        {
            "supercategory": "none",
            "id": 2622,
            "name": "person"
        }
    ]
}
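
To show how this COCO-style structure can be consumed downstream, a minimal parsing sketch in plain Python (the file name result.json is an illustrative assumption):

import json

# Load the COCO-style output shown above
with open("result.json") as f:
    coco = json.load(f)

# Map category ids to names, then print each annotation's label and bounding box
categories = {c["id"]: c["name"] for c in coco["categories"]}
for ann in coco["annotations"]:
    label = categories.get(ann["category_id"], "unknown")
    x, y, w, h = ann["bbox"]  # COCO bbox format: x, y, width, height
    print(f"{label}: bbox=({x}, {y}, {w}, {h}), area={ann['area']}")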

Rapid artificial intelligence development using Magic Code

Using Magic Code, one of CLICK AI's functions, you can automatically generate artificial intelligence training code and start training in the Jupyter environment, or start custom training by tuning the generated code.

Learn more →
# Install PyYAML, PyTorch (CUDA 10.1 builds), Detectron2, and the ds2ai SDK
!pip install pyyaml==5.1
!pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html

!pip install ds2ai
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()

import numpy as np
import os, json, cv2, random
from google.colab.patches import cv2_imshow

from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
import requests
import os
import time
from ds2ai import DS2
import ast
import zipfile
import torch, torchvision
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
from detectron2.data.datasets import register_coco_instances

# Connect to DS2.ai with your app token and fetch the project and its first model
ds2 = DS2(apptoken="----")
project = ds2.get_project(14883)
model = project.models[0]
file_url = None

# Download the training data: use the project file URL if present,
# otherwise export it from the linked labeling project and wait for the export
if not os.path.exists("./data"):
    if project.filePath:
        file_url = project.filePath
    else:
        label_project = ds2.get_labelproject(project.labelproject)
        async_task = label_project.export(is_get_image=True)
        for i in range(0, 1000):
            time.sleep(10)
            async_task = ds2.get_asynctask(async_task.id)
            if async_task.status == 100:
                file_url = async_task.outputFilePath
                break

    if not file_url:
        raise Exception("Please upload the training file.")

    file_name = file_url.split("/")[-1]
    response = requests.get(file_url)
    with open(file_name, 'wb') as output:
        output.write(response.content)

    os.makedirs("./data", exist_ok=True)
    os.makedirs("./models", exist_ok=True)

    with zipfile.ZipFile(file_name, 'r') as zf:
        zf.extractall(path="./data")
        zf.close()

print("project.id")
print(project.id)

# Paths to the exported COCO annotation file and the image root
configFile = "data/coco.json"
fileRoute = "data/"

configFileValid = None

if os.path.exists("data/cocovalid.json"):
    configFileValid = "data/cocovalid.json"

# Register the training (and optional validation) datasets with Detectron2;
# ignore the error if they are already registered in this session
try:
    register_coco_instances(f"{model.id}", {}, configFile, fileRoute)
    if configFileValid:
        register_coco_instances(f"{model.id}_valid", {}, configFileValid, fileRoute)
except:
    pass


# Build the Detectron2 training config from a Mask R-CNN model in the model zoo
cfg = get_cfg()

cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml")
cfg.DATASETS.TRAIN = (f"{model.id}",)
if configFileValid:
    cfg.DATASETS.TEST = (f"{model.id}_valid",)
else:
    cfg.DATASETS.TEST = ()

cfg.DATALOADER.NUM_WORKERS = 1
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.8
cfg.SOLVER.IMS_PER_BATCH = 1
cfg.SOLVER.BASE_LR = 0.02
cfg.SOLVER.MAX_ITER = 300
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128

if project.yClass:
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(ast.literal_eval(project.yClass))
cfg.OUTPUT_DIR = f"./models/"

# Train the model with Detectron2's default trainer
trainer = DefaultTrainer(cfg)

trainer.resume_or_load(resume=False)
trainer.train()
# Evaluate the trained model with COCO bbox and segmentation metrics
evaluator = COCOEvaluator(f"{model.id}", ("bbox", "segm"), False, output_dir="./output/")
val_loader = build_detection_test_loader(cfg, f"{model.id}",)
bbox = inference_on_dataset(trainer.model, val_loader, evaluator).get("bbox")
print(bbox)

# Upload the trained weights and metrics back to DS2.ai
learn_path = "./models/model_final.pth"
files = {'uploadedModel': open(learn_path, 'rb')}
values = {'apptoken': '----', 'project': 14883, 'bbox': bbox}  # TODO
r = requests.post('https://api.ds2.ai/predictmodelfromcolab/', files=files, data=values)  # TODO


# Pick one sample image from the extracted data for a quick visual check
image_sample_file_path = None
for root, dirs, images in os.walk("./data"):
    if '__MACOSX' in root:
        continue
    for image in images:
        if not image.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        image_sample_file_path = f"{root}/{image}"
        break
    if image_sample_file_path:
        break

# Run inference on the sample image and visualize the predictions
im = cv2.imread(image_sample_file_path)
predictor = DefaultPredictor(cfg)
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])
print("You can go back to the ds2.ai to check the output details.")

Easily configure your training server and start custom training.