Research Institute
Maintaining the driving force of innovation and exploration
DCS Research Institute

DCS Research Institute is the R&D division of Deheng Group. It is engaged in big data operation and maintenance and engine research and development, and undertakes major national and municipal science and technology projects. The institute has set up a big data State Key Laboratory, a cloud computing data center, an artificial intelligence innovation center, a multimedia big data maker center, a science project department, and DCS R&D centers in Silicon Valley and Tel Aviv. It provides the group and its subsidiaries with algorithm implementation as well as driving-behavior-based and scenario-based cloud data center services, and carries out incubation preparation for multi-channel data block floating-point quantization processors, integrated data centers for film and television scenes, quantum signal encoding and storage, and ubiquitous data centers.

Heterogeneous computing
Heterogeneous computing refers to a computing approach in which computing units with different instruction sets and architectures are combined into one system. The computing units include CPUs, GPUs, FPGAs, and so on. The key problem is how to automatically allocate each computing task to the chip best suited to processing it, so as to achieve the highest energy-efficiency ratio and transistor utilization. In a typical system, the bus led out by the CPU serves as the backbone, and the other modules hang on this bus. For example, when a CPU-based system is built on an FPGA, the hardware logic is configured from a configuration chip at power-up; the software image is then read and loaded into SDRAM, where the software runs.
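The allocation problem described above can be sketched in a few lines: score each compute unit per task type and route every task to the highest-scoring unit. This is a toy model, not a real scheduler; the unit names, task types, and efficiency numbers are hypothetical.

```python
# Toy heterogeneous-dispatch sketch (illustrative only): assign each task
# to the compute unit with the best energy-efficiency score for its type.
# All scores below are made-up placeholders.
EFFICIENCY = {
    "CPU":  {"control": 1.0, "matmul": 0.2, "stream": 0.3},
    "GPU":  {"control": 0.1, "matmul": 1.0, "stream": 0.6},
    "FPGA": {"control": 0.2, "matmul": 0.5, "stream": 1.0},
}

def dispatch(task_type):
    """Return the compute unit with the highest score for this task type."""
    return max(EFFICIENCY, key=lambda unit: EFFICIENCY[unit][task_type])

print(dispatch("matmul"))   # GPU
print(dispatch("stream"))   # FPGA
```

A real runtime would also weigh data-transfer cost and current device load, but the core idea is the same lookup-and-maximize step.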
Multi-channel data
Data arranged in a single direction forms a one-way array, i.e. a vector. A matrix is a two-way array, with data arranged in both the horizontal and vertical directions. A tensor is a multi-way array representation of data; the most commonly used tensor is the third-order tensor. The one-way arrays of a third-order tensor are no longer called row vectors or column vectors but tensor fibers: the horizontal, vertical, and longitudinal fibers of the third-order tensor. Higher-order tensors can also be represented by sets of matrices; for a third-order tensor, these matrices form its horizontal, lateral, and frontal slices. In tensor analysis and computation, a third-order tensor can be reorganized or rearranged into a matrix, an operation known as matricization or unfolding.
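The fiber, slice, and unfolding terminology above can be made concrete with NumPy. This is a minimal sketch; which axis counts as "horizontal" versus "vertical" varies between texts, so the axis naming here is one common convention, and the column ordering of the unfolding below follows NumPy's default C layout rather than any particular textbook.

```python
import numpy as np

# A 3 x 4 x 2 third-order tensor filled with 0..23.
T = np.arange(24).reshape(3, 4, 2)

# Fibers: fix all indices but one.
col_fiber  = T[:, 0, 0]   # mode-1 fiber (vary the first index)
row_fiber  = T[0, :, 0]   # mode-2 fiber (vary the second index)
tube_fiber = T[0, 0, :]   # mode-3 (longitudinal) fiber (vary the third index)

# Slices: fix exactly one index to get a matrix.
frontal_slice = T[:, :, 0]   # a 3 x 4 matrix

# Mode-1 matricization (unfolding): rearrange the tensor into a matrix
# whose columns are mode-1 fibers.
T1 = T.reshape(3, -1)        # shape (3, 8)

print(col_fiber)             # [ 0  8 16]
print(T1[:, 0])              # [ 0  8 16]  (first column is a mode-1 fiber)
```

Note that `T1[:, 0]` equals `col_fiber`: every column of the unfolded matrix is one mode-1 fiber of the original tensor, which is exactly the reorganization the paragraph describes.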
Intelligent park services
The speed of a sequential circuit is limited by the longest path between any two registers, between an input and a register, or between a register and an output. Pipeline design uses registers to segment complex combinational logic according to the expected critical-path delay. Pipeline registers are inserted at key positions in the combinational logic, dividing it into groups with shorter paths. The placement of these registers is determined by a feed-forward cutset of the data path, which keeps the data in each stage synchronized. Pipelining reduces the number of logic levels in each stage of combinational logic and shortens the data paths between storage elements, allowing a higher clock frequency.
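The timing benefit can be illustrated with a toy model: without pipeline registers the clock period must cover the whole combinational path, while with registers it only needs to cover the slowest segment. The stage delays below are hypothetical numbers, not measurements of any real circuit.

```python
# Toy pipeline-timing sketch (illustrative): registers split one long
# combinational path into segments, so the minimum clock period drops
# from the sum of the delays to the maximum single-segment delay.
# Register setup/clock-to-Q overhead is ignored for simplicity.
stage_delays_ns = [3.0, 2.5, 4.0]   # hypothetical segment delays

unpipelined_period = sum(stage_delays_ns)   # whole path in one cycle
pipelined_period   = max(stage_delays_ns)   # slowest segment per cycle

print(unpipelined_period)   # 9.5
print(pipelined_period)     # 4.0
```

In a real design each inserted register also adds setup time and clock-to-Q delay, so the achievable period is slightly larger than the bare maximum segment delay.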
Semantic Retrieval
URIs identify the initial dataset, which can be viewed as massive raw data requiring further classification and refinement. This refinement is accomplished with the Resource Description Framework (RDF). The same word can have different meanings and even different usages; this ambiguity problem can be addressed by introducing ontologies. However, semantic retrieval based on URIs, RDF, and ontologies is not yet well integrated with the mainstream Web. At present, the simplest approach is to embed descriptions directly in web pages. To make the structure of a page clearer, the new semantic elements in HTML5 can be used; these elements give additional meaning to the content they mark up.
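As a small illustration of why semantic elements help, the sketch below parses a made-up page with Python's standard-library `html.parser` and extracts only the text inside `<article>`, ignoring `<nav>`. Because HTML5 elements name what the content *is*, even this trivial parser can separate the main content from navigation; the page markup here is invented for the example.

```python
from html.parser import HTMLParser

# A made-up page using HTML5 semantic elements.
PAGE = """
<nav><a href="/">Home</a></nav>
<article><h1>Edge Computing</h1><p>Computing near the data source.</p></article>
"""

class ArticleText(HTMLParser):
    """Collect text that appears inside an <article> element."""
    def __init__(self):
        super().__init__()
        self.in_article = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.in_article = True

    def handle_endtag(self, tag):
        if tag == "article":
            self.in_article = False

    def handle_data(self, data):
        if self.in_article and data.strip():
            self.chunks.append(data.strip())

p = ArticleText()
p.feed(PAGE)
print(p.chunks)   # ['Edge Computing', 'Computing near the data source.']
```

With non-semantic markup (everything in `<div>` elements), the parser would have no structural cue to tell the article apart from the navigation bar.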
Multi-agent technology studies how a set of autonomous agents, in a dynamic and open environment, completes complex system control and problem-solving tasks through interaction, cooperation, competition, negotiation, and other behaviors. Each agent has explicit goals; by perceiving its own internal state and environmental information and communicating with other agents, it improves its reasoning and control abilities and contributes to solving the problem. Multi-agent systems relax the limitations of centralized control, centralized planning, and sequential control; they provide decentralized control, emergency response, and parallel processing, reduce software and hardware cost, and make problem solving more convenient.
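One of the simplest negotiation patterns in such systems is a contract-net-style auction: agents bid on a task based on their own state, and the task is awarded to the best bid. The sketch below is a deliberately minimal model; the agent names, task name, and fitness scores are all hypothetical.

```python
# Minimal contract-net-style allocation sketch (illustrative only):
# each agent reports a fitness score for the task, and the task is
# awarded to the highest bidder. Scores here are made-up placeholders.
agents = {"A": 0.4, "B": 0.9, "C": 0.7}   # agent -> self-assessed fitness

def allocate(task, bids):
    """Award the task to the agent with the highest bid."""
    return max(bids, key=bids.get)

print(allocate("inspect-zone-3", agents))   # B
```

Decentralized control falls out naturally: no central planner needs to know how each agent computed its bid, only the bids themselves.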
Edge Computing
Edge computing refers to a new computing model that performs computation at the edge of the network. The computation covers two parts: downlink cloud services and uplink Internet of Things services. The "edge" in edge computing is a relative concept, referring to any computing, storage, and network resources on the path between the data source and the cloud computing center. The resources on this path can be viewed as a "continuum" from one end to the other. Depending on specific needs and actual scenarios, the edge can be one or more resource nodes on this path. The core idea of edge computing is that computing should be closer to the source of the data and closer to the user.
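A simple placement policy captures the "continuum" idea: walk the path from the data source toward the cloud and run the task on the first node with enough capacity, so computation stays as close to the source as possible. The node names and capacity figures below are hypothetical.

```python
# Nearest-first placement sketch (illustrative): the path from data
# source to cloud is an ordered list of resource nodes; pick the first
# node that can satisfy the task's resource demand. Numbers are made up.
path = [  # ordered from the data source (edge) toward the cloud
    {"name": "gateway",      "capacity": 2},
    {"name": "base-station", "capacity": 8},
    {"name": "regional-dc",  "capacity": 64},
    {"name": "cloud",        "capacity": 1024},
]

def place(task_demand):
    """Return the closest node on the path that can host the task."""
    for node in path:
        if node["capacity"] >= task_demand:
            return node["name"]
    return "cloud"   # fall back to the cloud data center

print(place(4))     # base-station
print(place(100))   # cloud
```

Real edge schedulers also weigh latency, bandwidth, and energy, but the nearest-capable-node rule is the essence of "computing closer to the source".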
Algorithm implementation
Data center operation and maintenance
The research institute cooperates with top domestic and foreign universities and has established big data laboratories, cloud computing data centers, artificial intelligence innovation centers, multimedia big data maker centers, and scientific project departments, as well as DCS R&D centers in Silicon Valley and Tel Aviv.