The core idea of Artificial Intelligence systems integration is making individual software components, such as speech synthesizers, interoperable with other components, such as common sense knowledge bases, in order to create larger, broader and more capable A.I. systems. The main method proposed for integration is message routing: communication protocols that the software components use to exchange information with each other, often through a middleware blackboard system.
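The blackboard idea can be sketched in a few lines: components post messages to a shared board and subscribe to the message types they care about. This is a minimal illustration of the pattern, not any particular middleware; all names are invented for the example.

```python
# Minimal sketch of a blackboard-style middleware: components post
# messages to a shared board and subscribe to message types they
# care about. All names here are illustrative, not a real framework.
from collections import defaultdict

class Blackboard:
    def __init__(self):
        self.subscribers = defaultdict(list)  # message type -> callbacks

    def subscribe(self, msg_type, callback):
        self.subscribers[msg_type].append(callback)

    def post(self, msg_type, payload):
        # Deliver the message to every component subscribed to its type.
        for callback in self.subscribers[msg_type]:
            callback(payload)

# Example: a speech recognizer posts text; another component consumes it.
board = Blackboard()
heard = []
board.subscribe("speech.recognized", heard.append)
board.post("speech.recognized", "hello world")
print(heard)  # ['hello world']
```

The key property is that the recognizer and the consumer never reference each other directly; the board decouples them, which is what makes each component a swappable module.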
Most artificial intelligence systems involve some sort of integrated technologies, for example the integration of speech synthesis with speech recognition. However, in recent years there has been increasing discussion of the importance of systems integration as a field in its own right. Proponents of this approach include researchers such as Marvin Minsky, Aaron Sloman, Deb Roy, Kristinn R. Thórisson and Michael A. Arbib. One reason for the recent attention A.I. integration is attracting is that a number of (relatively) simple A.I. systems for specific problem domains (such as computer vision and speech synthesis) have already been created, and integrating what is already available is a more logical approach to broader A.I. than building monolithic systems from scratch.
The focus on systems integration, especially with regard to modular approaches, derives from the fact that most intelligences of significant scale are composed of a multitude of processes and/or utilize multi-modal input and output. For example, a humanoid intelligence would need to be able to talk using speech synthesis, hear using speech recognition, understand using a logical (or some other, as yet undefined) mechanism, and so forth. Producing artificially intelligent software of broader intelligence requires the integration of these modalities.
Collaboration is an integral part of software development, as evidenced by the size of software companies and of their software departments. Among the tools that ease software collaboration are procedures and standards that developers can follow to ensure quality, reliability and compatibility with software created by others (such as the W3C standards for webpage development). However, collaboration in the fields of A.I. has been lacking, for the most part not seen outside of the respective schools, departments or research institutes (and sometimes not within them either). This presents practitioners of A.I. systems integration with a substantial problem, and often forces A.I. researchers to ‘re-invent the wheel’ each time they want a specific functionality to work with their software. Even more damaging is the “not invented here” syndrome, which manifests itself in a strong reluctance among A.I. researchers to build on the work of others.
The outcome of this in A.I. is a large set of “solution islands”: A.I. research has produced numerous isolated software components and mechanisms that deal with various parts of intelligence separately. To take some examples:
FreeTTS, a speech synthesizer written in Java (derived from CMU’s Flite)
Sphinx, a speech recognition system from CMU
With the increased popularity of the free software movement, much of the software being created, including A.I. systems, is available for public use. The next natural step is to merge these individual software components into coherent, intelligent systems of a broader nature. Since a multitude of components (that often serve the same purpose) have already been created by the community, the most accessible route to integration is giving each of these components an easy way to communicate with the others. By doing so, each component becomes a module that can be tried in various settings and configurations of larger architectures.
Many online communities for A.I. developers exist, where tutorials, examples and forums aim to help both beginners and experts build intelligent systems (for example the AI Depot or Generation5). However, few communities have succeeded in popularizing a standard or code of conduct that would allow the large collection of miscellaneous systems to be integrated with any ease. Recently, however, there have been focused attempts at producing standards for A.I. research collaboration. Mindmakers.org is an online community specifically created to foster collaboration in the development of A.I. systems. The community has proposed the OpenAIR message routing protocol for communication between software components, making it easier for individual developers to make modules that can be instantly integrated into other people’s projects.
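To make the routing idea concrete, a message in such a protocol typically carries a type (which the middleware routes on), a sender, and a payload. The following is a hypothetical sketch in that spirit; the field names and JSON encoding are assumptions for illustration, not the actual OpenAIR wire format.

```python
# Hypothetical sketch of a routed message between A.I. modules, in the
# spirit of publish/subscribe protocols such as OpenAIR. The field names
# and JSON encoding are assumptions, not the real OpenAIR wire format.
import json
import time

def make_message(msg_type, sender, content):
    return {
        "type": msg_type,        # the middleware routes on this field
        "from": sender,          # originating module name
        "timestamp": time.time(),
        "content": content,
    }

msg = make_message("input.speech.text", "speech-recognizer-1", "hello")
wire = json.dumps(msg)           # serialized for transport
received = json.loads(wire)
print(received["type"])          # input.speech.text
```

Because routing depends only on the `type` field, a module written by one developer can be dropped into another developer’s system as long as both agree on the message types.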
The Constructionist design methodology (CDM, or ‘Constructionist A.I.’) is a formal methodology proposed in 2004 for use in the development of cognitive robotics, communicative humanoids and broad A.I. systems. The creation of such systems requires the integration of a large number of functionalities that must be carefully coordinated to achieve coherent system behavior. CDM is based on iterative design steps that lead to the creation of a network of named interacting modules, communicating via explicitly typed streams and discrete messages. The OpenAIR message protocol (see above) was inspired by the CDM and has frequently been used to aid in the development of intelligent systems with CDM.
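The notion of “named modules with explicitly typed streams” can be sketched as follows: each module declares the types of its input and output streams, and a connection is only permitted when the types match. The module and stream names below are invented for illustration; CDM itself is a design methodology, not this code.

```python
# Sketch of CDM-style wiring: named modules declare explicitly typed
# input and output streams, and a connection is only allowed when the
# stream types match. All module and stream names are illustrative.
class Module:
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = inputs      # stream name -> message type
        self.outputs = outputs    # stream name -> message type

def connect(src, out_stream, dst, in_stream):
    out_type = src.outputs[out_stream]
    in_type = dst.inputs[in_stream]
    if out_type != in_type:
        raise TypeError(f"{src.name}.{out_stream} ({out_type}) does not "
                        f"match {dst.name}.{in_stream} ({in_type})")
    return (src.name, out_stream, dst.name, in_stream)

ears = Module("speech-recognizer", {}, {"text.out": "UtteranceText"})
brain = Module("dialogue-manager", {"text.in": "UtteranceText"}, {})
link = connect(ears, "text.out", brain, "text.in")
print(link)
```

Catching type mismatches at wiring time, rather than at runtime, is what lets a large network of modules from different authors be coordinated into coherent system behavior.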
One of the first projects to use CDM was Mirage, an embodied, graphical agent visualized through augmented reality which could communicate with human users and talk about objects present in the user’s physical room. Mirage was created by Kristinn R. Thórisson, the creator of CDM, and a number of students at Columbia University in 2004. The methodology is actively being developed at Reykjavik University.
“The issue of cyber security is not about Huawei; it is built into the old internet, where you can easily get hacked and your communications listened to. I have already proved this without a doubt, and those in the industry already know it. We seek the co-operation of the world to build the Next Gen Internet, which will be encrypted with keys that cannot be hacked, with no way to break the chains of crypto technology. The US is just politically motivated to spread their agenda.” Contributed by Oogle.
You need to modify the TCP, UDP, SCTP and DCCP protocols to support multi-layered centralised and decentralised networks with a new TCIP – TCP8.
The Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) need only one port for full-duplex, bidirectional traffic. The Stream Control Transmission Protocol (SCTP) and the Datagram Congestion Control Protocol (DCCP) also use port numbers. They usually use port numbers that match the services of the corresponding TCP or UDP implementations, where those exist.
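The full-duplex point above can be demonstrated directly: a single TCP connection (one port on each end) carries data in both directions. The sketch below runs an echo server and a client on localhost, with the OS choosing a free port.

```python
# Demonstration that a single TCP connection is full-duplex: one port
# on each end carries traffic in both directions. Runs on localhost.
import socket
import threading

server = socket.socket()
server.bind(("127.0.0.1", 0))     # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()
    data = conn.recv(1024)        # receive on the connection...
    conn.sendall(data.upper())    # ...and send back on the same one
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")           # client -> server
reply = client.recv(1024)         # server -> client, same port pair
client.close()
t.join()
server.close()
print(reply)  # b'PING'
```

The same four-tuple (client address/port, server address/port) identifies both directions of the conversation, which is why one port per endpoint suffices.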
The Internet Assigned Numbers Authority (IANA) is responsible for maintaining the official assignments of port numbers for specific uses. However, many unofficial uses of both well-known and registered port numbers occur in practice. Similarly, many of the official assignments refer to protocols that were never, or are no longer, in common use.
Interpolation between all protocols
Unlimited number of ports
Routing and Forwarding
New 3D Database
All File systems supported
Artificial Intelligence with Deep learning
Embed everything with metadata for high-speed searches
Totally new Intelligent OS with natural language interface
Brand new API interface
New browser to support 3D presentation with matrix, zones and containers
Support for hundreds of thousands of threads per second
New Architecture for Quantum computing
New design of CPU, memory and cache management
Security: new encryption keys for communication and authentication
I will start by registering a Pte Ltd in Singapore that will offer the roadmap of this incredibly difficult project, with a proof of concept and a white paper, before the end of 2020. I will issue preference shares to my investors, who will mostly be those who benefit directly, get dividends in return and use all these new technologies. I do not believe in wasting resources like the fragmentation of Bitcoin, and I want to get it right so it will last for the next 100 years, when someone greater than me comes along. In order to minimise the risk of exposure over the roughly ten-year term until 2030, I will get a term life insurance policy against death, starting from S$1 million and up to S$100 million, which will lapse on the expiration of my project. Normal shareholders will have voting rights, but I intend to hold at least 51% of this company and only sit on the Board. I am so confident of achieving all my goals that I will have no lack of investors, as I have already proved that I am an expert in technology, the economy, investments and predicting the future. Upon completion of this project I can even fund the IMF/World Bank and set up the Economy of Abundance. By that time I will turn my non-profit into a Foundation and create assets and income so that it has the means of lasting forever once global poverty is solved.
That means I can use this technology to piggyback on existing fibre connections to create speeds 100, 1,000, even 10,000 times beyond present technology and capacity. I can modify existing 5G networks into an OPEN or a CLOSED network, where police, security and medical personnel use the OPEN network and the rest of the world uses the CLOSED network; even 5G wireless routers can support this feature. The technology of using sound to piggyback on existing fibre can increase your capacity by 100x to 10,000x; all it takes is a slight modification, which can be tested and implemented in months, before we implement our 5G networks in 2020. I can even use a VPN service to secure this OPEN network, a layer of security that most countries do not implement in their 5G networks. You can use existing 4G fibre networks to upgrade to 5G. All you need to do is make sure your new 5G equipment conforms to the spectrum you are allocated, which can easily be customised to handle both OPEN and CLOSED, as it is unlikely to exceed your capacity, even for wireless routers. Contributed by Oogle.
Scientists have perfected a new technology that can transform a fibre optic cable into a highly sensitive microphone capable of detecting a single footstep from up to 40km away.
Guards at listening posts protecting remote sensitive sites from attackers such as terrorists or environmental saboteurs can eavesdrop across huge tracts of territory using the new system which has been created to beef up security around national borders, railway networks, airports and vital oil and gas pipelines.
Devised by QinetiQ, the privatised Defence Evaluation and Research Agency (DERA), the technology piggybacks on the existing fibre optic communication cable network, millions of miles of which have been laid across the world.
Trials have already been staged in Europe to use the OptaSense system, which evolved out of military sonar and submarine technology, on railways to prevent vandals or thieves trespassing on high-speed lines as well as to counter terrorism. It has been deployed by several blue chip oil companies to protect energy pipelines which run through some of the most lawless and remote regions of the world.
Oil and gas companies lose millions of pounds each year through “hot tapping” in which thieves siphon off oil to sell. The process can be dangerous, resulting in explosions which have claimed hundreds of lives as well as causing serious environmental damage. Its creators say the system can also safeguard against accidental damage caused by builders and farmers working close to pipelines in Europe and North America. But it is hoped the technology will be rolled out to enhance security arrangements at prestige sites, among them Heathrow’s Terminal 5 or the Olympic Games and to protect major gatherings of world leaders such as during the G8, which has become an increasing magnet for protest movements.
The system works by picking up tiny seismic waves detected under the ground by the fibre optic cable which carries an optical pulse sent from a central computer. Virtual “microphones” created remotely every 10 metres along the cable register the vibrations through the ground. The patterns caused by the disturbances are then matched to digitally pre-sampled sounds such as footsteps, cars or diggers and the information fed back to a command centre where security personnel are able to deploy drones or even armed response teams to check out the threat.
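The matching step described above can be illustrated with a toy classifier: compare a detected vibration trace against pre-sampled templates and pick the closest one. Real distributed acoustic sensing systems use far more sophisticated signal processing; the templates and the cosine-similarity measure here are purely illustrative.

```python
# Toy sketch of the matching step: compare a detected vibration trace
# against pre-sampled templates and pick the closest one. The templates
# and the similarity measure are purely illustrative.
import math

def similarity(a, b):
    # Normalized dot product (cosine similarity) of two equal-length traces.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

templates = {
    "footsteps": [0, 1, 0, -1, 0, 1, 0, -1],   # sharp periodic impulses
    "vehicle":   [1, 1, 1, 1, 1, 1, 1, 1],     # sustained rumble
}

def classify(trace):
    # Return the template name whose shape best matches the trace.
    return max(templates, key=lambda name: similarity(trace, templates[name]))

detected = [0, 0.9, 0.1, -1.1, 0, 1.0, -0.1, -0.9]
print(classify(detected))  # footsteps
```

In a deployed system this comparison would run continuously for each virtual microphone, so a match also localizes the disturbance along the cable.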
The system is sensitive enough to detect sounds 40km away, along the line of the cable. It can also pick up sounds perpendicular to the cable: the sound of someone approaching on foot 30 metres away or a vehicle 50 metres away.
At present, the microphones are not able to pick up the sound of human speech. Magnus McEwen-King, managing director of OptaSense, said: “We take a standard telecoms cable and, without changing its structure, install our technology to create thousands of virtual microphones along the length of the cable.
“What you get is an intelligent hearing device, buried underground, which can monitor borders, perimeters or property for intruders. Optasense not only detects but identifies an approaching threat and alerts you to the location so that you can take necessary action to prevent intentional or accidental damage.
“People are amazed when they see that it can be configured to tell different types of vehicles apart… or tell if a person is walking or running towards the area you are monitoring.”