
Technology trends in companies by 2020

The integration of emerging technologies over the past year, the continued development of AI at an accelerated pace, and digital transformation on the agenda of the leaders of large and small businesses and of the leading science, technology and business consultancies define the panorama for 2020: a roadmap for continuing to forge the excellence of society and the basis of our understanding of the future.

Below is a prediction of the top 10 scientific and technological trends for 2020. Some of these technologies are already common, but the proliferation of new use cases and applications will let industry take advantage of their benefits and opportunities.

Automation at 2020 level: hyperautomation. Hyperautomation takes task-automation applications to the next level. It applies advanced technologies, such as artificial intelligence (AI) and machine learning, to automate processes more and more and to augment human capabilities.

In some cases, hyperautomation requires the creation of a Digital Twin of the Organization (DTO), allowing you to visualize how functions, processes and key performance indicators interact to generate value. The DTO becomes an integral part of the hyperautomation process, providing continuous, real-time intelligence about the organization and surfacing meaningful opportunities. This tool brings intelligence to business management and allows you to discover, analyze, design, automate, measure, monitor and reevaluate the entire business model.

Multiexperience is the new experience. Starting in 2020, multiexperience will see the traditional idea of computing evolve from a single point of interaction to include multisensory and multitouch interfaces, such as wearable devices and advanced computer sensors. Over the next 10 years, this trend will become what is known as the “ambient experience”.

For now, multiexperience focuses on immersive experiences that use augmented reality, virtual reality, mixed reality, multichannel human-machine interfaces and sensing technologies.

Democratization, the democracy of 2020. The democratization of technology is about giving people easy access to technical or business expertise without extensive or expensive training. Known as “citizen access,” this trend will focus on four key areas: application development, data and analytics, design and knowledge. Democratization is expected to bring the emergence of citizen data scientists, citizen programmers and other forms of technology involvement.

Human augmentation. The controversial trend of human augmentation focuses on the use of technology to improve a person's cognitive and physical experiences. It carries a variety of cultural and ethical implications, such as the use of CRISPR technologies, as they involve DNA modification. Physical augmentation changes physical capacity when a technology is implanted in or worn on the body, such as the wearable devices that some industries use to increase worker productivity. Cognitive augmentation refers to the use of devices to enhance sensory capacities, biological function, the brain and genetics.

Increased transparency and traceability. Digitization has led to a crisis of confidence: as consumers become more aware of how their personal data is collected and used, organizations recognize their responsibility in storing and collecting it. In addition, the growing use of AI and machine learning to make decisions in place of humans drives the need for approaches such as explainable AI and AI governance. This trend requires a focus on the key elements of trust: integrity, openness, ethics, responsibility, competence and consistency, and on protective regulations such as the European Union's General Data Protection Regulation (GDPR).

The empowered edge. The trend towards edge computing technologies, reinforced by IoT and based on keeping traffic local and distributed, will reduce latency and costs. It involves a topology in which information processing, content collection and delivery are placed closer to the sources of information. The growing number of IoT devices augments and forms smart spaces, bringing key applications and services closer to the people and devices that interact with them. It is estimated that by 2023 there will be 20 times more smart devices at the edge of the network than in traditional IT.

The multicloud. The multicloud or distributed cloud consists of distributing public cloud services across different physical data centers of the cloud provider, while control remains with the provider, which stays responsible for the architecture, delivery, operation, control and updating of the cloud service.

This evolution from the centralized public cloud to the distributed public cloud marks the beginning of a new era of cloud computing, allowing data centers to be located anywhere and avoiding technical and regulatory problems such as latency and data sovereignty.

The most autonomous things yet. Autonomous things, such as drones, robots, cars, ships and other devices, take advantage of AI to perform work usually done by humans, with intelligence ranging from semi-autonomous to fully autonomous and environments including air, sea and land. Although their use remains confined to controlled environments, they will increasingly move into public and open spaces, and from independent swarms to collaborative swarms.

Practical blockchain. Today's enterprise blockchain takes a hands-on approach and implements only some elements of a complete blockchain: everyone with authorized access sees the same information, and integration is simplified by having a single shared ledger. The real, complete blockchain has the potential to transform industries and eventually the economy, as complementary technologies such as AI and IoT begin to integrate with it, expanding the types of participants to include machines, which will be able to exchange a variety of assets. For example, a car could negotiate the price of a repair based on data collected by its sensors. In addition, blockchain is expected to be fully scalable by 2023.

AI security. The evolution of technological trends such as hyperautomation and autonomous things offers opportunities for transformation in companies. However, it also creates new attack points and security vulnerabilities.

Security teams should be aware of these challenges and reflect on how AI will affect security.

AI security in the future will have three key aspects: first, protecting AI systems, securing AI data, training pipelines and machine learning models; second, leveraging AI to improve security defenses, using machine learning to understand patterns, uncover attacks and automate parts of the cybersecurity process; and third, anticipating the use of AI by attackers, identifying those attacks and defending against them.
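To make the second aspect concrete, here is a minimal, purely illustrative sketch of using statistical pattern analysis for defense: a baseline of normal activity is learned from historical counts, and values that deviate sharply from it are flagged as possible attacks. The data and threshold below are invented for the example; real systems use far richer models.

```python
from statistics import mean, stdev

def find_anomalies(counts, threshold=2.0):
    """Flag values deviating more than `threshold` standard
    deviations from the historical mean (a crude attack signal)."""
    mu = mean(counts)
    sigma = stdev(counts)
    return [c for c in counts if sigma and abs(c - mu) / sigma > threshold]

# Hourly login counts; the spike at 950 is our simulated attack.
history = [100, 104, 98, 101, 99, 103, 97, 102, 950]
print(find_anomalies(history))  # → [950]
```

The same idea, applied to network traffic, API calls or sensor readings, is what lets a defensive system surface patterns a human would miss.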

These trends evoke futuristic visions. Back in the 1980s, watching Blade Runner, we wondered: can a machine be more human than a man? The film's action takes place in November 2019, that is, in our present.

Hyperautomation, together with digital twins, blockchain, AI security, the distributed cloud and autonomous things, drives disruption and creates strategic technological opportunities.

These trends do not exist in isolation; business leaders will need to decide which combination of technologies will drive innovation and strategy for their business models.

“Hyperautomation requires the creation of an organization’s Digital Twin, allowing you to visualize how functions, processes and key performance indicators interact to generate value”

Want to start hyperautomation with NOA?

A basic dictionary of Industry 4.0


When a company wants to get started in Industry 4.0, its managers will come across a number of concepts that have some technical complexity and that, in many cases, are not well explained, or not all in one place. To facilitate understanding and save searches on the vocabulary used in the sector, we have published this basic dictionary of Industry 4.0, in which we briefly explain the most common concepts.

Industry 4.0: This concept requires an explanation in itself. When we refer to Industry 4.0 we are not only talking about incorporating a number of technologies into the different industrial or business processes, but it is a global concept that goes beyond the sum of its parts. The fundamental objective of Industry 4.0 is to improve the profitability and competitiveness of companies by incorporating a “digital layer” that encompasses all aspects of production and management, providing digital technologies in all links of the value chain. We talk about digital marketing, communication with customers by digital means and chatbots, the application of Artificial Intelligence in the different aspects of management and production, 3D simulation and digital twin, predictive algorithms, machine learning, Internet of Things, Big Data and many other tools that interact cooperatively, both with each other and with humans, simplifying decision-making and giving the company greater capacity to respond and to take advantage of resources.

Internet of Things (IoT): The use of broadband connectivity (fiber optic, 4G, 5G) by machines and objects to communicate directly with each other, without having to go through human operators. This ranges from autonomous vehicles to communication between robots, cobots, sensors, data servers or industrial applications. The distributed architecture of the Internet ensures smooth communication between machines without the need for large central servers, so that each machine or object communicates with whatever needs it, when it needs it, either to receive or to transmit data.

Artificial Intelligence (AI): Computing systems based on algorithms and decision trees that are able to mimic (not replace) certain human reasoning capabilities in certain situations. For example, AI is used in the cameras of many mobile phones to determine by themselves the best level of light, brightness, color intensity and focus for a photo, based on the elements that appear in it and the environmental conditions. Artificial Intelligence is a technology designed to cooperate with humans on decisions that machines can make more quickly and efficiently. In addition, AI systems can “learn” to some extent from experience, allowing them to improve their performance over time.

Chatbot: An application of Artificial Intelligence and Industry 4.0 to customer relationships. It allows customers to establish a conversation with a machine, either by voice or through written chat, using natural language that the machine is able to recognize and answer. It reduces waiting times, guides users when they have a problem at any time of day, and makes a 24/7 service possible at an affordable cost. The virtual assistants of the big technology companies, such as Siri, Alexa or Google Assistant, are examples of Artificial Intelligence applied to chatbots.
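Commercial chatbots rely on real natural language processing, but the basic loop, recognizing something in the user's message and returning an answer, can be sketched with simple keyword rules. All rules and replies below are invented for illustration:

```python
# Minimal rule-based chatbot sketch: keyword matching over user
# input. Keywords and canned replies are purely illustrative.
RULES = {
    ("price", "cost"): "Our plans start at 30 EUR/month.",
    ("hours", "open"): "We are available 24/7.",
    ("human", "agent"): "Transferring you to a human agent...",
}
DEFAULT = "Sorry, I didn't understand. Could you rephrase?"

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in RULES.items():
        if any(k in text for k in keywords):
            return answer
    return DEFAULT

print(reply("What are your opening hours?"))  # We are available 24/7.
```

A production assistant replaces the keyword table with an intent classifier, but the request-match-respond structure stays the same.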

Cobots: Short for “cooperative robot”. These are industrial robots designed to interact with humans and work as their assistants, unlike large traditional industrial robots, which must operate in isolated environments. In addition, cobots can cooperate and communicate with each other in a similar way to a swarm of bees (something already applied on many logistics platforms), so that they do not interfere with each other or with the humans around them.

Digital Twin: A digital twin is a virtual copy of a real environment. It can be applied to production lines, factories, complete companies, buildings, cities, etc. The digital twin presents, through virtual or augmented reality, the data provided by a multitude of information sources (machines, sensors, computers, humans, etc.) and allows users to interact with them, both in real time (visualizing data by “touching” an element with a gesture in a 3D virtual reality environment) and by simulating different scenarios (what happens if I move elements around, if I increase the production rate, which machine I should replace, where the bottlenecks will appear…). Its applications are almost endless.
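As a toy illustration of the “what if” side of a digital twin (and nothing more than that; the station names and cycle times below are invented), even a tiny model can find the bottleneck of a line and predict the effect of replacing a machine:

```python
# Toy "what if" simulation: given the cycle time (seconds per unit)
# of each station on a line, the slowest station is the bottleneck
# and caps the hourly throughput of the whole line.
def bottleneck(line: dict) -> tuple:
    station = max(line, key=line.get)   # slowest station
    return station, 3600 / line[station]  # units per hour

line = {"press": 40, "welding": 55, "paint": 30, "assembly": 45}
station, rate = bottleneck(line)
print(f"Bottleneck: {station}, max throughput {rate:.0f} units/h")

# Scenario: replace the welding robot with a faster one.
line["welding"] = 35
print(f"New throughput: {bottleneck(line)[1]:.0f} units/h")
```

A real digital twin models queues, breakdowns, staffing and logistics, but the principle is the same: change the virtual copy, read off the consequences, and only then touch the real plant.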

Machine learning: The ability of machines or information systems to “learn” from experience. This learning can be directed by humans who subject the system to training, can be automatic, or can be based on “trial and error”; machine learning systems typically use all three forms to a greater or lesser extent. The self-driving car is the best example: first it has to be trained to recognize road markings and traffic signs; then it has to learn on its own in real situations (supervised by a human operator), asking when presented with something it does not understand; and finally it is allowed to travel in automatic mode to accumulate experience that would be very complicated to program in advance (interaction with pedestrians, unforeseen behavior of other vehicles, etc.).

Big Data: Big Data is the name given to the exploitation and cross-referencing of large amounts of data from different sources to obtain results that were not evident to the naked eye, or entirely new results. Most companies have a wealth of information that they do not exploit properly. Beyond its commercial side and customer relationships (something large technology corporations already do to offer us deals tailored to our needs or interests), Big Data enables such interesting things as optimizing stocks, adapting production rhythms, predicting market behavior or improving preventive maintenance, among many other applications.

Virtual Reality and Augmented Reality: These are two types of interface for humans to communicate with digital systems. While Virtual Reality provides a completely digital environment that we can “enter”, for example using 3D glasses and gesture recognition, Augmented Reality uses electronic devices to overlay a digital layer on the real world, so that we can access data “overprinted” on what we are seeing. Neither technology is new, but it has taken the development of ultra-fast communication networks and an exponential increase in computing power for both to realize their full potential.

Additive Manufacturing and 3D Printing: These are manufacturing processes that do not require molds or physical patterns to make components. 3D printing, which already makes it possible to manufacture parts of almost any size and from almost any powdered or moldable material, is the prime example of this manufacturing approach, already used in aerospace, automotive, architecture and the manufacture of industrial and electronic components. The “mold” is a virtual 3D design that a machine or set of machines can manufacture on their own from raw materials such as plastics and composites, concrete or even certain metals.

Probably not all the terms related to Industry 4.0 are in this dictionary (we would need an encyclopedia), but the most commonly used ones are. In future installments of our blog we will expand it with new terms.


Machine learning: A basic tool of Industry 4.0

When we talk about machine learning, we mean, in general, the ability of certain machines and, therefore, of the computer systems that govern them, to “learn” from experience. Modern computing is based on algorithms (mathematical formulas with many factors) that can become very complex, and on “decision trees” containing instructions for decision-making of the type “if X reaches a certain value, do this” or “if A happens, do B” (to put it in a tremendously simplified way).

Machine learning techniques allow these algorithms not only to take into account the input data they receive, but also to analyze whether the final result is appropriate (according to previously set parameters) and to take those results into account the next time the algorithm runs. It is, therefore, a type of artificial intelligence.
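A toy illustration of this feedback loop (the numbers are invented, and this is not any particular vendor's algorithm): an algorithm with a tunable parameter runs, compares its result with a target, and adjusts the parameter before the next run.

```python
# Toy feedback loop: the "algorithm" multiplies its input by a
# weight; after each run it compares the result with a target and
# nudges the weight before rerunning. All numbers are illustrative.
def run(weight, x):
    return weight * x

weight, target, x, lr = 0.1, 50.0, 10.0, 0.02
for _ in range(20):
    result = run(weight, x)
    error = target - result   # was the final result appropriate?
    weight += lr * error      # take it into account on the next run
print(round(run(weight, x), 1))  # close to 50.0 after 20 runs
```

Each rerun shrinks the error, which is exactly the "analyze the result and adjust" behavior that separates a learning system from a fixed program.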

Until not so long ago, machines were limited to executing the instructions given to them regardless of the final result. In the face of a mismatch, it was a human operator who had to intervene, either on the machine itself or in its programming, to correct the problem. At most, industrial robots were prepared to stop if an error occurred or there was a failure in the input data or components. A welding robot in an automobile production chain, for example, was programmed to weld previously programmed points, but it was not able to know whether the weld was right or wrong. At best, it could stop (and the whole chain with it) if it detected that the body to be welded was misplaced. But it was not able to learn.

Machine learning-based systems are designed to learn in three different ways:

  1. Supervised learning: In this case there is “training” supervised by a human operator. For example, if I want a system to distinguish between dogs and cats, I have to provide it with data that helps it identify what a dog is and what a cat is, and tell it by means of “labels”. Example: showing it images of many different dogs and cats, each labeled “dog” or “cat”, so that when the system has enough data it can recognize a dog or a cat by itself through similarity.
  2. Unsupervised learning: In an unsupervised learning model, following the example above, we provide the system with photos of dogs and cats, but we do not tell it anything about them. The system learns to recognize that there are two different categories (although it will not know what they are called) and will be able, when we show it a photo it has never seen, to tell us whether it is A or B. If the system can connect to external databases (e.g. the internet), it will look for similarities with the images it has seen and words related to them, and end up naming them “dog” and “cat” without anyone telling it.
  3. Reinforcement learning: The system learns from experience, not only from the data provided to it (input data) but from whether its decisions have been accurate or wrong (output data). Wrong decisions “penalize” the system so it does not repeat them, while right decisions reinforce it.
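The supervised example can be sketched as a nearest-neighbour classifier. The labeled examples are described here by two invented numeric features (weight in kg and height in cm, stand-ins for what a real system extracts from images); a new animal receives the label of its most similar training example:

```python
import math

# Toy labeled training data: (weight_kg, height_cm) -> label.
# The feature values are invented purely for illustration.
training = [
    ((30.0, 60.0), "dog"),
    ((25.0, 55.0), "dog"),
    ((4.0, 25.0), "cat"),
    ((5.0, 23.0), "cat"),
]

def classify(sample):
    """1-nearest-neighbour: return the label of the closest example."""
    _, label = min(
        (math.dist(sample, feats), label) for feats, label in training
    )
    return label

print(classify((28.0, 58.0)))  # dog
print(classify((3.5, 24.0)))   # cat
```

Real image classifiers learn their own features from pixels, but the underlying idea, label new cases by similarity to labeled ones, is the same.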

The role of Machine Learning in Industry 4.0

The ability to learn is already present not only in industrial robots but in many computer applications: Google and Facebook are a good example of how, through continuous exposure to user searches, preferences and interactions, they are able to build a “robot portrait” of each of us and our tastes. Although we do not tell them, they know our age, our sex, what we like to eat and drink, our favorite trips, who we like and who we do not, and hundreds (if not thousands) of other data points that are used to show us, both in search results and in advertising, what we are really interested in seeing.

Obviously, Google's or Facebook's algorithms are extremely complex and require enormous computing power. However, industrial processes (both in manufacturing and in services) take place in much more controlled environments, where the input data is not as numerous. This simplifies the algorithms and decision trees and requires much less computing power, making them accessible to many companies in their everyday applications. For example, a robotic warehouse is already able to decide, depending on the dimensions and volume of the object to be stored, the most efficient way to store it while taking up as little space as possible. This allows more products to be stored in less space, something already applied not only by factories but also by retail establishments that handle many different references, such as pharmacies.
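The warehouse decision described above is, in essence, a bin-packing problem. A classic heuristic, first-fit decreasing, gives the flavour of how such a system reasons (the shelf capacity and item volumes below are invented):

```python
# First-fit-decreasing heuristic: sort items by volume (largest
# first) and place each one on the first shelf where it still fits.
# Capacity and volumes are illustrative units.
def pack(volumes, shelf_capacity):
    shelves = []  # each shelf is a list of stored item volumes
    for v in sorted(volumes, reverse=True):
        for shelf in shelves:
            if sum(shelf) + v <= shelf_capacity:
                shelf.append(v)
                break
        else:  # no existing shelf has room: open a new one
            shelves.append([v])
    return shelves

items = [4, 8, 1, 4, 2, 1]
print(pack(items, 10))  # → [[8, 2], [4, 4, 1, 1]]: two shelves
```

Industrial systems add learned corrections (which items are requested together, which are fragile), but a simple heuristic like this already beats storing items in arrival order.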

Sensors are the key

A machine learning-based system cannot learn if it does not have input data, and it is no use having an intelligent system if we have to feed it data manually. That is why industrial automation requires sensors of very different types: cameras that recognize shapes and objects, location and position sensors, pressure and temperature sensors, lidar sensors (for example in autonomous vehicles) and a long etcetera. These sensors are the “senses” of a machine learning-based system.

All these auxiliary systems that enable the proper operation of an intelligent system are not only already available, but have seen a very significant reduction in cost in recent years. This, along with advances in connectivity, makes the application of machine learning possible outside the select circle of large corporations.

What benefits do these systems bring to a company?

Many and very diverse. Compared to traditional automatic systems, machine learning-based systems have proven to be more versatile and adaptable; they dramatically reduce the error rate, are able to interact in environments where there are humans and other machines, and, above all, can efficiently perform tasks that were unthinkable for a robot only a few years ago. All this makes them a key part of the productivity increase that results from implementing Industry 4.0.


Digital twin: a revolution within reach of all companies

In our experience attending talks and conferences on digitization and Industry 4.0 and being speakers in many of them, we have often encountered a certain perception among medium and small companies that “this is not for me” or “my company is not prepared” when we talk about the digital twin as a key technology for the nearest future.

There is a widespread perception that identifies new technologies with complexity, long implementation times and high costs. Obviously, we are not going to present the implementation of any disruptive technology as a bed of roses; it would be absurd to do so. But it is no less true that these technologies are becoming more mature, their implementation times are shorter and the whole process is much simpler and more intuitive than it might seem at first glance.

Look at any company today, both in its production area and in all its departments, and compare it to how it worked 15 years ago, when phone and fax were the kings of the office and production automation was much more limited. It is true that many companies have fallen by the wayside, but the average company today is much less dependent on the telephone and fax for making, receiving or confirming orders; logistics companies allow us to track any shipment in real time; and on the production floor there are touch screens, digital control systems and automatic machines where there were none before, to give three examples.

We are living in a time of technological revolution in which the arrival times and implementation of new technologies are accelerating. What used to take 10 or 12 years to become everyday today takes half, and those times are going to continue to drop because the locomotive of innovation is accelerating steadily.

Standing still is not an option

The economy is already fully digital. This applies to all productive sectors, from agriculture and raw materials to manufacturing and services. It is true that a few years ago we did not have the connectivity needed for digital technologies to fulfil their full potential, but that is already history, and it will be even more so with the recent arrival of 5G. Against this backdrop, doing nothing, or waiting for others to act first, means a clear (and sometimes irreversible) loss of competitiveness. We are talking about productivity improvements of up to 20%, which is no joke. In the face of the new Industrial Revolution that Industry 4.0 entails, every company is obliged to react, because those that do not will have serious difficulties competing.

Accessible digital twin: Norlean’s challenge

At Norlean we knew from the beginning that our reason for being was to “democratize” the digital twin as a basic tool of Industry 4.0; that is, to turn a very complex tool that already works in many large factories into something simpler and more accessible, even for people without engineering knowledge, but no less powerful. The result is NOA (Norlean Operations Analyzer), which is more than just software: it is a tool able to feed on data from very different sources (data that today is scattered across different applications), process it using powerful artificial intelligence algorithms, and use it to build a virtual 3D recreation of both the current situation and any scenario you want to simulate. Simply put, it allows us to move from a reactive approach (making decisions based on past results, which we can no longer change) to a predictive approach (making decisions based on the simulation of different future scenarios, knowing how the company will respond to each of them). This represents a revolution in investment planning, CAPEX optimization, layout, staff planning and, in general, all the resources needed to achieve the desired results.

The idea behind our digital twin is not to replace anything, but to add a layer on top that allows a correct, scientific interpretation of the data without the need to be an expert in industrial engineering. Our decades of experience in engineering and management optimization have allowed us to pour all that knowledge into a flexible tool, adaptable to the needs of any company or institution. We can simulate production results, but we can also simulate what will happen to a city's traffic under different scenarios, or what the energy performance of a building will be before building or renovating it, to give three current examples.

The future awaits no one. It is time to seriously consider taking a different approach to management. NOA is Norlean’s tool that makes it possible.


Cobots and exoskeletons: when robots and human beings interact with each other

Robots have been incorporated into manufacturing and logistics processes in many different sectors for many years. They can be seen in all the production chains of automotive factories, working as forklifts or autonomous loaders in large logistics centers and, increasingly, in small and medium-sized companies, performing heavy or repetitive tasks. Even many pharmacies already have an automated warehouse in which a robotic arm stores and dispenses the medicines.

However, until very recently robots had to work in isolated environments, without human interaction (beyond remote control and maintenance and repair tasks). They were large, heavy machines, capable of exerting a lot of force and with little capacity to perceive the environment in which they operate, so they are usually “caged” in a physical space where humans cannot be present while the robots operate: their movements are preprogrammed, and they could hit a human being without even noticing.

One of the aspects of Artificial Intelligence is known as “machine learning”. If until recently reprogramming a robot was a complex and time-consuming task, more and more systems are being developed that are able to “learn” by means of algorithms that allow them to correct errors and increase efficiency. Thus, the more exposed the system is to the manufacturing process, the more it learns and the more efficient it is.

There are two new elements in Industry 4.0 related to robotics, which are already operating in the most advanced factories and that will gradually spread to the rest: We talk about the cobots (abbreviation of “cooperative robots”) and the exoskeletons.

What is a cobot?

A cobot is a robot designed to operate safely in an environment where there are human beings. In fact, cobots usually work as assistants to humans in tasks that require a lot of precision. There are fixed and mobile cobots, and all are equipped with advanced sensors that allow them to continuously map their environment, detect the presence of a human being within their safety perimeter and switch off (if they are fixed) or move out of the way (if they are mobile). They are usually much smaller and lighter than traditional robots.

Their main difference from a conventional robot is that, while the robot is designed for one specific task and executes it in the best possible way, a cobot can be easily reconfigured and can even perform several different tasks according to production needs. In short, a robot is specialized and a cobot is versatile.

Cobots are not designed to replace humans, but to complement them. For example, many of Amazon's large logistics warehouses have swarms of mobile cobots that help humans prepare orders by fetching products and bringing them closer. Cobots can also be designed to cooperate with each other and stay out of each other's way. Following the previous example, the system that controls Amazon's cobots optimizes their distribution in the warehouse, so that when a product is requested it reaches the employee's hands in the shortest possible time.

Exoskeletons: Making humans stronger

Another fundamental aspect of advanced robotics is the incorporation of physical aids that increase human capabilities. These are the so-called exoskeletons: “wearable” robots that allow a human being to carry heavy loads without great effort, or to use a robotic arm superimposed on their own to exert more force or achieve more precise movements.

Exoskeletons are useful in environments where a human being must perform a task, but the task requires specific capabilities that not all humans have. This facilitates personnel management (we do not need to look for a person with exceptional strength, or a surgeon's steady hand for high-precision work), in addition to minimizing the risk of injury, making work environments safer and more comfortable for employees.

Both cobots and exoskeletons are technologies that are already available and that constitute another piece of Industry 4.0. It is not only about increasing productivity, it is also about making the work of human beings more comfortable and safer.


Smart factory: the smart or connected factory and Industry 4.0

Within what is already called the 4th Industrial Revolution, that of the automation and digitization of production, some sectors are much more advanced than others. Take car factories: thanks to the application of the latest technologies in productivity, robotization and constant innovation (the fruit of the commercial pressure the whole sector is under), European automotive factories remain at the forefront of the world productivity ranking, despite being a real logistical, productive and technological challenge in which thousands of different parts from many different suppliers must be assembled with millimeter precision. This allows the European automotive industry (and the ecosystem of companies around it) to remain competitive with factories in Southeast Asia, China or Latin America, where, despite lower labour costs, the same levels of productivity are not achieved.

The vast majority of car factories are already in a very advanced state of digitization, so they can already be called “smart factories”. Traditionally, production innovations in the automobile industry have spread to other sectors (from Henry Ford-era chain production to robotization, “Just In Time” logistics or Lean Manufacturing), so there is no doubt that smart factories are following the same process of “technological contagion”.

When we talk about a smart factory or connected factory, we mean production plants that interact in real time in aspects such as demand identification, supply chain, reconfiguration of production, maintenance, logistics and shipping (and any other element of the value chain), generating a real-time flow of information that is processed and analyzed for immediate, or even predictive, decision-making.

How does this affect my sector? How is it done?

When we talk about smart factories, these questions arise in the minds of the leaders of many companies. Obviously, answering them requires a thorough study, since every sector and every company is different. Digitization is not a substitute technology, but an additive one. It is not about replacing thinking heads with Artificial Intelligence, but about combining both into a mix that generates value. And today we are not only talking about immediate monetary value, but also customer value, environmental value and corporate social responsibility value.

First, we must accept that the customer has changed the way they relate to industry. To a greater or lesser extent, today’s customer uses mobile technologies, spends much less time in the office, and is permanently connected. At the commercial level, the predominant trend is the omnichannel (or multi-channel) customer, for whom we want the shopping experience to be similar through any of the channels they use. This poses a challenge in terms of production flexibility and the ability to adapt to rapidly changing demand. Going back to the automobile example, manufacturers must launch new models every year if they do not want to miss the innovation train, something unthinkable 20 or 30 years ago.

Therefore, the technological evolution of industrial production must adapt to these new requirements, and it can only do so competitively by incorporating technologies that take this flexibility into account. These technologies are the following:

  • Advanced sensorization. For a smart factory to work, it must have sensors at every step of the production process, providing real-time information and even inspecting both the product and the machinery to detect and correct faults without stopping the production lines. We are talking about the Internet of Things (IoT): a system in which the machines themselves are in constant dialogue with each other while generating a flow of data that allows an advanced degree of monitoring and control. And not only traditional robots, but also autonomous systems connected to the factory yet with individual decision-making capabilities.
  • Interoperability and common languages. The components of an IoT network in one or more factories (which may be located in different places) must share a common communication standard that enables dialogue and efficient data delivery. The coexistence of different standards and protocols for communication and data management tends to create “watertight compartments” within companies that become sources of inefficiency and lost productivity.
  • Cybersecurity. A smart factory necessarily has to have advanced cybersecurity and backup systems, as well as adequate risk coverage, to ensure the continuity of operations in the face of a cyberattack. Let us not forget that the large majority of attacks target companies.
  • Big Data and Artificial Intelligence. The large amount of data generated requires adequate processing and exploitation. Big Data technologies and Artificial Intelligence algorithms give company managers all the information they need at all times without having to ask for it. Artificial Intelligence is one of the keys to flexible production, since the factory can adapt on its own to the productive circumstances of each moment, from generating orders to suppliers (or choosing among several different suppliers) to fully optimizing the production process and controlling logistics.
  • 3D modeling and simulation. Virtual Reality and Augmented Reality are the technologies that let all this data be viewed intuitively and in real time. Here, the creation of a digital twin of the factory is a key tool, because it allows the necessary data to be overlaid on reality (in the case of Augmented Reality) or lets you dive directly into a digital model (Virtual Reality) with which you can interact naturally, without sitting at a keyboard or having deep engineering knowledge. In addition, digital twins allow simulations of different scenarios using real data, which takes us from forensic analysis (things that have already happened and cannot be changed) to predictive analysis (things that are going to happen and that we must adapt to). Likewise, digital twins are an invaluable tool for optimizing investments in new machinery, expansions and even new factories.
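As a miniature illustration of the sensorization and Big Data points above, the sketch below monitors a stream of sensor readings and flags values that drift outside the expected band. It is a generic control-chart-style check, not any specific IoT platform; the temperature values and thresholds are invented for illustration.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, k=3.0):
    """Flag readings that deviate more than k standard deviations
    from the mean of the preceding window (a basic control-chart rule)."""
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            alerts.append(i)  # index of the suspect reading
    return alerts

# Hypothetical temperature readings from a machine sensor (in °C)
temps = [70.1, 70.3, 69.9, 70.2, 70.0, 70.1, 85.7, 70.2]
print(detect_anomalies(temps))  # the spike at index 6 is flagged
```

In a real plant this logic would run continuously against the IoT data stream, feeding alerts into the maintenance system rather than printing them.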

Yes, but this whole smart factory thing is the future. Or not?

Not at all. All of these technologies already work and are operational in many factories. We are talking about the present, the here and now, of a train that no industry should miss. Obviously, each company will require a different degree of digitization, but the digital transformation of companies cuts across all sectors. Companies have to digitize because the customer already has. In addition, it is the only way to achieve the productivity improvements (on average around 20% of the income statement) that a large number of industries need to tackle right away if they want to keep competing in the market.

If digitization affects all sectors of society, the productive sector obviously cannot stay out of it. In fact, the European countries with the highest degree of robotization and industrial digitization (as is the case of Germany) are those that have created the most jobs in recent years and those that withstood the crisis most robustly. The figures show that robotization does not destroy jobs: it transforms them and, in fact, creates net employment. The train is at the station and its departure has already been announced; it is time to get on board.


Technology trends in companies for 2019

The year that has just begun will see companies continue to advance in the technological revolution in which they are already immersed. The maturation of many technologies, together with their falling cost, will allow many more companies to start or advance along the road to Industry 4.0, something that until now was mainly within the reach of large corporations. In this regard, we believe that 2019 will be the year of the “democratization” of technologies applied to improving productivity and, consequently, the income statement. According to different specialists, these are the most disruptive technologies that we believe will land most strongly in the business world:

  • Sensorization: Although industrial machinery in many cases already has remote two-way connectivity (which allows centralized monitoring and control), the generalization of technologies such as RFID tags, biometric identification and high-precision geolocation (combining the current GPS network with the new, much more accurate European Galileo network), together with the falling cost of all kinds of sensors, will provide an important flow of information that can be managed by autonomous systems. We are talking about the Internet of Things (IoT), in which all physical systems capture and receive information.
  • Machine Learning: The combination of large amounts of data (Big Data), provided by sensors and by analysis of markets, customers and sales, with Artificial Intelligence (AI) algorithms capable of analyzing and reacting to different scenarios in real time, will allow both machines and people to be more effective in their tasks (and therefore more productive). Improving productivity means being more competitive, so companies will have to pay close attention to this technology so as not to miss the train. The application of the different technologies of Industry 4.0 allows productivity improvements that generally range between 10 and 25%. The key to competitiveness lies in properly integrating the different technologies so that they function as a system (or rather, an ecosystem) in which each technology brings a benefit that the others can leverage. But there always has to be a system “conducting the orchestra”, and this can be done very efficiently if we integrate digital neurons with human neurons so that each does what it does best.
  • Digital Twin: Consultancies as prestigious as Gartner do not hesitate to cite the digital twin (a term originally coined by NASA to define a virtual replica of a real environment, whether a building, an assembly line, a warehouse or even a city) as one of the major players in the technological revolution already underway. Digital twins let you simulate the behavior of machines, people, vehicles or any other element of a company’s production chain, model different scenarios and calculate, by means of algorithms, the most favorable configuration in each of those scenarios. They are therefore an essential aid for layout and investment decisions, staff planning and the optimization of all types of resources, including preventive maintenance and logistics.
  • Immersive technologies: Why interact using a keyboard if we can do it with gestures or voice? Why analyze data tables when we can view them in a 3D digital environment? Why resort to screens when we can use viewers that present information superimposed on reality? None of these technologies is a great novelty: they have been around for a long time. What is new is that, to be truly effective, they need computing power that until recently was out of companies’ reach. That is no longer the case today, thanks to miniaturization, falling costs and technologies such as the distributed computing mentioned below. All of this already makes it possible to add a “digital layer” to any business process, so that decision-makers know they are working from real-time, fully reliable data, with all the information they need at all times.
  • Distributed computing (grid computing and cloud computing): Just as the World Wide Web standard was born at CERN in Geneva to transmit data efficiently between machines, the start-up a decade ago of CERN’s LHC collider (whose detectors produce on the order of 30 GB of data per second) required the design of a new computing model, called grid computing, in which that data is analyzed in a coordinated way by hundreds of universities and research institutes around the world, since it was virtually impossible (and in any case exorbitantly expensive) to have a single machine capable of analyzing such a large stream of data. Hence the idea of “distributed computing”, which is strongly linked to the rise of Big Data. A company can efficiently use all the computing resources available on its network, and even contract external services. We thus move from distributed (cloud) data storage to distributed processing of that same data.
  • Blockchain: Blockchain (literally, “chain of blocks”) is the technology developed for the creation of bitcoin and virtually all cryptocurrencies. It borrows ideas from distributed computing in such a way that transaction security is guaranteed by a distributed, and therefore unalterable, system: each node keeps an encrypted copy of the transaction record, and it is impossible to modify all the nodes at the same time. Banks, finance startups (fintech) and any other system requiring a secure and inviolable register of information are already adopting blockchain technology. We are therefore talking about “smart contracts” and other secure systems for registering and transmitting information.
  • Smart spaces: As with smart cities, 2019 will consolidate the trend towards smart factories, in which the space itself is endowed with “intelligence” and autonomous machines, manual machines and people coexist in it. The smart factory can order production flows at any given time according to needs, reorient priorities and manage information bidirectionally, so that each person and machine is where it needs to be, when it needs to be there, doing what is needed to keep productivity at optimal levels.
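The tamper-evidence that the Blockchain point describes can be illustrated in a few lines: each record is linked to the previous one through a cryptographic hash, so altering any block breaks the chain. This is a deliberately minimal sketch (no distribution across nodes, no consensus protocol), with invented order data.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Link a record to the previous one by hashing its content
    together with the previous block's hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def chain_is_valid(chain):
    """Recompute every hash; any tampering breaks the links."""
    for i, block in enumerate(chain):
        payload = json.dumps({"data": block["data"], "prev": block["prev"]},
                             sort_keys=True)
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("order #1: 100 units", "0" * 64)]
chain.append(make_block("order #2: 250 units", chain[-1]["hash"]))
print(chain_is_valid(chain))             # True
chain[0]["data"] = "order #1: 999 units"  # tamper with history
print(chain_is_valid(chain))             # False
```

In a real blockchain the same check is performed independently by every node, which is what makes rewriting history impractical.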

Artificial Intelligence: Science applied to industry and decision-making

When we talk about artificial intelligence, we all picture Hollywood movies, from the evil Skynet of Terminator to the no less evil Matrix. It must be said that these kinds of sci-fi films are much closer to fiction than to science.

So what do we mean when we talk about artificial intelligence? We are talking, basically, about mathematics. Computers are terribly efficient at performing calculations at speeds unimaginable for a human being. Artificial intelligence is simply the application of very complex mathematical procedures (algorithms), executed by computers (or rather, by microprocessors), to any area in which many calculations need to be performed very quickly.

A practical example that many of us already carry in our pockets is the smartphone, whose camera uses artificial intelligence to automatically adjust its parameters based on the lighting, whether people or objects appear, whether they are moving, and so on, in order to obtain the best picture from among all the possible combinations.

Artificial intelligence has been driven by the exponential increase in the computing power of microprocessors. Algorithms that only a few years ago could run only in supercomputing centers are today within reach of the computers or servers that any company can own.

Applying artificial intelligence to enterprise decision-making

One of the most interesting aspects of artificial intelligence is its predictive capability. Virtually all large financial market operators use this technology to predict market movements in near real time: computers manage securities portfolios and, based on the data available to the algorithm, decide whether to buy or sell those securities.
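As a toy illustration of this kind of automated decision rule (emphatically not a real trading strategy), the sketch below compares a short-term and a long-term moving average of a price series and emits a buy, sell or hold signal. All figures and window lengths are invented.

```python
def moving_average(prices, n):
    """Average of the last n prices."""
    return sum(prices[-n:]) / n

def trading_signal(prices, short=3, long=6):
    """Toy decision rule: buy when the short-term average rises above
    the long-term average, sell when it falls below, otherwise hold."""
    if len(prices) < long:
        return "hold"  # not enough history yet
    s = moving_average(prices, short)
    l = moving_average(prices, long)
    if s > l:
        return "buy"
    if s < l:
        return "sell"
    return "hold"

# Hypothetical price series with a recent upward move
print(trading_signal([100, 101, 99, 100, 103, 106]))  # prints "buy"
```

Real market-making algorithms are vastly more sophisticated, but the structure is the same: data in, decision rule, action out, at machine speed.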

Artificial intelligence is one of the basic pillars of Industry 4.0. Until now, decision-making was mainly based on “forensic” analysis of what had already happened. We refer to the usual strategic meetings in which the results of the month or quarter are analyzed and, from there, decisions are made to plan the next period. The big drawback is that projections into the future are being made from things that have already happened, trusting that the trend will hold or change on the basis of a limited amount of data, and almost never in real time, because doing so is very complex.

However, if we know that computers are better than humans at data analysis and decision-making, why not take advantage of that? Industry 4.0 provides tools, such as robotization and production sensorization, that generate real-time data. The same is true of logistics, orders and many other parameters that are fundamental to the good governance of a company. An artificial intelligence algorithm can simultaneously handle a huge amount of different data, incorporate it into a mathematical model (the “digital twin”) that faithfully reproduces the functioning of the company, and make predictions from it, something that no human being, however expert in strategy, can do at the same level.

Does that mean we have to leave corporate governance in the hands of computers? Absolutely not. The companies dedicated to developing Industry 4.0 do not propose that scenario. What we propose is cooperation between human biological neurons and the digital neurons of machines. Strategic planning remains (and will remain for a long time) in people’s hands. But artificial intelligence is a valuable tool that, thanks to simulation techniques, allows us to predict different scenarios from a given situation with a high degree of reliability. Questions such as:

  1. When will I need to increase my production capacity, and in which areas of my company?
  2. How far should I increase that capacity (or reduce it, if that were the scenario)?
  3. Is my company properly sized in terms of personnel and equipment to cope with the strategic plan?

… and many similar ones can be answered much more easily and, above all, much more precisely thanks to artificial intelligence. And in a company, any company, neither falling short nor overshooting translates into an automatic improvement in productivity, together with significant savings in costs and investments.

Similarly, there are mathematical models for simulating demand forecasts, whose results help adjust, balance and optimize stocks while guaranteeing excellent service levels.
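A minimal example of such a demand-forecasting model is simple exponential smoothing, in which each forecast blends the latest observation with the previous forecast. This is one of many possible techniques, shown here as a sketch; the monthly figures are invented.

```python
def forecast_demand(history, alpha=0.5):
    """Simple exponential smoothing: each step blends the latest
    observation (weight alpha) with the previous forecast."""
    forecast = history[0]
    for demand in history[1:]:
        forecast = alpha * demand + (1 - alpha) * forecast
    return forecast

# Hypothetical monthly demand for a product (units)
monthly = [120, 130, 125, 140, 150]
next_month = forecast_demand(monthly)
print(next_month)  # 141.25
```

The smoothing factor `alpha` trades responsiveness against stability: close to 1 it chases the latest data, close to 0 it barely reacts. Real stock-optimization models add trend and seasonality on top of this idea.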

Until now, the application of advanced artificial intelligence was only economically justified in large corporations, given the cost of implementation. But, as we said at the beginning, the necessary technology is no longer so expensive, so more and more companies would gain in the cost/benefit equation by implementing the technologies proposed by Industry 4.0 (artificial intelligence, robotization, sensorization, simulation and digital twins, among others). And in the very near future, companies that do not will be missing the productivity train.

NORLEAN is much more than a simulation company. We are a multidisciplinary team (engineers, experts in production operations, business experts, marketing and sales specialists, data analysts, technologists, mathematicians and financial experts). We understand the direction in which technology applied to the business world is moving, and we have set ourselves the mission of democratizing access to that technology so that large corporations are not the only ones to benefit from it. Because the future is already present, here and now.


3D simulation as a productivity improvement tool

As the popular Spanish saying goes, experiments are best done with soda water (that is, where mistakes cost nothing). We cannot think of a more concise and simple description of the need for simulation tools in the industrial environment.

Simulating, in this context, means being able to test and learn in a safe environment that mimics reality as faithfully as possible, something that has been done for decades to train airplane pilots, ship captains and astronauts.

Simulation in the industrial field is not a new concept for large companies, which have been using mathematical models and algorithms for a long time to simulate different processes within their value chain.

However, these tools were until now restricted to the field of engineering, given their complexity and the deep technical knowledge needed to manage and use them, in addition to the considerable computing power required to handle large amounts of data.

On the other hand, 3D software is nothing new either in the field of product design and manufacturing, since CAD applications have been used for decades, first in 2D versions and later in 3D.

To put it one way, the bricks for building 3D industrial simulation were already available, but their high cost and the complexity already mentioned restricted their use to very specific aspects and sectors within engineering, architecture, construction and industrial design.

Industry 4.0 is, according to many experts, a new Industrial Revolution. Digitization now reaches every area of the company and is no longer the preserve of large corporations, thanks to the progressive fall in the cost of its tools and applications. Today we have more computing power and storage capacity in a laptop than we could have had 20 years ago in large servers.

If we have the ingredients and they are already accessible to everyone, the time has come to get to work.

3D simulation: Optimization of resources and savings in costs and time.

All studies on Industry 4.0 mention simulation, virtual reality and augmented reality among the basic elements of the technological revolution that Industry 4.0 entails. The creation of a “digital twin” of a company (or a building, or even a city) makes it possible to study different scenarios and, through mathematical algorithms, identify the optimal one for improving productivity while maintaining an adequate cost/benefit balance. Let us look at a practical example:

A company wants to implement robotic systems or automations to improve its productive capacity. But where in a production chain is it profitable to place robots, and how many of them? How can we know in advance, when making an investment effort in robotization (and the corresponding sensorization), whether we will fall short or, on the contrary, are overestimating our needs? That is where 3D simulation plays a fundamental role: it lets us study different scenarios and accurately calculate, given the dimensions of the physical space, logistical capacity and many other factors, the optimal point at which we maximize results while minimizing investment.
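The trade-off just described can be caricatured in a few lines of code: given assumed figures for the output gained by each additional robot (with diminishing returns) and its cost, scan the candidate counts and keep the most profitable one. All numbers here are invented; a real study would rest on a full simulation model of the plant.

```python
def output_with_robots(robots, base=1000.0, ceiling=2000.0, gain=0.5):
    """Assumed diminishing returns: each robot captures half of the
    remaining automatable output (invented, deliberately simple curve)."""
    output = base
    for _ in range(robots):
        output += (ceiling - output) * gain
    return output

def best_robot_count(max_robots=10, robot_cost=120.0, unit_margin=1.0):
    """Scan candidate robot counts and keep the most profitable one."""
    best_n, best_profit = 0, 0.0
    for n in range(max_robots + 1):
        extra = output_with_robots(n) - output_with_robots(0)
        profit = extra * unit_margin - n * robot_cost
        if profit > best_profit:
            best_n, best_profit = n, profit
    return best_n, best_profit

print(best_robot_count())  # (3, 515.0): a 4th robot would cost more than it adds
```

The point of the sketch is the shape of the reasoning, not the numbers: past some count, each extra robot adds less output than it costs, and a simulation finds that point before any money is spent.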

But this is not the only role of 3D simulation technology. A major problem many companies face is that, unless they are completely new factories, there is often a high degree of technological dispersion: various technologies coexist and produce data that, in many cases, are scattered in different places, hindering analysis and the overall view, making the task unnecessarily complex and delaying or complicating decision-making.

NOA (Norlean Operations Analyzer) is a new tool that overlays a virtual reality layer on all this data, integrating it into a single interface that can not only be viewed on a screen but can also be “entered” through virtual reality glasses.

In addition, NOA does not only simulate machines. It can also simulate people, layouts and processes, presenting them not at a click but with a simple hand gesture. The user no longer has to be a specialist to understand what is happening or to access specific data, as they can move and “walk” towards it. A totally immersive experience that puts within reach of any company a technology that was until now the preserve of large corporations. All data, even in real time given an adequate sensor network, can be superimposed on that virtual reality environment.

Creating a digital twin that is intuitive, immersive and manageable by people without deep technical knowledge is a revolution in decision-making. 3D simulation makes it possible, and NOA is living proof that the future is already here.


Digital twin, a key piece in Industry 4.0

Virtually everyone in business circles has heard of digital transformation and Industry 4.0. However, if we dig a little deeper, many people understand digitization as something purely computer-related: the elimination of paper, the implementation of ERPs and other software, communication by digital means or, at most, robotization.

When we talk about Industry 4.0 we are talking about something much deeper: a new industrial revolution that is not “for the future” but is already underway. It is a train that industries cannot afford to miss, at the risk of seeing their competitiveness permanently weighed down.

To better define what Industry 4.0 is, there is nothing better than following the example of the big players. The international consultancy Gartner, one of the world leaders in technological innovation programs and trend analysis, headquartered in Stamford, Connecticut, has defined a decalogue of basic Industry 4.0 technologies. These are technologies that are already available and applicable:

1. Artificial Intelligence (AI): Algorithms that carry out actions autonomously, learning from mistakes and maximizing the chances of success in the function or task entrusted to them. Something that until recently was reserved for large corporations like Google is today already present in many everyday objects, such as smartphone cameras.

2. Smart apps: Interactive applications that use these AI algorithms to interact with the user. Something we also have in many everyday devices (Siri and Alexa are practical examples), but which has applications in all areas of the company, eliminating downtime and improving efficiency.

3. Smart objects (Internet of Things, or IoT): As with applications, objects and machines increasingly use these algorithms, in addition to internet connectivity. In 2019 there will be more objects connected to the network than people. These objects can communicate with each other remotely, cooperate to execute tasks and obtain information directly from the network.

4. Digital Twin: The creation of “digital twins”, virtual replicas of individual production processes or of the company as a whole, makes it possible to simulate different scenarios (for example: if I want to introduce robots into my production chain, how many do I need and where is it worthwhile to place them?). The digital twin is a model applicable to factories, buildings, cities and more.
The main advantage of the digital twin is that it gives the company or organization an anticipation capacity unprecedented until now, especially when twins are developed in 3D and immersive technologies such as virtual reality or augmented reality are used. The digital twin is more than a tool; it is a new concept for managing complex systems (ecosystems) such as those found in industrial production, in the construction and management of buildings, or in smart cities.

5. Cloud to the Edge: The amounts of data generated are too large to be managed by a single server, and keeping up would require continuous investment in system upgrades and digital storage space. The cloud is the answer, both for data storage and for leading approaches such as distributed (cloud) computing.

6. Conversational platforms: A company is, above all, a group of people who need to communicate efficiently. E-mail has already shown its limitations here, which is why more and more companies adopt project management and information exchange systems that enable collaborative documents and projects and improve the effectiveness of internal communication, ensuring that it reaches the right person at the right time to make the right decision.

7. Immersive experience: Everything we can do with natural gestures, we will stop doing with keyboards. Today no one conceives of a mobile phone with a physical keyboard, and the same is happening with the rest of our computer systems. Here, virtual reality and augmented reality take center stage, allowing us to visualize and interact with the system in a much more natural and fluid way.

8. Blockchain: Blockchain technology, already being adopted by banks and insurance companies, represents a real revolution in the confidentiality, security and inviolability of transactions. Being distributed systems, blockchains are practically invulnerable to attacks, one of the growing concerns of all companies. Blockchain will become a basic element of cybersecurity in a short time.

9. Event-driven model: This constitutes the “digital layer” of the business. To be effective, modeling must be simple; if digitization makes things more complicated, it will not work. Simple graphical interfaces are therefore required, adapted to each need, so that a single touch triggers a chain of responses providing real-time information about what is happening. For this, it is vital to have a network of sensors that keeps the system informed at every moment, allowing the digital model to adapt to each circumstance.

10. Adaptive risk and trust: Security management must be continuous, based on levels of trust and flexibility. If it is too rigid it will be counterproductive, since it will slow the start-up of new processes; if risk is managed too laxly, security breaches will open up. The risk must therefore be continuously monitored, and systems, interfaces and authentications must be able to change depending on the risk level at each moment. Obviously, a company under attack must respond instantly by shielding its systems, but until now that was done manually. Gartner proposes a strategy that, based on risk-level monitoring, adapts the security systems automatically.
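Point 9, the event-driven model, can be sketched as a tiny publish/subscribe loop: sensors emit events, and registered handlers fire the corresponding chain of responses. This is a generic illustration, not any particular middleware; the machine names and thresholds are invented.

```python
from collections import defaultdict

# Registry mapping event types to the handlers that react to them
handlers = defaultdict(list)

def on(event_type):
    """Decorator that registers a handler for an event type."""
    def register(fn):
        handlers[event_type].append(fn)
        return fn
    return register

def emit(event_type, **payload):
    """Publish an event: every registered handler reacts to it."""
    return [fn(**payload) for fn in handlers[event_type]]

@on("temperature")
def check_overheat(machine, value):
    """React to a temperature reading from a machine sensor."""
    if value > 80:
        return f"ALERT: {machine} overheating at {value}°C"
    return f"{machine} normal"

print(emit("temperature", machine="press-3", value=92))
```

In a real plant, the handlers would raise maintenance tickets, reroute production or shut a line down; the sensor network described in point 9 is what keeps the event stream flowing.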

At NORLEAN we are pioneers in the creation of 3D digital twins, using immersive virtual reality technologies that allow both the simulation of different scenarios and the real-time management of the system with data from different sensors, applications and machinery. That is, they make it possible not only to visualize the entire ecosystem, but also to immerse yourself in it and interact with it. In future publications we will go deeper into our digital twin model. At NORLEAN we do not wait for the future; we are engines of technological change so that the future becomes a reality today.
