BASIL Networks BN'B

The BASIL Networks Public Blog contains information on Product Designs, New Technologies, Manufacturing, Technology Law, Trade Secrets & IP, Cyber Security, LAN Security, and Product Development Security.

Internet of Things (IoT) Security, Privacy, Safety - Platform Development Project Part-10

saltuzzo | 04 May, 2018 12:19

Part 10: IoT Core Platform Development - Product Development Management
The IoT Embedded Core Platform Documentation Management Introduction

"Learning is not attained by chance, it must be sought for with ardor and diligence." - Abigail Adams

Part 1 Introduction - Setting the Atmosphere for the Series (September 26, 2016) 
Part 2 IPv4 & IPv6 - The Ins and Outs of IP Internet Addressing (November 11, 2016) 
Part 3 IPv4, IPv6 DHCP, SLAAC and Private Networks - The Automatic Assignment of IP Addressing (November 24, 2016)
Part 4 Network Protocols - Network, Transport & Application (January 10, 2017)
Part 5 Network Protocols - Network, Transport & Application -Continued (Aug 17, 2017)
Part 6 Network Protocols - Network, Transport & Application -Continued -Ethernet Protocol (Sept 21, 2017)
Part 7 Network Protocols - Network, Transport & Application -Continued -The CRC-32 and Checksums (Nov 23, 2017)
Part 8 IoT Core Platform - SoC Core Processor of Embedded Systems (Jan 12, 2018)
Part 9 IoT Core Platform - SoC Core Processor of Embedded Systems-Vulnerabilities (Mar 16, 2018)

Quick review to set the atmosphere for Part 10
From the previous Internet of Things Part-1 through Part-9:

  • (Worth Repeating) - Since the beginning of this series in September 2016 there have been many hacked IoT devices using COTS embedded hardware and software, creating high visibility for security and privacy.  The current database of breaches encouraged us to present a more detailed hardware and software presentation to assist designers and educate newcomers on the new challenges of security and privacy.  Due to the complexities of today's processors we will continue to follow our technical presentation methodology, Overview → Basic → Detailed (OBD).  We will address the many sections of the Core IoT Platform separately to keep the presentations at a reasonable length.  The full details will be presented during the actual hardware, firmware and software design stages.
  • The atmosphere has been set for the Internet operation overview in parts 1 through 6.
  • The Ethernet physical protocol is the most used for communications over the Internet.
  • All communications throughout the Internet are performed as User → Router → Internet Routers → Router → User
  • According to Netcraft there are over 1.8 billion active websites on the Internet, which means a minimum of over 3.6 billion routers.
  • The basic selection of protocols for the IoT Platform has been defined.
  • The conceptual functional block diagram of the IoT Platform has been presented.
  • The basic types of CPU architecture on the market today have been presented.

What we want to cover in Part 10:
Now that the basics of the Internet and embedded processor technology have been presented, we are at the point in the series where we have to create the project design management documentation that defines the product we are going to build and market.

We will give a brief summary of how the semiconductor market has thinned out through Mergers and Acquisitions as new technology enters the market.

We will introduce the Product Design Management, Project Management, Traceability Management and Asset Management aspects of creating a product from conception to manufacturing.  The links below are the sections of this part of the series.

Let's Get Started:
A Brief Summary - Embedded Processor Selection:
Now that the short detour through vulnerabilities has been covered in Part 9, we can get back to the business of Core IoT Platform development.  For this part of the series we will acquire the data required to make hardware selections for the embedded platform development to move forward.  In previous sessions we mentioned Mergers and Acquisitions; for those who were curious why we bring a business process into a hardware and software design series, it is because M&A's have a direct impact on product designs, as we will see in this section.  Since the series started in September 2016 there have been several important mergers and acquisitions in the embedded marketplace.  There have also been many vulnerability issues directly related to the embedded market, with embedded product designs being hacked and causing servers to drop off line, and devices from toasters to fish tanks accessing the local networks and databases connected to them.

Mergers and Acquisitions are a critical part of product development due to the fact that inventories merge and components of those inventories are discontinued, many times with no replacement component.  This is not a new practice.  To give an example, the discrete semiconductor digital logic chips (SSI, LSI) of the 74xx series were manufactured by several different independent corporations.  With the introduction of the CPLD (Complex Programmable Logic Device) and the FPGA (Field Programmable Gate Array), discrete digital chip sales dropped over time, which drove Mergers and Acquisitions: Texas Instruments, Burr Brown, Fairchild, National Semiconductor and others entered the M&A process to thin out the market, while Altera, Xilinx, Lattice, Cypress and others saturated the CPLD and FPGA markets.  These same discrete digital logic semiconductor manufacturers have also entered the embedded market, mostly offering the licensed ARM processor in their own flavors to saturate the market.  There are over 5000 devices to choose from, offered by various semiconductor manufacturers, which makes it difficult to select just a few to narrow down the selection process.

The more M&A's, the more the playing field is reduced.  So why do we bring up mergers and acquisitions in a technical design series?  Simple: M&A's throughout the embedded market narrow the playing field and dominate market share.  M&A's also force the end of life of older products as their embedded processors become discontinued.  Two major players have emerged from all the M&A's for market share, Microchip® and NXP® corporations "at this time frame", and both have demonstrated by example the longevity of their products for the embedded markets.

The two companies we selected in Table 10.0 show that their main product line is embedded processors, with secondary product lines that support the embedded line.  From a supply chain point of view this puts these companies in the lead for their commitment to the embedded processor market.  Each company has its own line of embedded processors as well as the more common licensed line of ARM processors, as shown in the table below.  It is no surprise that the ARM processor licensing business model has made an impact on the industry.

32 bit Family    Microchip    NXP

ARM 32 Bit           X         X
MPC5xxx                        X
QUICC (85xx)                   X

Table 10.0  Selected Manufacturers of Embedded Processors

Both companies have a unique challenge ahead with their IDE environments.  Microchip MPLAB only supports the PIC line at this time; the Microchip ARM line is supported by the Atmel IDE.  Integrating these two IDEs would be a challenge since both require a lot of storage and have different assemblers and compilers.  All the IDEs for NXP as well as Microchip are project-oriented environments.  There are so many embedded processors to choose from, each suited to specific applications, that it becomes difficult to choose a single platform or architecture as a base platform.

The embedded processor market has specific requirements that differ from desktops and servers: embedded processors must have at least a five year, preferably ten year, shelf life, which makes Microchip and NXP a likely choice for maturity reasons alone.  The other main criterion pertains to the packaging of the selected processor.  Today many ICs are fine pitch and require a Class III PCB for reliability and quality in manufacturing.  The difference shows in prototyping: an FPBGA (Fine Pitch Ball Grid Array) requires an experienced assembly house, whereas a QFP (Quad Flat Pack) may be hand soldered and offers access to the pins for testing.  Packaging will be addressed in the selection.

The intent here is to show the reader how to plan a flexible product line that evolves by adapting to changing technology and business models over time, minimizing the TCO (Total Cost of Ownership) of a product line.  There will always be M&A's and business model changes, as well as technology changes for a faster, better embedded processor chip.  With that in mind we will present the best ways to accomplish a core platform with the longevity and flexibility to adapt to new technologies.

From our market research, the Internet of Things has brought the embedded processor market into high visibility for the next generation of technology; hence, mergers and acquisitions for market share among embedded processor manufacturers are already starting to thin out the market from the top major players.  Microchip and NXP are the major embedded processor players at this time.  Intel and AMD appear to be entering this embedded market as well; however, even though they have rolled their past processors into the embedded market, their yearly silicon turns keep them in the desktop and tablet arena, where 18 month life cycles are part of the marketing campaign.  Companies like Microchip, NXP, Altera (now part of Intel), Xilinx and Lattice have all shown greater than five year product lives and have established the confidence to be incorporated in embedded devices.

The anticipated embedded market CAGR (Compound Annual Growth Rate) is about 4% for 2018-2019.  The market size is estimated at over $200 billion, considering the 1.5 billion smart phones and 500+ million tablets per year and the 15 million cars manufactured, along with a few other general markets.  The IoT market is anticipated to be in the $20 billion range by 2020.  Companies like AMD and Intel are now entering the market with a different perspective and expect to corner a $10 to $15 billion segment with their new lines of processors; however, both companies have yet to commit to the real longevity requirements of the embedded market.  Time will tell how serious these companies are.


The Common Hardware of Embedded Processors
Of the few thousand embedded processors offered by various semiconductor manufacturers, they really do have common internal peripherals regardless of processor type: 8, 16, 32 or 64 bit.  All embedded processor chips incorporate a variety of unique integrated peripherals, and there are a few common peripherals that we will cover shortly.

The common hardware issue is how these integrated peripherals are brought out to the package pins to interface with the physical environment.  The common practice for setting up the chip's internal peripherals is a pin configuration matrix controlled by configuration registers during the initial power-on sequence, Boot Time.  Analog input and output pins for the A/D and D/A type internal peripherals are connected directly to the pins and are always shared; the digital signals are multiplexed through a matrix selection process.  Figure 10.0 below shows the common embedded processor functional block diagram and how these assignable pins play a role in our selection.  The pin select matrix and its configuration registers are unique to each embedded processor, therefore understanding the matrix and how it functions is a key performance issue with embedded architecture.  Of course market share is very important to all manufacturers, and in an attempt to lock in applications they all use a different pin assignment matrix, making second sourcing from other manufacturers difficult.

Figure 10.0  Embedded Processor Chip Common Block Diagram

The Embedded Processor Selection Criteria "Enigma":
OK, here we go.  In the previous parts we identified the protocols and presented different types of embedded CPU architecture.  We will now address this series' common environments and characterize the hardware requirements for the Core IoT Platform.  Many developers/designers characterize a specific application for their embedded project: a toaster controller, a coffee maker controller, or an HVAC system monitor each defines a fixed set of I/O functions, therefore defining the hardware for a specific application.  Our Core IoT Platform is intended to be the heart of many different applications, and therefore requires outside-of-the-box performance characteristics such that the core achieves the highest level of reuse as each application is established.

There is "no single embedded system" that will fulfill all the applications, however they all start with a core platform and grow from there to fit the application.

Technology today allows the designer to be more flexible in determining the full cost of the application.  From a business point of view, the profit margin of a product line has many variables, and one of the strongest is reuse; therefore the core IoT platform's intent is to present the best competitive margin for product manufacturers.

If you have to maintain a large inventory of different platforms, the redesign of any one of them could cost the margin for that line.  If you have a common core, a redesign not only saves money and time in the long term but also allows marketing to present a bigger, better, faster mousetrap for maintaining market share.  Attempting to configure a common platform is a challenge, however not impossible.  We will not be looking at the simple controller for a toaster or fish tank (not to be used in Las Vegas), since a single chip would handle those applications easily.  The Core IoT Platform for this series requires full Internet security and safety requirements as well as privacy control, which will exceed the majority of single chip applications today.


The Common Issues of Embedded Processors:
We discussed this before, and it is always beneficial to keep in mind when working with embedded systems: when selecting an embedded processor for a platform, it is recommended that you have a plan B processor to implement when the plan A processor is discontinued.  In today's competitive market, semiconductor manufacturers attempt to stretch a product line as far as possible, which leaves the end user less time to redesign a product or forces a lifetime buy.  The end result is to either end the life of the product line, or choose another embedded processor and initiate a redesign to keep the product line active.

Choices to keep in mind:
Choose a set of embedded processors that have the same speed, data bus width and an Extended Bus Interface capable of running the processor with extended memory, since many applications will probably require more memory.

Manufacturers of embedded processors tend to focus on a specific application arena in order to become the lead supplier in their application market, which makes it a bit more challenging to categorize features and capabilities across manufacturers.  We will only do a few here to set the platform for the cross reference table.  Keep in mind that just because an embedded processor has the same peripherals as several others does not mean it can configure those peripherals to function the same way, or have them all active in the same way.

The fact that an embedded processor is marketed for a specific application does not mean it is only applicable to that specific market area.  The same PIC32 series and ARM-32 series processors are used throughout a vast array of applications.

As stated, M&A's have surfaced two major players outside the standard IC (Integrated Circuit) manufacturers: Microchip and NXP.  Both companies hold the majority of the embedded market share and appear to be making commitments, by example, to many years of support.  Microchip has maintained its PIC line beyond the expected lifecycle, up to 12-plus years for some of its PIC lines.  NXP published a 10 year embedded device lifecycle guarantee, which is a first for public advertising.  Table 10.1 below lists just a few of the players in the embedded market that handle the 32 bit processors we are interested in for this series' core IoT platform.  Keep in mind that this does not cover the entire embedded market; however, it is a sufficient number of players for a short list of 32 bit processors.

Manufacturer          Processor   Clock (MHz)   Boot FLASH   Program FLASH   SRAM    DRAM Ctrl   Ext. Bus
Microchip             PIC32MZ     100-200       >128K        >256K           >128K   Yes         Yes
Microchip             Cortex A    180-400       >128K        >256K           >128K   Yes         Yes
NXP                   Cortex M    180-400       >128K        >256K           >128K   Yes         Yes
STMicro               Cortex A    100-300       >128K        >256K           >128K   Yes         Yes
Cypress Semi          Cortex M    80-200        >128K        >256K           >128K   Yes         Yes
Texas Instruments     Cortex A    300-1000      >128K        >256K           >128K   Yes         Yes
Maxim Integrated      Cortex M    80-200        >128K        >256K           >128K   Yes         Yes
Renesas Electronics   Cortex A    400-1500      >128K        >256K           >128K   Yes         Yes
Analog Devices        Cortex M    100-240       >128K        >256K           >128K   Yes         Yes

Table 10.1  Embedded Processor Manufacturers Short List

As we stated prior, not all embedded systems allow all of their integrated peripherals to be selected at one time, the "Wild West of Embedded Processors".  The "short list" of companies we selected from for embedded processors is shown in Table 10.1 above.

When we look at the embedded market, it appears that every discrete IC manufacturer has jumped on board with their own flavor of an ARM series processor, since it is just a license agreement away.  Searching distribution for embedded processors yields a selection of a few thousand chips from various manufacturers.  As we stated, the issue in the embedded processor market is longevity for a selected processor chip; manufacturers of discrete semiconductor ICs constantly fine tune the profit wheel and discontinue parts when they feel a part will no longer sustain the desired product growth.

It takes time for a product line that uses several ICs to recover its manufacturing costs and show a profit.  If a redesign becomes mandatory too soon, the profit margin shrinks, and in many cases the manufacturer is forced to either maintain the product line to sustain some form of market share and manufacturer integrity, or discontinue the product line.  Will a company keep a product line if it does not sell enough or yield the profit margin they want?  The answer is -- wait for it -- HHMmmm "NO".  The common questions designers have to keep in mind when choosing an embedded processor for a longevity product are the following:

  • The initial release date of the device - this starts the life cycle clock
  • The revision history of the device - This shows the support of the device
  • The roadmap of the device family - this gives an idea of how long it will be active
  • End of Life (EOL) of the device - the LTB (Last Time Buy) notice.

Mergers and acquisitions pitched a curve ball into all promises of product longevity once the inventory merging process begins, usually within the first full year after the merger.  The new questions to add to the design review list and answer are:

  • Is the device part of a Merger & Acquisition inventory?
  • Does the device compete with the company's standard line before the M&A?
  • Does the M&A roadmap (if published) still list the device?

In any event, it is a fair assessment not to use the products of an M&A until the dust settles, or until some type of guarantee is established, in order to determine whether the selected device will have some form of stability for a longevity product.  If the quantities are high enough, a separate contract guaranteeing the lifecycle of the product could be negotiated.


The Peripherals "Conundrum":
Embedded processors have several common peripherals integrated on the chip that are industry mature and relatively easy to use, as long as they do not interfere with other peripherals needed for the same application; hence the peripheral conundrum.  The entire line of embedded controllers on the market today, yes, I stated "entire", allows some type of peripheral configuration methodology, usually pin selection via configuration registers.  The conundrum arises when the selectable pin assignments conflict: two different peripherals share the same pin through the configuration register matrix and both are required for the application.  Programmable pin assignments are generally unique to each manufacturer, assiduously focused on maintaining market share and eventually locking users into that manufacturer's product line.  This unflagging pin assignment effort is true of the FPGA and CPLD programmable logic manufacturers as well, and it is all part of competition and free market enterprise.  The final effect is a sole source device, which will always be an issue in a supply chain environment where silicon rollover may easily determine an end user's product life if an alternate source is not available.  In order to have a plan B in advance, we will look at the common peripherals within the embedded market as well as the common processors to create a reuse plan for our core platform.

For this series' core IoT platform we have created a list of common peripherals, shown in Table 10.2, that are integrated on many embedded processor chips.  Not all peripherals are integrated in all the selected embedded processors, so some would require external peripheral configurations to complete the common platform.

Peripheral         Description
Timers             Watchdog for program runtime stability
Counters           Standard binary counters synced to the processor clock frequency
SPI                Serial Peripheral Interface
SQI                Serial Quad Interface
I2C                Inter-Integrated Circuit interface
Serial             RS-232/422 type interface
Boot FLASH         Power-on initialization program FLASH (average 256K)
Program FLASH      Runtime application program FLASH (average 2048K)
SRAM               Static RAM for runtime R/W data (average 16K - 512K bytes)
Digital Ports      General digital I/O ports, 8/16/32 bits
DRAM Controller    User configurable (32 - 1024 Megabytes)
System BUS         The system data bus, Address/Data

Table 10.2  IoT Platform Embedded Processor Desired Peripherals

When integrating all these peripherals into a single chip, it becomes obvious that the selected package has a direct impact on the number of peripherals that may be active for an application at any one time; running out of pins is a common issue with embedded processor ICs.  This is addressed by programmable pin assignments through a selection matrix controlled by configuration registers, giving the best performance for the peripherals selected for an application.  Where it becomes interesting is that just because the core processor is a PIC, ARM, or another RISC or CISC type processor does not mean that other manufacturers supplying a similar, or for that matter an ARM, processor core with the same footprint will assign the same pins or the same peripherals to the configuration registers; hence, again, "the conundrum".

Research has shown that a few embedded processors fit the recipe for this series' core IoT platform: the NXP 68xxx series (originally the Motorola/Freescale line, some of which will also be discontinued), the ARM 32 bit Cortex A and M series, and Microchip's PIC32MZ series.  Seasoned (older, more experienced) engineers, author included with prejudice, were mentored in the art of design by starting at the "output, problem or need" to be solved and working toward the input of a design; analyzing from required results back to resources ensures that the resources will address the required results.

With all the integrated peripherals on a single embedded processor chip today, it appears on the surface that the chip could address every possible application known to mankind.  In reality, as we stated, only a few peripherals may be used at any one time, since pin count and pin sharing configurations only allow certain peripherals to be part of the real world.  This gets more complicated when we attempt to second source a typical ARM embedded processor chip, only to find that not all ARM processors in an LQFP-176 pin footprint have the same pin assignments.  Since redesign is a high visibility issue today, we would like to keep any form of redesign, as well as the Total Cost of Ownership (TCO), to a minimum.

The short list of embedded processor manufacturers shown in Table 10.1 all have several peripherals in common and are available in common package footprints.  All of the processors have programmable function select registers that allow the data and address bus to be brought out to a select pin assignment defined by the manufacturer, configuring the embedded system as a straightforward CPU chip for adding external components, peripherals and memory.  Figure 10.1 shows the typical mind map process flow we use at BASIL Networks, PLC to determine the package format, common peripherals and embedded processors that may be switched out easily with the least amount of hardware change.  To continue the series we selected the LQFP 176 pin package to start.  There are FPBGA packages; however, if the application is to be used in harsh environments, the mechanical integrity of an FPBGA with 0.8mm balls and over 300 pads becomes an environmental quality concern (smart phones etc. not included, since their life span is expected to be shorter).

Figure 10.1  Embedded Processor Selection Map


The Memory "Dilemma":
The limitations within embedded applications begin to compound when the application leaves the simple washing machine, toaster or coffee maker arena.  I was going to add refrigerators, until the latest commercial for a refrigerator equipped with a full size touch screen and a database that records everything inside, creates a shopping list and sends it to your smart phone.  It is easy to get a "clouded" vision of embedded processor applications today.

The standard embedded controller on average comes with 2048K of program FLASH and 512K of static RAM, which will generally handle single task applications; however, since we are looking at the more complex industrial and commercial arena, this configuration will find itself with limitations.  Some have a Memory Protection Unit that incorporates some form of virtual memory, with constraints on the number of virtual tasks it may perform and the size of the virtual Translation Lookaside Buffer (TLB).

So, as all good technologists, we look at extensions to handle the limitations: just add external memory, simple right?  Not as simple as one may think.  In fact, adding external memory to an existing embedded controller has requirements, compromises and limitations.  The first compromise is pin assignments: address and data pins take up pin resources, especially if you are looking at 32 bits of data and 24 bits of address.  Even multiplexed, it is still 32 data bits plus several control lines, and multiplexing takes clock cycles and reduces performance.  Limitations and compromises come into the picture with how to access the memory and what we want the memory used for: program memory that requires access by the CPU, API storage EEPROM, or RAM type data accessed through some other form of I/O.

FLASH Memory - Boot, Program, Storage:
To show how sparingly we used memory in the past: in the 70's a full Disk Operating System (DOS) actually ran in 4096 bytes, and the size of the hard disk was 256K bytes.  Today "Hello World" with a C compiler takes 16K bytes, and that is with the links to the Operating System resources it runs on.  It only stands to reason that as applications evolve, so will the memory requirements.

Serial memories such as SPI or I2C are limited in size and speed, and are generally inefficient for storing data or program overlays due to the single serial I/O line.  SQI (Serial Quad Interface) is a bit more efficient and allows larger FLASH sizes as well; however, not all embedded processors incorporate the SQI interface.  SQI is also not a choice for program memory, since communicating with the CPU requires a direct path and the data would have to be loaded into static RAM to be used by the CPU.

The Boot FLASH residing internally on the embedded chips in the above list is of adequate size, greater than 128K bytes, on all of them.  For our IoT Platform, FLASH is not efficient for fast read/write data storage, for the simple fact that all FLASH has a limited erase/write life cycle.  For permanent storage that does not require fast retrieval it is sufficient.

RAM Memory- Static, Pseudo, Dynamic:
Random Access Memory will be required to read and save real world data temporarily until it can be transferred to data storage via some type of communication protocol.  Depending on the application, this could range from 16K bytes for small transfers up to 32 Megabytes and more for larger applications.  It is advisable to use DRAM if the application goes beyond 4096K bytes over the normal internal embedded processor memory, mainly because DRAM uses fewer pins for memory addressing and control.  Selecting and incorporating external DRAM is an interesting issue, since it requires a DRAM controller that many embedded processors do not incorporate.  So we will look at embedded processor ICs that incorporate a DRAM controller and the resources used to attach external memory.


Common Embedded Processor Platform:
OK, now that we have addressed some of the common characteristics of embedded processor chips, it is time to decide how to implement these common features.  At this point in the development process we have gathered enough information to answer the question of how to separate this development into sections that allow maximum reuse and minimum TCO.

Experience, and errors, over the years give us a more educated choice of design direction.  Managing the design is very important, and the initial approach has a direct influence on the finished product.  The IoT Platform will be used on-line, and security, safety and privacy will be a critical part of the development; therefore traceability has to be maintained throughout the design process.

In order to have a common IoT platform, the development direction is to separate the hardware into categories: the embedded core processor, external memory and the peripherals.  This directs the development to a Modular Design Methodology (MDM), which has been used at BASIL Networks for the past 30 years and has proved effective and efficient for both hardware and software development.  Isolating the embedded processor chip on a single PCB and defining a standard pin assignment through a highly reliable connector platform provides a stable pin assignment for communications between processor and peripherals, and addresses the concern of various processor families and footprint pin assignments.

The mezzanine board form factor approach incorporates MDM and gives the manufacturer the flexibility to develop an IoT platform for several applications, allows common software and firmware development, reduces peripheral conflicts between manufacturers of embedded ICs, and much more.

Separating the high priority peripherals required for applications onto a secondary programmable IC, hence an FPGA or CPLD, allows greater flexibility and reuse, especially if the embedded processor IC enters the discontinued device state.  The MDM ideology has been around for many decades, has been applied to many different fields, and is widely used today in electronic devices throughout the commercial and industrial markets.

Designing the embedded processor mezzanine printed circuit board for an LQFP-176 is not a difficult project, and it allows the processor to be tested independently.  We will cover the Printed Circuit Board (PCB) design process as the series progresses and we enter the physical design stages.  Separating the embedded processor onto a mezzanine PCB also allows other embedded processor families from different manufacturers to be designed on separate mezzanine cards for performance comparison.  Keep in mind, just because the peripherals are integrated does not mean you have to use them; using them can have its compromises as well when it comes to performance and programming drivers for them.

The advantage of putting the embedded processor on a small mezzanine board is the flexibility of replacing the embedded processor while keeping the peripherals that are critical to the application intact.  This requires the least amount of redesign as silicon rolls over, and staying within the same processor family allows the code to be reused as well.  Other mezzanine cards to be considered are the WiFi, Bluetooth and RJ45 Ethernet interfaces, whose silicon has rolled over many times in the past few years.
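The stable connector pin assignment described above can also be captured in software: a per-card mapping table lets firmware and test fixtures verify that two different processor mezzanines expose the same logical signals on the same connector pins before they are treated as interchangeable.  The signal names and pin numbers below are illustrative assumptions only, not an actual BASIL Networks pinout.

```python
# Hypothetical mezzanine connector maps: logical signal -> connector pin.
# Two processor cards must present identical signals on identical pins
# for the main peripheral board to treat them interchangeably.

CARD_A = {  # e.g., an LQFP-176 processor mezzanine (illustrative)
    "SPI_SCK": 12, "SPI_MOSI": 13, "SPI_MISO": 14,
    "I2C_SDA": 21, "I2C_SCL": 22, "UART_TX": 30, "UART_RX": 31,
}

CARD_B = {  # e.g., a BGA-288 processor mezzanine (illustrative)
    "SPI_SCK": 12, "SPI_MOSI": 13, "SPI_MISO": 14,
    "I2C_SDA": 21, "I2C_SCL": 22, "UART_TX": 30, "UART_RX": 31,
}

def pin_compatible(card_a: dict, card_b: dict) -> bool:
    """True when both cards expose the same signals on the same pins."""
    return card_a == card_b

def conflicts(card_a: dict, card_b: dict) -> list:
    """List signals whose connector pin differs between the two cards."""
    return [s for s in card_a if s in card_b and card_a[s] != card_b[s]]
```

A check like this belongs in the production test fixture: any non-empty conflict list flags a card that violates the standard pin assignment.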

Over the years we have experienced with embedded processors the effects of revision cycles, when some newly added feature removes a previous feature or does not allow it to be configured the same way.  It is less expensive to rework a small mezzanine board than to rework the entire design, and this also eases field revisions if required.

From an educational point of view, the mezzanine board approach allows the use of the embedded processor the reader is most familiar with, from the hobby market to the industrial COTS market.  Figure 10.2 below shows this series' approach to MDM mezzanine boards: separating the peripherals that are unique to the application from the basic peripherals that are implemented in several embedded processor ICs, giving a wider range of selection to fit applications.

Figure 10.2  Embedded Processor Mezzanine and Main Peripheral Block

Rapid development kits are available for many embedded systems and help reduce development time as well as test unique protocols and application software.  We have a few of the Microchip development cards and a few CPLD and FPGA development cards from Altera and Xilinx.  The mezzanine card for the embedded chip is small and compact and easily placed on the main board with a set of micro connectors if small size is required.  For the fine-pitch BGA devices with 288 pins, the embedded chip is only 15mm square by 1.5mm high.  The entire mezzanine card with external memory is only 1.5" square with the full data/address bus at the connector.  Of course the board would be slightly larger for a QFP 176- or 208-pin package; however, the cost of replacing the embedded processor for a redesign is minimal using the QFP packages.  The Main Peripheral Board in Figure 10.2 shows that common peripherals like I2C, SPI, etc. are just a pass-through, since most embedded chips perform these functions very well and they are mature.  However, if pin assignments do not allow these peripherals on separate pins, the FPGA is easily programmed to support them, which allows common drivers for those peripherals.

Embedded applications are getting more and more complex, and the size of the data being processed is also increasing.  Most embedded systems have a maximum of 2048KB of program flash and an average of 512KB of static RAM, which is adequate for most standard applications; however, for applications that require continuous data gathering and buffering to transmit the data over a network, a DRAM controller and some type of synchronous DRAM should be included in the core system.
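A rough sizing check, under assumed numbers, shows why: if an application samples continuously and the network link may stall for several seconds, the required buffer quickly exceeds typical on-chip SRAM.  The sample rate, sample width and stall time below are illustrative assumptions, not figures from the platform specification.

```python
def buffer_bytes(sample_rate_hz: float, bytes_per_sample: int,
                 worst_stall_s: float) -> int:
    """Minimum buffer needed to ride out a network stall without data loss."""
    return int(sample_rate_hz * bytes_per_sample * worst_stall_s)

# Assumed example: 100 kS/s of 4-byte samples, 10 s worst-case network stall.
need = buffer_bytes(100_000, 4, 10)   # 4,000,000 bytes required
sram = 512 * 1024                     # typical on-chip SRAM (512 KB)
print(need, need > sram)              # buffer exceeds SRAM -> external DRAM
```

Under these assumptions the buffer requirement is roughly eight times the available SRAM, which is exactly the case where the external DRAM controller earns its pins.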

The DRAM controller requirement narrows the selection field at this time; however, more and more embedded systems are incorporating a DRAM controller.  Manufacturers also have their own versions of and limitations on the DRAM controller as to memory size, DMA, access, transfer speed, memory type and so on.  This further narrows the otherwise large number of chips to choose from.

The DRAM controller integrated in the PIC32MZ allows a 32MB DDR2-400 SDRAM to be directly connected to the chip; if a larger SDRAM memory is required, the larger pin count of the LFBGA-288 package allows up to 134MB of SDRAM to be connected.  We will address this again in the next part of the series and weigh the pros & cons of a separate DRAM controller and memory system for the platform.

The integration of a GPU (Graphics Processing Unit) in several embedded processor ICs is also surfacing with more capability; these GPU peripherals take up a lot of pins, limit the other internal peripherals, and require a DRAM controller with external DRAM to obtain the graphics performance.  For the IoT Platform in this series we would like a large amount of internal memory, both program and data, as well as an external bus interface address & data bus usable for both program and data.  Graphic embedded systems such as the refrigerator mentioned earlier will be discussed in the applications part of the series.

The DRAM controller does use some of the embedded device's resources; however, these devices also come with a DMA controller to accommodate this extended R/W memory, and it may be accessed through the serial peripherals as well as a few parallel ports, allowing for high-speed data collection.  If more than 134 megabytes of DRAM is required, you may want to look at a different category of CPU chips that supports 256MB or higher.  When a very large memory size is required, a solid-state disk may be an alternative as a peripheral device; a physical hard drive is not presented here since it would be easier to stream over an Internet connection to a server or other scaled processing systems if available.

[Section_Menu]      [Top_Menu]

Project Management & Requirements Traceability Introduction:
We have introduced the basic discussions for the development of the IoT Platform, and for many entrepreneurs and engineers the best part of product development is the real hands-on work with the hardware and software, author included.  However, the critical part of product development is the documentation management system.  Documentation traceability documents all aspects of a product's development, and the documents vary depending on the type of development, from business to technical.  Product development documentation is required to ensure the successful design and manufacturing implementation of a finished product.  We will cover a basic introduction to the different documentation required to create a product from conception to manufacturing.

Typical business level product development documentation consists of the following sections:

  1. Generating Information  - What kind of mouse trap is it ?
  2. Screening the Idea and Information - Is it a realistic device ?
  3. Testing the Concept - How do we prove it works ?
  4. Business Analysis - Will it be profitable ?
  5. Marketability Testing - How do we test the market ?
  6. Technical Product Details - What are the specifications for marketing ?
  7. Commercializing the Product - Distribution and channel Marketing ?
  8. Pricing and pre-launching - Is the price competitive ?

Documentation changes when we actually begin the physical design process, which involves computer-aided design systems, fabrication processes that may be outside the company's resources and have to be outsourced, and specifications for the actual design of each section of the product, covering the interfacing of individual subsystems into a completed system product.  All of these design functions are separated into different project tasks and require system interface documentation.  So, as we see, from a simple 8-step business point of view to a multi-function design point of view, there are many separate tasks that have to be documented in order to connect all the parts for a smooth product development.  BASIL Networks, PLLC has been in business over 38 years as a small R&D product development service and has developed an Interactive Product Development (IPD) System that we will present here; it is similar to the Interactive BUS Protocol Development (IBPD) System presented on the site, except that it is tailored toward a Documentation Management System (DMS) for program management.

There are several areas of the physical product development of the IoT platform that we will address in this introduction.  Product development is generally separated into manageable projects/tasks that are presented in project management timeline or Gantt charts for each section and used to allocate resources and expenses and set traceability guidelines for the development.  The Project Management and Requirements Traceability Matrix documentation are considered a living process and grow with the project.  The project management software BASIL Networks uses is MS Project as well as Turbo Project Professional; both handle multiple development tasks easily.  It is always a good idea to plan ahead and categorize internal and external resources for any project; we use our own Calibration and Asset Management system for this, integrated with the BASIL Networks IPD System we have put together over the years.  We will have a version of the IBPD and IPD Systems available in this series for education and reference, which may be downloaded as the series progresses into the physical design and testing stages.
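What the Gantt chart captures can be sketched as a dependency computation: given each task's duration and predecessors, a forward pass yields the earliest finish of every task, and the longest chain is the critical path.  The task names and durations below are hypothetical, not the actual series schedule.

```python
# Hypothetical task list: name -> (duration_in_days, [dependencies]).
TASKS = {
    "requirements": (10, []),
    "hw_design":    (30, ["requirements"]),
    "fw_design":    (25, ["requirements"]),
    "integration":  (15, ["hw_design", "fw_design"]),
}

def earliest_finish(tasks: dict) -> dict:
    """Forward pass: earliest finish day for each task (critical-path style)."""
    finish = {}
    def ef(name):
        if name not in finish:
            dur, deps = tasks[name]
            # A task may start only after its slowest dependency finishes.
            finish[name] = max((ef(d) for d in deps), default=0) + dur
        return finish[name]
    for name in tasks:
        ef(name)
    return finish
```

Here "integration" cannot finish before day 55 because it waits on the 30-day hardware design; the same arithmetic is what MS Project or Turbo Project performs behind the chart.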

The management side of design and development is very seldom presented when discussing technology; however, in reality it is a critical component of all design stages in order to incorporate traceability.  Design management is, and probably always will be, a debate among engineers and managers.  Putting requirements in matrix form for traceability becomes a management nightmare if not developed early in the process.  In the past nine parts of the series we have been laying the groundwork for the various requirements of the core platform prior to actually selecting any hardware, software, form factor, design or testing.  We are now ready to create a starting list of requirements and discuss the process of design management: traceability.  Traceability is simply there to ensure that the process used complies with our security, control and safety-critical policies for the core platform.  Once you decide that the device is to be on-line on a network and/or the application incorporates a safety-critical process, requirements traceability becomes a critical part of the management development process.  Even a toaster's embedded control process has a safety-critical component that is part of UL, CSA and other agency approval processes, which would require a Requirements Traceability Matrix.  Granted, for the tinkering engineer, design management is put aside for more fun on the bench putting the device together.

The way we have performed this management process has varied at BASIL Networks over the years, from project management software to a relational database management system to an enhanced matrix form.  The end results are all the same and essentially become part of a Document Management System (DMS) that is integrated into the company's policy structure.  As an R&D development house we are flexible and generally adapt to whichever methodology is being used by our clients.  For this series we will use a Requirements Traceability Matrix (RTM) form in order to present the RTM creation process as a reference when we enter the interactive design and test stages of the development.  The RTM is a linked X/Y spreadsheet that ties together the Business Requirements Documents (BRD), Technical Design Requirements (TDR) and whatever other names are given to the requirements documents for the application.

At BASIL Networks our label is the TSD (Technical Specifications Document).  Keep in mind that there are many variations of the RTM; it is probably one of the most flexible documents and is considered a checks-and-balances tool to ensure that requirements are fulfilled during the development.  A typical RTM contains the following matrix components, referencing or linking the actual documents for the development of a project, which makes it easier to incorporate a privately controlled internal Intranet for document control.

  • Technical Specification Documents (TSD) - Reference to all Technical Specifications
  • Technical Hardware Requirements Documents (THD) - Hardware Form Factor Specifications
  • Parts Validation Documents (PVD) - Component Parts Validation Documents
  • Software API Requirements (SIR) - Software Application Program Interface specifications
  • Resident Firmware Interface Requirement Documents (RFI) - Resident firmware specifications
  • Test Requirements Documents (TPR) - Testing specifications of what is to be tested
  • Test Procedure Documents (TPR) - Test procedure specifications
  • Packaging Documents - Shipping and handling  procedures

Many DMSs allow a main project document with a linking matrix to be used as the RTM during development, allowing real-time updates.
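As a minimal sketch of the RTM idea, each requirement row links to the documents that satisfy it (TSD, test documents, and so on), and a simple query exposes requirements that are not yet covered, which is the checks-and-balances role described above.  The requirement IDs and document numbers are illustrative assumptions.

```python
# Hypothetical RTM rows: requirement -> linked document references per column.
# None marks a trace link that has not yet been filled in.
RTM = {
    "REQ-001 Ethernet interface":      {"TSD": "TSD-0042", "Test": "TPR-0042"},
    "REQ-002 Secure firmware update":  {"TSD": "TSD-0051", "Test": None},
    "REQ-003 Operating temp -40..85C": {"TSD": None,       "Test": None},
}

def untraced(rtm: dict, column: str) -> list:
    """Requirements with no linked document in the given RTM column."""
    return [req for req, links in rtm.items() if not links.get(column)]
```

Running `untraced(RTM, "Test")` flags the two requirements with no test document yet, exactly the gap an auditor would look for before sign-off.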

Development management makes it very difficult, if not impossible, to apply the full discipline of the ISO-9000-18000+ procedures during actual development.  Many contract houses follow tight documentation procedures during the design and analysis portions, and ISO RoHS parts qualification during prototype development and test, for the flexibility of PoC (Proof of Concept) or PoD (Proof of Design).  Engineering prototypes are "experiments" for PoC/PoD; the interaction is in real time and is documented step by step by hand in an engineering notebook.

Following strict ISO procedures is great for manufacturing quality control of a product; however, experimentation requires a bit more flexibility.  This does not excuse the documentation requirements; it just documents the development in an engineering format instead of an ISO format, which is more flexible.  Once the PoD has been tested and is working, transferring the documentation over to the full ISO requirement procedure for formal validation and manufacturing becomes more manageable.  I will probably hear from my adversaries about this opinion; however, after 37 years as a design house, there are processes that are both effective and efficient.

We will create the pros & cons list of this approach when we put the basics of the platform hardware requirements in table form.  Yes, we will do a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis, or, for those that like acronyms, SWAT (Strengths, Weaknesses, Advantages and Threats).  The creation of a Requirements Traceability Matrix (RTM) will also be required for the core platform and the associated applications, which we will cover in the series.  At this point we should be realizing that project management and documentation play an important role in the success of this series.  Combining all of these does give some interesting acronyms to play with.

  [Section_Menu]    [TOP_Menu]

The embedded processor arena is constantly changing, not only in technology but in business models through mergers and acquisitions, which will eventually have an impact on products being developed and the availability of components.  This is a common trend with technology and will continue as technology advances.  Older hardware and software are no longer supported, forcing corporations to upgrade to keep up, just like the rabbit in Alice in Wonderland running just to keep up.  The challenge is to maximize the reuse and minimize the Total Cost of Ownership (TCO) of devices and equipment from a technical aspect, which is what this series is based on.  The IoT Platform being presented here addresses the latest approach in product development to fine-tune the technology and handle the changing business models as companies grow.

When we get to the physical hardware design we will be incorporating design software from Cadence for the schematic capture and PCB layout, and AutoCAD for the mechanical packaging.  We will also present other formats for educational purposes, allowing readers to follow along with other design software.

When we get to the software development portion of the IoT Platform, all routines will be flowcharted along with the appropriate code for each routine.

Part 11  "Preliminary Outline" Embedded Processor Systems Hardware: -Continued

  • More on Project Management and Requirements Traceability
  • Protocol Hardware - Ethernet
  • HS-USB
  • WiFi devices
  • Bluetooth Devices
  • SQI, SPI, I2C,
  • Analog Inputs, Analog Outputs
  • Digital Parallel Ports
  • PWM ports
  • Software tools for embedded processors
  • IDE- Integrated Development Environment
  • Macro Assemblers
  • Compilers
  • Hardware Design Tools
  • and more ....

Reference Links for Part 10:

Requirements Traceability Matrix  (RTM)
Project Management
Mezzanine Board

The majority of the Internet scheme and protocol information is from a few open public information sources on the net: the IETF (Internet Engineering Task Force) RFCs, which explain details on the application of the protocols used for both IPv4 and IPv6 as well as experimental protocols for the next-generation Internet, and the Network Sorcery web site.  The remainder of this series on the IoT platform will be from BASIL Networks' MDM (Modular Design Methodology) applied with the Socratic teaching method.  Thank You - expand your horizon - Sal Tuzzo

Network Sorcery:
The Internet Engineering Task Force: IETF - RFC references

Memory Segmentation
The Memory Management Unit (MMU)
Virtual Address Space
Virtual Addresses and Page Tables
Extended Memory

Part 1 Introduction - Setting the Atmosphere for the Series (September 26, 2016) 
Part 2 IPv4 & IPv6 - The Ins and Outs of IP Internet Addressing (November 11, 2016) 
Part 3 IPv4, IPv6 DHCP, SLAAC and Private Networks - The Automatic Assignment of IP Addressing (November 24, 2016)
Part 4 Network Protocols - Network, Transport & Application (January 10, 2017)
Part 5 Network Protocols - Network, Transport & Application Continued (Aug 17, 2017)
Part 6 Network Protocols - Network, Transport & Application Continued-The Ethernet Protocol(s) (Sept 21, 2017)
Part 7 Network Protocols - Network, Transport & Application Continued-The CRC-32 and Checksums (Nov 27, 2017)
Part 8 IoT Core Platform Development
- Embedded Processor Systems-(SoC)-(SIP) Core Processor -Embedded System Configurations  (Jan 12, 2018)
Part 9 IoT Core Platform Development - Embedded Processor Systems-(SoC)-(SIP) Core Processor -Embedded System Configurations  (Mar 16, 2018)

Publishing this series on a website or reprinting is authorized by displaying the following, including the hyperlink to BASIL Networks, PLLC either at the beginning or end of each part.
BASIL Networks, PLLC - Internet of Things (IoT) -Security, Privacy, Safety-The Information Playground Part-10  Embedded Processor Systems: Core Processor Configuration Development- (May 4, 2018)

For Website Link: cut and paste this code:

<p><a href="" target="_blank"> BASIL Networks, PLLC - Internet of Things (IoT) -Security, Privacy, Safety-The Information Playground Part-10 Product Management: <i>Continued (May 4, 2018)</i></a></p>


Sal (JT) Tuzzo - Founder CEO/CTO BASIL Networks, PLLC.
Sal may be contacted directly through this site's Contact Form or
through LinkedIn

Internet of Things (IoT) -Security, Privacy, Safety-Platform Development Project Part-9

saltuzzo | 16 March, 2018 17:35

Part 9: IoT Core Platform - Embedded (SoC), (SIP)
The Core Processor of Embedded System Configurations - Vulnerabilities Continued

"Firmness of purpose is one of the most necessary sinews of character and one of the best instruments of success. Without it, genius wastes its efforts in a maze of inconsistencies." - Lord Chesterfield

Part 1 Introduction - Setting the Atmosphere for the Series (September 26, 2016) 
Part 2 IPv4 & IPv6 - The Ins and Outs of IP Internet Addressing (November 11, 2016) 
Part 3 IPv4, IPv6 DHCP, SLAAC and Private Networks - The Automatic Assignment of IP Addressing (November 24, 2016)
Part 4 Network Protocols - Network, Transport & Application (January 10, 2017)
Part 5 Network Protocols - Network, Transport & Application -Continued (Aug 17, 2017)
Part 6 Network Protocols - Network, Transport & Application -Continued -Ethernet Protocol (Sept 21, 2017)
Part 7 Network Protocols - Network, Transport & Application -Continued -CRC-32 and Checksums (Nov 23, 2017)
Part 8 IoT Core Platform - SoC Core Processor of Embedded Systems (Jan 12, 2018)
Part 10 IoT Core Platform
- SoC Core Processor of Embedded Systems -Documentation Management (Apr 5, 2018)

Quick review to set the atmosphere for Part 9
From the previous Internet of Things Part-1 through Part- 8:

  • (Worth Repeating) - Since the beginning of this series in September 2016 there have been many hacked IoT devices using COTS embedded hardware and software creating high visibility to security and privacy.  The current database of breaches encouraged us to present a more detailed hardware and software presentation to assist designers and educate new comers of the new challenges with security and privacy.  Due to the complexities of processors today we will continue to follow our technical presentation methodology,  Overview → Basic → Detailed  (OBD).   We will be addressing the many sections of the Core IoT Platform separately to keep the presentations at a reasonable length.  The full details will be presented during the actual hardware, firmware and software design stages.
  • The atmosphere has been set for the Internet operation overview in parts 1 through 6.
  • The Ethernet physical protocol is the most used for communications over the Internet.
  • All communications throughout the Internet are performed as User → Router → Internet Routers → Router → User
  • According to Netcraft there are over 1.8 billion active websites on the Internet, which means over 3.6 billion routers minimum.
  • The basic selection of protocols for the IoT Platform have been defined.
  • The conceptual functional block diagram of the IoT Platform has been presented.
  • Basic types of CPU architecture on the market today

What we want to cover in Part 9:
A short detour to present an overview before we cover the MMU and Virtual Technology.  There are many publications on vulnerabilities relating to the x86 multi-core processors and the Management Engine environment incorporated as part of the processor chips for the past 10 years that should be addressed before we move into the security and privacy policies.  We will present this as an introduction to security, privacy and control of the Memory Management Unit, the DRAM controller and the Paging for Virtual Technology.

  • A short historical (not hysterical) summary of high performance processor development to the current state.
  • The current vulnerability issues of high performance processors that are circulating on the Internet.
  • A look at the internals of x86 processors and hidden OS vulnerabilities
  • An Engineering Approach to identifying the issues of concern.
  • An Engineering Solution "suggestion" for the main issues.

Let's Get Started:
A Brief Summary - CPU, MMU, Page Memory and Virtual Memory:
While writing this series my thoughts were simple: just remain focused on the series, moving forward, linking every part together smoothly.  Well, there is always construction on the highway and some detours.  This short presentation detour will present the two main vulnerability issues, yes, just two for now, that we will be facing, since they affect processors from the past 10 years as well as future processors.  Since the current concerns are security and control as well as privacy, we are at the point where we have to talk about security policies, control policies and of course memory access, and create them for our Core IoT Platform.

Wow, things change fast in the technology arena, with everyone writing about the majority of CPU vulnerabilities, Meltdown and Spectre among others.  This series is not going to disrespect or criticize the huge effort that has been on-going to develop operating systems for the industry.  So let's make a positive attempt to put this into the engineering perspective: before a problem may be solved it has to be analyzed and understood, with vetted facts and the desired results defined.  There are two main technical papers covering Meltdown and Spectre, but before we get to them we have to put some core facts on the table.  Two vulnerability issues exist: the first is the internal control of all peripherals by a secured processor that runs an internal OS as part of the multi-core processor chip; the second is the virtual page vulnerability fault identified by the Meltdown and Spectre publications.

OK, not to give away my age, but a little history is good to understand so as not to repeat it.  I started working with designs in the 70's on the Data General NOVA and Micro-NOVA and Digital Equipment Corporation's PDP-11 series minicomputers.  By the late 70's and early 80's I started designing with Intel processors, in the days of the 8080 when it was first released, moved to the 8088, and was fortunate enough to own an IBM-PC in December 1981 that was used for developing hardware and software.  No, I do not use a walker, although I do sit a lot more, so you might call me seasoned, or just strong minded.  So what does this have to do with the price of tea?  Nothing; however, it has to do with the advancements of technology in the industry over time and the forgotten problems that have been solved, as with all new technologies, from someone who experienced those advancements while working in development for over 35 years.  There were two main developments in the computer processor world at that time, minicomputer (mostly CISC) architecture and microprocessor architecture advancements; both had teams of talent.  Keep in mind that operating system development, which eventually gave us MINIX and many others, started way back in the 1960's as well.

Processors in those days required a lot of external support chips to make up the system motherboard; memory management was studied and implemented into chips, however it required more development to make it practical.  As time passed and fabrication process technology improved, more of the support chips were integrated into the processor support chipsets.  Figures 9.0a and 9.0b are the 19" rack-mount relics of the computer age in the 70's.  There are still some PDP-11 systems controlling power plants in Canada.

Figure 9.0a Data General Nova,(1971) - Micro Nova  (1978)

Figure 9.0b   DEC PDP-11 (1970) - LSI-11/23 (1978)

A Brief Historical Summary of Technology Advancements Over The Years.
To really start this brief historical journey we should go way back to the 50's; yes, there were working computers then, such as the Univac 1101 (ERA 1101), and the first computer, named Colossus, appeared in 1943.  For the SciFi groups, Colossus was also a 1970 movie.  The programs that were run were broken down into small partitions which were placed in secondary storage and called up to be overlaid into a predefined block of core memory for processing.  This virtual overlay process and virtual memory was created from a thesis by German physicist Fritz-Rudolf Güntsch in 1956.  The Atlas computer, developed from 1959 at the University of Manchester, was the first to incorporate the virtual paging overlay methodology, formally commissioned on the Atlas in 1962.  Also in 1962, Burroughs Corporation (for us old timers) released the first commercial computer with virtual memory overlay technology, the B5000, which used segmentation rather than paging.  Figure 9.1 shows the virtual memory functional block diagram with segmentation.  Paging was still a theoretical process and still being developed due to the complex translation software and hardware limitations at the time; segmentation was a more practical first approach.

Figure 9.1   Virtual Memory Segmentation Block Diagram

IBM in 1969 solved the problems of virtual memory for commercial computers by proving their virtual overlay system was reliable on the IBM 370.  From there, storage and chip technology advanced to a point that made page memory feasible, and Digital Equipment Corporation incorporated virtual address space in their VAX/VMS (Virtual Address eXtension / Virtual Memory System) and their multi-user/multi-tasking OSes RSX-11M and MUMPS.  Data General Corporation initially developed RDOS (Real-time Disk OS); Data General was acquired by EMC in 1999, and EMC later became associated with VMware, a virtual machine platform.  The main architectural difference between Digital Equipment and Data General is that Digital Equipment used a memory-mapped I/O architecture, where the I/O devices occupy actual memory addresses, while Data General used a dedicated I/O architecture, where every I/O device has a unique address that is not part of the memory addressing, with a specific I/O instruction set that is accumulator or register based.  The minicomputers were all CISC architecture computers; the microprocessor era introduced RISC concepts, and modern Intel and AMD processors execute RISC-like micro-operations internally even though the x86 instruction set itself is CISC.

Over time Intel and AMD addressed Virtual Technology in steps.  First came multi-tasking applications running in Virtual-8086 mode, part of the 80386 processor, which added hardware support for task switching with a task handler along with the Global and Local Descriptor Tables.  This enhanced the virtual operation and led the way for multi-core processors.  Until the introduction of multi-core processors there were multi-processor motherboards, 2-way and 4-way, with unique logic to operate in a symmetrical environment: Symmetrical Multi-Processors (SMP), or Multi-Processors (MP).  Intel's Xeon-DP chips were used for the 2-way motherboards, which is what we still use here for testing.

SMP configurations share the bus and memory and are controlled by the main processor, as shown in Figure 9.2, the Symmetrical Multi-Processor (SMP) configuration.  Each of the applications ran in Virtual-8086 mode, and each processor was capable of multi-tasking as well, increasing the performance of the entire system.  SMP configurations only ran one OS and required a reboot to run a different environment, which was typical in those days using a multi-boot startup program.

Figure 9.2   Symmetrical Multi-Processor Configuration Block Diagram

This SMP configuration was used for several years and continues today; hyperthreads present a virtual processor that is still controlled by the physical core the hyperthread is attached to.  Multi-core chips with hyperthreads increased the performance to the next level in the Xeon processors, which are still used today.  The latest Xeon is a true multi-core Virtual Technology processor chip that allows multiple operating systems as well as virtual multi-tasking per core.

For a true Virtual Machine, by definition, each processor must be independent, with no dependencies on other processors in the core, and must be capable of sharing all the functional peripherals attached to the system on demand.  Both Intel and AMD have protected IP as well as patents to protect their core technology advancements.

In 2005 both Intel (Intel® VT) and AMD developed their own methodologies for Virtual Technology, ending up with the same results using multi-processor core chips, and entered the Virtual Technology market.  This is when Intel and AMD incorporated total control over the virtual memory and virtual processor core to implement their proprietary Virtual Technology.  Figure 9.3 shows the current Virtual Technology block diagram; a portion of the technology is proprietary and protected from the public.

Figure 9.3   Virtual Memory To Physical Page Memory Block Diagram

Prior to 2005 there were no virtual-processor desktops, just SMP configurations.  Intel, however, had years of accumulated experience developing the Memory Management Unit (MMU), which was incorporated into the i486 chip along with the Floating Point Unit (FPU), and its knowledge of Virtual Address Space technology, drawn in part from DEC's VAX/VMS, led the way to the next generation of processors.
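To make the MMU's role concrete, here is a minimal sketch, in Python for readability, of how a virtual address is split into a page number and offset and mapped to a physical frame.  Real x86 MMUs use multi-level page tables plus a TLB; the single-level table and names here are purely illustrative.

```python
# Illustrative single-level page-table walk (real x86 uses multi-level
# tables and a TLB; this only shows the split-and-lookup idea).

PAGE_SIZE = 4096          # classic 4 KiB x86 page
OFFSET_BITS = 12          # log2(PAGE_SIZE)

def translate(virtual_addr, page_table):
    """Split the virtual address into page number + offset, then look
    up the physical frame for that page."""
    page_number = virtual_addr >> OFFSET_BITS
    offset = virtual_addr & (PAGE_SIZE - 1)
    if page_number not in page_table:
        raise RuntimeError("page fault: page %d not mapped" % page_number)
    frame = page_table[page_number]
    return (frame << OFFSET_BITS) | offset

# Map virtual page 2 -> physical frame 7, then translate an address
table = {2: 7}
phys = translate(2 * PAGE_SIZE + 0x123, table)   # offset is preserved
```

An unmapped page raises the equivalent of a page fault, which on real hardware the OS page-fault handler would service.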

The new Virtual Technologies started to gain momentum, and several discussions surfaced about hardware modifications, peripherals, what is essential for a desktop system, and how small it can be while maintaining functionality and performance.  Keep in mind that the only peripherals inside the x86 processors of that era were the keyboard/mouse controller, the FPU, the MMU, the internal BUS arbitration logic, and the easy back door for hacking, the System Management Mode (SMM) controller.  All of these incorporated peripherals were controlled by external drivers loaded with the OS, except the keyboard and mouse, which were handled by the BIOS from the start; software driver updates were downloaded and installed externally.  Windows 3.x, released in 1990, introduced a pseudo virtual memory driver mechanism called the loadable virtual device driver (VxD) and was capable of running applications in protected mode on top of DOS.  Processor support then advanced so that the DRAM controller, MMU support and real-world BUS arbitration control sat in separate chipsets for each processor; these were called the Northbridge and Southbridge chipsets, and they handled different I/O functions for the specified x86 processor.

The advancements continued with Windows 95 in 1995, Windows NT, and Windows 98 in 1998, which introduced the Windows Driver Model (WDM).  Along the way Windows NT also introduced a new file system very different from the standard FAT32 and entered the multi-tasking/multi-user server arena.  Windows NT was very easily hacked; in fact, at one company an engineer hacked all the user passwords and sent them to upper management to prove a point.  In 2001 Windows XP was introduced, and by this time Microsoft already had several years of experience with paged memory management and multi-tasking; Windows NT was also renamed Windows Server 200x and kept the NT file system, which carried over to the XP environment as well.

The demands for faster, more powerful processing kept the pressure on the key players in the x86 marketplace, Intel®, AMD® and Microsoft®, to develop a more self-adjusting, secure, user-oriented Windows OS.  There were, and still are, many discussions of the pros and cons of incorporating the video controller into the same chip as the CPU, along with other peripherals like disk management, USB, etc.  Of course peripherals were incorporated inside the chip, but the drivers remained outside and were installed as virtual drivers when the OS was installed.  That arrangement was short-lived: in 2006 Intel® and AMD® incorporated a complete OS inside their virtual processor chips.  This internal OS was treated as a trade secret and was well kept until the vulnerabilities surfaced, and that is where we are today.

Vulnerabilities and Concerns for Chips Today:
The fact that fabrication technology has advanced to the point where we can place many controllers in a single chip is not an issue in itself; it is a technology advancement that puts many peripherals alongside the multi-core processors on one die.  As an example, the Q9500 quad-core 64-bit processor, released in 2007, connects to an LGA775 (Land Grid Array, 775 pads) socket, has 465 million transistors in a 37.5mm x 37.5mm package, and was built on 45nm lithography.  The 80486, a 16 MHz 32-bit processor released in 1989 in a 208-pin quad flat pack, incorporated a little over one million transistors using 1µm lithography (1µm = 1000nm).  The latest i7-generation quad-core processor with 8 hyperthreads runs at 3.1GHz in a 31mm x 58.5mm package built on 14nm lithography with a 2270-ball BGA (Ball Grid Array).  Lithography went from 1µm to 14nm, roughly a 70:1 linear shrink and thousands-to-one in areal density, so putting peripherals on a single processor chip is not an issue.
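A quick back-of-the-envelope check of the lithography numbers above; the figures are those quoted in the text, and the arithmetic is the only addition:

```python
# Feature size went from 1 um (1000 nm) to 14 nm.  The shrink is a
# linear measure, so areal transistor density scales roughly with the
# square of the shrink.

old_nm, new_nm = 1000, 14
linear_shrink = old_nm / new_nm           # ~71x linear
areal_density = linear_shrink ** 2        # ~5100x per unit area

# Transistor counts quoted above tell the same story:
i486_transistors = 1.2e6                  # a little over 1 million (1989)
q9500_transistors = 465e6                 # 465 million
count_ratio = q9500_transistors / i486_transistors   # ~388x (smaller die share)
```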

The topology advance did not happen overnight; it took several years to go from numerous supporting LSI (Large Scale Integration) integrated circuits, the Northbridge/Southbridge chipsets, USB, graphics and other LSI controller ICs.  As the density and reliability of wafer fabrication increased, it became cost effective to incorporate more and more technology into a single chip.  As this process was implemented, so were the drivers: Intel motherboards shipped with a complete set of drivers for all the controllers on board, as did other motherboard manufacturers, and including a setup disk for selected operating systems became a standard expectation for system integrators.  The practice of external software drivers still exists for third-party PCIe cards and other peripherals to enhance performance or customize for special applications, and it gave the OS manufacturers opportunities to advance their products to better support their customers.

Incorporating peripherals into the processor chip is really not a concern; incorporating the drivers, which in turn require some type of OS to control them and interface them to applications, does change the game.  The "Secure Boot" process and the added hardware and software for security changed the security platform.  The pushback is coming not only from the operating system manufacturers but from the security industry and third-party software developers.  Now that the vulnerabilities for a multitude of processors have been published, businesses are looking at damage control and how to protect themselves from intruders, since it does not appear a real solution is coming any time soon.  If a hardware rollover of the silicon chip is required, you are looking at a year or more down the road.  That is a long time to leave that many vulnerabilities exposed in the industry.

The incorporation of an operating system on CPU processors creates a few concerns that have very little to do with the actual operating system itself.  The issue is that it puts a hardware manufacturer of CPU chips in the operating system arena, controlling how software will communicate with the chip.  This concept has been a concern for many years, all the way back to the DEC and Data General days.  When I consulted for DEC back then, the talk throughout the corporate levels was "We are a hardware company forced to include an OS with the product"; Data General held the same point of view, yet to remain competitive they too were forced to develop operating systems.  Things would have been very different had RSX-11 and RDOS spun off the same ecosystem of applications that grew up around the IBM-PC.  As it turned out, each application was an engineering project within a specific company, and the software and hardware became proprietary and not for sale on the public market.

The next issue is how to communicate with the new all-in-one chips; hence the UEFI (Unified Extensible Firmware Interface) specification lays out the specifics of interfacing with the processor, all 2,899 pages of it.  From the first release on January 31, 2006 to Revision 2.7 in May 2017 there have been about a dozen updates.  The Platform Initialization (PI) and the Advanced Configuration & Power Interface (ACPI) specifications address the core of security and protection for microprocessors in today's marketplace.  What makes this interesting is that someone probably knew of the many vulnerabilities well before they were published, along with the need to ensure that interfacing with the UEFI would remain reliable for their applications.  For the full set of specifications go to the UEFI Forum.

We hope to conduct a few tests here to monitor communications activity when the system is turned off and when it is in sleep mode.  Updating the internal operating system is also an issue, as is how updates will affect the outside OSes that have to communicate through it.  For the Intel line the internal OS has been identified as MINIX 3, which has its roots in an OS that has weathered some 30 years as a very efficient core OS.  The AMD Ryzen series runs a proprietary OS controlled by a 32-bit ARM Cortex-A5 that monitors and maintains the security environment.  We will get to operating systems when we address the software development part of this series.  There is no criticism here toward any of these operating systems, only admiration for the many hours spent developing and maintaining them, and then giving several of them to Open Source for all to use and learn from.
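As background for the discussion that follows: MINIX 3 is a microkernel OS, where drivers run as user-mode processes and reach device registers only through the kernel.  Here is a toy sketch of that gatekeeper pattern; the names, message format and port numbers are invented for illustration and are not the real MINIX 3 API.

```python
# Toy model of a microkernel driver path: a user-mode "driver" never
# touches hardware directly, it sends a message and the kernel, as the
# single gatekeeper, validates and performs the register access.

KERNEL_OWNED_REGISTERS = {0x1F0: 0, 0x1F7: 0}   # pretend I/O ports

def kernel_handle(msg):
    """Kernel side: validate the request before touching a register."""
    op, port, value = msg["op"], msg["port"], msg.get("value")
    if port not in KERNEL_OWNED_REGISTERS:
        return {"status": "EPERM"}               # unknown port: refuse
    if op == "write":
        KERNEL_OWNED_REGISTERS[port] = value
        return {"status": "OK"}
    if op == "read":
        return {"status": "OK", "value": KERNEL_OWNED_REGISTERS[port]}
    return {"status": "EINVAL"}

def driver_write(port, value):
    """User-mode driver side: ask the kernel, never poke hardware."""
    return kernel_handle({"op": "write", "port": port, "value": value})

reply = driver_write(0x1F0, 0xAB)   # goes through the kernel
```

The point of the design is that a misbehaving driver can only be refused, not corrupt the rest of the system, which is why this architecture is regarded as a very efficient, robust core OS.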

There have been many vulnerabilities over the years in many products during the introduction of evolving technologies; many have been addressed, some are still prevalent and ignored, it is a real world.  Many issues have been solved with the release of updates and patches.  Understanding and publishing a vulnerability purely at a technical level always leads to some type of solution that can be administered to fix it.  I doubt that this methodology will change any time soon, since new code and applications are impossible to test against every type of hack.  As I stated in Part 1, "Where Do We Start" - all manufacturers have the right to develop anything they want, sell it and profit from the technology.  Manufacturers also have the right to control and maintain intellectual property rights over the technology within their products.  Today we have a large volume of open source software under the GNU license that any manufacturer may use and alter as long as they maintain the GNU identification requirements of the source.  As new products, especially processors, enter the market, protecting intellectual property becomes more difficult: the reach of the Internet and the ability to hack just about any server make patents and IP hard to defend, so the next best thing is a trade secret, which is what the major players and many corporations now practice, and practice very well, since it took about ten years for the public to become aware of the current processor vulnerabilities.

It is obvious that Intel and AMD kept their trade secrets well hidden until a short time ago, when the vulnerabilities were uncovered.  The concern is not the peripherals on a single chip; it is the internal OS that controls the peripherals, that all applications must go through, and the possibility of unwanted access to the systems these processors are used in.  The "Management Engine" interface has taken the lead in the pushback.  Trade secrets become a problem when they cause damage to the user by not disclosing the risks, as many are experiencing; over the last few months more and more risks have been exposed.  Today there are so many processors embedded in everything in everyday business and private life that intrusion vulnerabilities are magnified a thousandfold: with over 1.5 billion cell phones and several hundred million tablets, desktops and laptops in use, they can create tremendous losses on all fronts.

OK, the second issue, which we are not going to reiterate, is the memory paging vulnerability of Meltdown and Spectre, since there are so many articles and opinions already out there.  This went viral on the Internet; just about every publication short of Teach Yourself Basket Weaving 101 has rephrased it.  The technical side of this vulnerability is still worth noting so that we know what to address during development of the Core IoT Platform; the technical details of Meltdown and Spectre are linked below for those interested, and we will talk about a solution next.  Remember: the MMU hardware is a peripheral, and the API that drives it is software/firmware that is part of the integrated OS the processor manufacturers incorporate in their chips, which has opened a Pandora's box of issues.  The x86 architecture in Intel's x86 line and AMD's Ryzen series is now the center of a wide range of publications on the vulnerabilities.
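For readers who want the flavor of the Meltdown/Spectre class of bugs without wading through the papers, here is a purely conceptual toy model, not an exploit: a speculative out-of-bounds read is architecturally discarded, but it leaves a side effect in a "cache", and the attacker recovers the value by probing which line was touched.  All names and mechanics here are illustrative simplifications.

```python
# Conceptual simulation of a speculative side channel.  In real
# hardware the "cache" side effect is a timing difference; here a
# Python set stands in for "this line is now fast to access".

SECRET = 42                        # value the attacker should never see
cache = set()                      # which "cache lines" have been touched

def speculative_read(index, array):
    # The bounds check eventually fails architecturally, but in this
    # model speculation has already touched the line for the secret
    # before the result is squashed.
    value = SECRET if index >= len(array) else array[index]
    cache.add(value)               # microarchitectural side effect survives
    return None                    # architectural result is discarded

def probe():
    # Attacker "times" accesses to all 256 possible byte values;
    # membership in `cache` stands in for a fast (cached) access.
    return [v for v in range(256) if v in cache]

speculative_read(10_000, [0, 1, 2])   # out-of-bounds, result discarded
leaked = probe()                      # yet the secret is recoverable
```

This is exactly why software-only fixes are hard: the architectural state is correct at every step, and only the microarchitectural side effect leaks.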

A Proposed Solution with Facts - May Not Require A Chip Redesign- Summary:
OK, writing about problems without a fair, reasonable solution is just paraphrasing everyone else's publications, something the author avoids, so here is a reasonable, seasoned suggestion from someone who has had many technical discussions about incorporating everything in a chip.  If you are expecting a "Silver Bullet" solution, sorry, that is only in the movies; however, we are able to address the vulnerabilities one at a time and implement solutions that are reasonable.  We will return to these vulnerabilities and other solutions during the software security part of the series.

The first argument on vulnerabilities centers on the trade-secret internal operating system software in the processor chips (MINIX 3 in the Intel x86 line, AMD's proprietary microcode in the Ryzen series) that takes control of all the peripherals and the Virtual Technology operations of the chip.  Since the internal code has access to the internal peripherals, some microcode to access them must exist in some form.  With that said, a reasonable solution requiring a microcode update would be as follows.

MINIX has a microkernel architecture: the drivers are user-mode applications, and peripheral register access still goes through the kernel, which is typical of all low-level access when developing an application driver.  I am not sure how AMD performs the similar task; however, some type of microkernel or monolithic OS with drivers is in order, considering the latest round of Ryzen vulnerabilities that have been published.  Let's start with the Intel x86 processors; this would apply to AMD as well, although the OS is different.

  • Remove the MINIX OS and all the peripheral drivers from within the chip - easy, just erase the EEPROM (about 5MiB of the 8MiB).
  • Keep the required standard Platform Initialization (PI) part of the BIOS - the MMU, FPU, BUS arbitration, etc.; the chip manufacturers are probably best suited for that task.
  • The MMU and translation hardware that is proprietary and patented remains internal to the chip.
  • Once initialized, transfer all control, and any drivers, to the developer's OS outside the chip.
  • Add an additional hook to Platform Initialization (PI) for the operating system to initialize and control all internal peripherals.
  • The peripherals are already connected internally to the BUS arbitration crossbar logic, therefore a small microcode update would allow them to be controlled externally.
  • Some parts of the MMU and FPU arbitration that perform hardware translation may be covered by a patent or IP and may remain in the 8MiB EEPROM for faster execution.
  • Interfacing to this patented or IP code may be licensed, generating a revenue flow as part of the ROI for the chip development while keeping the technology intact.
  • Add BIOS enable/disable of the internal peripherals to mask them out if desired allowing third party peripherals for customization.
  • Add an additional reset function for a default restore of the BIOS.
  • Disable SMM at startup on Intel Processor Chips and keep it off permanently.
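The BIOS enable/disable bullet above could be as simple as one mask bit per internal peripheral, so firmware can hide a built-in controller and let a third-party card take its place.  A hypothetical sketch; the peripheral names and bit assignments are invented for illustration:

```python
# Hypothetical peripheral-enable mask: one bit per internal peripheral.
# Clearing a bit "masks out" the built-in controller so an external
# (e.g. PCIe) replacement can be used instead.

PERIPHERALS = {"NIC": 0, "USB": 1, "GPU": 2, "SATA": 3}   # invented bits

def set_enabled(mask, name, enabled):
    """Return a new mask with the named peripheral turned on or off."""
    bit = 1 << PERIPHERALS[name]
    return (mask | bit) if enabled else (mask & ~bit)

def is_enabled(mask, name):
    return bool(mask & (1 << PERIPHERALS[name]))

mask = 0b1111                            # default: everything enabled
mask = set_enabled(mask, "GPU", False)   # mask out the internal GPU
```

A default-restore reset, as suggested in the bullets, would simply rewrite this mask (and the rest of the BIOS settings) from a protected default image.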

All this leads to a relatively reasonable microcode update that gives the processor, with all of its capability and accountability, back to the programmers, developers and OS manufacturers.  There has to be an additional mechanism to reset the BIOS to a default state if the microcode and OS update is interrupted or fails to initiate; encrypt the default BIOS setup.  The microcode and OS updates may be accomplished in one update process, or companies may be allowed to perform the distribution internally, manually or automatically through their servers.  The time frame for distribution depends on the complexity of the updates and the companies involved.  The main concern here is the updates and who created the microcode, in order to assure the public that the security is traceable for all concerned.

The second issue deals with the Virtual Technology paging vulnerabilities.  These may be addressed in the remaining space in the 8MiB EEPROM: special intercepts may be incorporated to stop the full dump and change the way the page table is rewritten.  Some scrambling may be initiated to protect the raw data from being dumped as well.  This is probably the smallest hit on performance and may not even be noticed if removing MINIX and the other support nesting brings a performance increase.
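The "scrambling" suggested above could, at its simplest, key the stored page-table entries with a per-boot secret so a raw dump is not directly usable.  An illustrative XOR sketch follows; XOR with a fixed key is not cryptographically strong, and a production design would use an authenticated cipher, but it shows the idea:

```python
import secrets

# Illustrative only: XOR stored page-table entries with a per-boot
# secret.  A raw memory dump then shows scrambled values; the MMU path
# (which knows the key) recovers the real mapping.

BOOT_KEY = secrets.randbits(64)          # fresh secret each boot

def scramble(entry):
    """What gets written to memory."""
    return entry ^ BOOT_KEY

def unscramble(stored):
    """What the translation path reads back."""
    return stored ^ BOOT_KEY

entry = 0x0000_7FFF_DEAD_B000
stored = scramble(entry)                 # a dump sees this value
recovered = unscramble(stored)           # the MMU path sees the original
```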

Why This Will Work:
Peripherals are just external devices connected to the processor's BUS that require a device driver to operate.  Graphics controllers (nVidia, ATI or other embedded graphics engines) are peripheral devices like any other.  The major players already have access to graphics technologies, AMD through its acquisition of ATI and Intel through licensing, and both controller series are also used in COTS graphics adapters for many systems, which eliminates the issue of driver availability and support.  Intel is also licensing the AMD Radeon graphics engine in its latest iGPU; that ends the cold war on graphics technology, since AMD and nVidia own so many patents it is better to cross-license, and besides, the Intel-nVidia settlement agreement ended March 31, 2017, opening the door to new business.

Even if some unique addressing scheme were used, both Intel and AMD already have working drivers for their implementations of the controllers.  The peripherals do not have to be removed; just allow programming control from the outside.  This may or may not involve some hardware pin reassignment or microcode to redirect control to the external BUS architecture so internal memory can be shared for graphics and other peripherals.  However, since one of the cores already has access, some small amount of microcode should be able to handle it.  This does not change any "Virtual Technology" capabilities of the chip, and in some cases may even add performance, since it may free up a couple of cores that the OS can use as well.

As for security, this allows the software to be secured by the OS manufacturer rather than the chip maker, eliminating a level of who has what access and when.  It also gives OS manufacturers more flexibility in maintaining the security of their platforms.  This solution does not change the way peripherals are required to perform, and it gives the virtual OS broader flexibility to increase performance.  Peripherals like the Network Interface Controller, USB, etc. should be completely controlled outside the chip, and access to these peripherals should be allowed only through the OS.  There is a series of methodologies we have been working on for a few years now that we plan to incorporate into the Core IoT Platform for this series.

The extra internal memory, probably about 5MiB, and the associated cores could easily be used to protect the processor from intrusion: corporations could use that area for proprietary company-ID encryption and other security protocols unique to the corporation, and OS manufacturers could add their own standard protection for consumers who purchase Windows, Linux or any other retail OS.  This gives control and accountability back to the operating system and developers.  These updates allow unique markets to emerge for the security of consumer and corporate platforms, and allow secure code to prevent the missed-page-interrupt attacks behind Meltdown and Spectre, among others not mentioned here.

The presentation of the current software vulnerabilities in conjunction with hardware limitations has without doubt raised deep concerns about security and privacy.  What is presented here may not be the ultimate solution; however, the fact remains that all new technologies have issues, and eventually they are solved, as they have been in the past, to bring us to the level we are at today.  The embedded processor world is expanding at such a rate that security is being bypassed in the race to market, at the expense of the public's privacy and safety.  The best we can strive for is to ensure that today's solutions are not tomorrow's problems.  Yes, hardware has bugs as well!  Combining the hardware bugs with the software bugs is enough to create a new feature!  OK, for the serious-minded, that was a pun.

The Free Enterprise System:
Freedom of private business to organize and operate for profit in a competitive system, without interference by government beyond the regulation necessary to protect the public interest and keep the national economy balanced.  The U.S. economic system of free enterprise operates according to five main principles:

  • Freedom to choose our businesses,
  • Right to private property,
  • Profit motive,
  • Competition,
  • Consumer sovereignty.

Consumer Sovereignty
In the end, it is the customers, or consumers, who determine whether any business succeeds or fails.  In the U.S. free enterprise economy, consumers are said to have sovereignty, the power or freedom to have the final say.  Consumers are free to spend their money on Product X or on Product Y.  If they prefer Y over X, then the company making X may lose money, go out of business, or decide to manufacture something else (perhaps Product Z).  Thus, how consumers choose to spend their dollars causes business firms of all kinds to produce certain goods and services and not others.

"Those who cannot remember the past are condemned to repeat it" - George Santayana



Part 10  "Preliminary Outline" Embedded Processor Systems: Continued

  • Back to the Core IoT Platform Development.
  • Selecting a Platform Compiler
  • Selecting a Processor
  • and more ....

Part 1X+  More To Come -"Preliminary Outline" Embedded Processor System: Continued

  • Power On Sequence Initialization Test - AKA (POST)
  • Using external FPGAs, CPLDs for Power On Initialization
  • Introduce a security policy right at the beginning of the design process
  • How much control do we really have?
  • Security protocols and how they play a role in initialization.
  • Selection of the Core IoT Platform Processor(s) and peripherals.
  • Embedded Processor Technology, CPU Chips Technology,  FPGA System on a Chip - How much control do we really have
  • Types of processors, MIPS, RISC, CISC, Single Core, Multi-core etc.
  • Boot up Initialization processes and Vulnerabilities using embedded processors and independent CPU chips.
  • Secure Boot processes and Embedded Cryptographic Firmware Engines - A Closer Look.
  • An introduction to support chips for processors,  Northbridge (MCH, GMCH) /  Southbridge support chips (ICH),  SoC Platform Controller Hub
  • Key Security Requirements (KSR) list review

Reference Links for Part 9:
The majority of the Internet scheme and protocol information comes from a few open public sources on the net: the IETF (Internet Engineering Task Force) RFCs, which explain the application of the protocols used for both IPv4 and IPv6 as well as experimental protocols for the next-generation Internet, and the Network Sorcery web site.  The remainder of this series on the IoT platform will be drawn from the BASIL Networks MDM (Modular Design Methodology) applied with the Socratic teaching method.  Thank You - expand your horizon - Sal Tuzzo

Network Sorcery:
The Internet Engineering Task Force: IETF - RFC references

 Open Source Linux Conference Europe on Intel® UEFI/ME      [PDF] Slides  

[PDF] Virtual Memory and Memory Management - Angelos Stavrou, George Mason University

[PDF]Enabling Intel® Virtualization Technology Features and Benefits

[PDF] Meltdown

[PDF] Spectre 

[PDF] AMD Flaws from CTS Labs on EPYC, Ryzen, Ryzen Pro and Ryzen Mobile

[PDF] UEFI Specifications Version 2.7 All 2,899 pages Specifications

Server Security Advisory AMD Processors
Lessons Learned From 30 Years of MINIX
Memory Segmentation

The Memory Management Unit (MMU)
Virtual Address Space
Virtual Addresses and Page Tables
Extended Memory

Part 1 Introduction - Setting the Atmosphere for the Series (September 26, 2016) 
Part 2 IPv4 & IPv6 - The Ins and Outs of IP Internet Addressing (November 11, 2016) 
Part 3 IPv4, IPv6 DHCP, SLAAC and Private Networks - The Automatic Assignment of IP Addressing (November 24, 2016)
Part 4 Network Protocols - Network, Transport & Application (January 10, 2017)
Part 5 Network Protocols - Network, Transport & Application -Continued (Aug 17, 2017)
Part 6 Network Protocols - Network, Transport & Application -Continued -Ethernet Protocol (Sept 21, 2017)
Part 7 Network Protocols - Network, Transport & Application -Continued -CRC-32 and Checksums (Nov 23, 2017)
Part 8 IoT Core Platform - SoC Core Processor of Embedded Systems (Jan 12, 2018)
Part 9 IoT Core Platform - SoC Core Processor of Embedded Systems-Vulnerabilities (Mar 16, 2018)

Publishing this series on a website or reprinting is authorized by displaying the following, including the hyperlink to BASIL Networks, PLLC either at the beginning or end of each part.
BASIL Networks, PLLC - Internet of Things (IoT) -Security, Privacy, Safety-The Information Playground Part-10  IoT Core Platform Development - Product Development Management (May 4, 2018)

For Website Link: cut and paste this code:

<p><a href="" target="_blank"> BASIL Networks, PLLC - Internet of Things (IoT) -Security, Privacy, Safety-The Information Playground Part-10 IoT Core Platform Development: <i>Product Development Management (May 4, 2018)</i></a></p>


Sal (JT) Tuzzo - Founder CEO/CTO BASIL Networks, PLLC.
Sal may be contacted directly through this sites Contact Form or
through LinkedIn

Copyright© 1990-2017 BASIL Networks, PLLC. All rights reserved