
BASIL Networks BN'B

The BASIL Networks Public Blog contains information on Product Designs, New Technologies, Manufacturing, Technology Law, Trade Secrets & IP, Cyber Security, LAN Security, and Product Development Security.

Internet of Things (IoT) -Security, Privacy, Safety-Platform Development Project Part-7

saltuzzo | 23 November, 2017 06:56

Part 7: IPv4, IPv6, Protocols - Network, Transport & Application: Continued
The Ethernet Protocol(s) CRC-32 and Checksums

Design is a way of life, a point of view.  It involves the whole complex of visual communications: talent, creative ability, manual skill, and technical knowledge.  Aesthetics and economics, technology and psychology are intrinsically related to the process. - Paul Rand

Part 1 Introduction - Setting the Atmosphere for the Series (September 26, 2016) 
Part 2 IPv4 & IPv6 - The Ins and Outs of IP Internet Addressing (November 11, 2016) 
Part 3 IPv4, IPv6 DHCP, SLAAC and Private Networks - The Automatic Assignment of IP Addressing (November 24, 2016)
Part 4 IPv4 & IPv6 Protocols - Network, Transport & Application (January 10, 2017)
Part 5 IPv4 & IPv6 Protocols - Network, Transport & Application Continued (Aug 17, 2017)
Part 6 IPv4 & IPv6 Protocols - Network, Transport & Application Continued - The Ethernet Protocol(s) (Sept 21, 2017)

Quick review to set the atmosphere for Part 7
From the previous Internet of Things Part-1 through Part-6:

  • All serial hardware protocols have to have some methodology to identify a starting point for the data transfer and the end of the data transfer, as well as when each bit is valid to be sensed.
  • We presented an overview of the list of protocols that IPv4 and IPv6 are able to handle as well as an overview of the list of protocols we wish to initially implement into our IoT Core Platform.

We presented the first part of the Ethernet Protocol hardware characteristics, the software frame structure and how it identifies devices on its network.
The Ethernet Physical Layer incorporates the following capabilities and features:

  • a differential twisted-pair configuration for the Transmit and Receive lines.
  • a maximum cable length determined by the hardware - standard Cat 5/6/7 cables support up to 100 meters at 1 Gbps and below, and roughly 30 meters above 1 Gbps.
  • a variable-length protocol - the payload varies from 46 bytes minimum to 1500 bytes maximum, with an added eight bytes (802.2, 802.1Q) for inter-layer links if incorporated.
  • an internally hardware-controlled asynchronous autonegotiation protocol - uses 8 bytes to determine configuration and 12 idle bytes to end a data transfer.
  • auto-configuration to run full or half duplex for 10/100BASE-T; 1 Gbps BASE-T and above run in full duplex only.
  • Ethernet switches internally maintain the IP and MAC addresses of the devices connected to each port.
  • a mechanism through Ethernet switches allowing device-to-device communication without router intervention.
  • MAC addresses must be unique among the connected LAN devices.
  • manufacturers must be assigned a block of MAC addresses along with a manufacturer ID.
  • the Ethernet controller requires eight octets - a block of 31 sets of 1/0 state changes - to configure the controller for the start of the data.
  • the Ethernet controller requires a block of 12 idle states to identify the end of a data transfer, or a specified length in octets 20, 21 of the frame.
  • protocols have some way of checking or ensuring the integrity of the data being transferred - many protocols use a Checksum or a Cyclic Redundancy Check (CRC) with 16/32-bit etc. algorithms.
  • protocols and headers do not all implement their CRCs the same way.

What we want to cover in Part 7:

The Checksum Algorithms and the Ethernet Protocol Cyclic Redundancy Check (CRC-32)
In this part of the series we will present the error-detection section of the protocols for the Core IoT Platform: the checksum and the Ethernet protocol CRC-32.  We will start off with some ground-floor history of error detection in serial data streams and proceed to the various algorithms that have been developed for the Internet.  We will then present an introduction, with a bit of history, to the embedded processor, microcontroller and CPU marketplace.  To close this part we will present an overview of the Core IoT Platform requirements in order to create an IoT Platform functional block diagram.  The outline is listed below.

Enjoy the series.

Let's Get Started: A "BIT" of CRC and Checksum History
Serial data transfer, from its conception to the present day, has the task of ensuring that the block of bits being transported does not contain any changed bits after transfer.  Before the Internet, serial communications were part of telecommunications and drew on information theory, coding theory, cryptography and other fields.  What the innovators looked at was how to detect bit reversals in a block of bits while ignoring the actual format of the data being transferred.  All that was of interest was that there were no bad bits in the block of bits, period.  To look at this reality we have to look at how many possible bit reversals exist in the source block of bits and devise a methodology to test for any of them.  If no reversals are detected, it is assumed that the block of bits was transferred correctly, with no errors.  This simple explanation has created a "bit" of confusion in the serial digital world that still exists today.  OK, the research into bit reversals leads us into the theory of how several checksum methodologies evolved, and of where to place them in the block of data bits for the best performance.  Checksum algorithms vary across the world of processors and programming: there is a set of simplified checksums for embedded processors that handle smaller transfer sizes, on up to Internet systems that handle gigabits of data continually.

The object of the theory is to detect errors in transmission by using some sort of error-detection code sequence, introduced back in the 1940's by Richard W. Hamming, who modernized the development of error-correcting codes.  The field evolved quickly with the contributions of other theorists, and one key measure ended up with the name Hamming Distance; so we have the introduction to checksums and CRCs by way of error-detecting and error-correcting codes incorporating Hamming Distance theory.

Hamming Distance:
The Hamming Distance (HD) is defined as the number of transitions required to make two strings of identical length the same.  Hamming Distance is an interesting theory when applied to comparing blocks of bits, as we will see shortly.  It is also an interesting tool when applied to cryptography and ciphers: Hamming Distance is a starting point for cracking a cipher based on language, region, and bit changes within blocks, and there are several other brute-force methods as well, which will be discussed in a future blog on cryptography.  OK, so the question: what does Hamming Distance have to do with checksums and CRCs?  A larger Hamming Distance between the block of bits being tested and the reference block means more bit reversals, and the more bit reversals there are, the more difficult it is to guarantee that a bit error will be detected.  Let's take the checksum and the CRC separately and look at how each methodology would detect bit errors.
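As a quick illustration of the definition, the HD between two equal-length bit strings is just the count of positions where they differ; a minimal sketch:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    if len(a) != len(b):
        raise ValueError("strings must be the same length")
    return sum(x != y for x, y in zip(a, b))

# ASCII '0' (0x30 = 00110000) vs '3' (0x33 = 00110011) differ in two bit positions
print(hamming_distance("00110000", "00110011"))  # 2
```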

There are many published documents detailing different methodologies to test a block of bits - in fact so many that some sort of standard had to be set for the Internet, so that all systems use the same methodology at both ends to ensure consistency throughout the networks.  As we stated in previous parts of this series, as long as you follow the protocols in place the data will be routed source to destination; however, there is no guarantee that the data will not contain bit errors, "and" that is the main task of the CRC and checksum algorithms.  Extracting the user data is up to the application, which should contain some sort of data-integrity checking process of its own as well.  With that said, we will look at the protocols being presented - Ethernet, IP, TCP, and UDP - and the checking methodologies each uses to ensure data transfer integrity.  There is an intense amount of research on serial data bit testing and on grouping bits into blocks of various sizes in the attempt to best identify a single bit error in a large block of data during transfer.  We will give some references at the end of this part; one could make a career in error-detecting codes, but that is not the intent here - just an overview with enough understanding to implement the algorithms for our IoT Core Platform development.  Many web pages give source code in several languages for the Hamming distance function, so we will not dwell on it here.

We often see, in the protocol headers used on the Internet, fields labeled "Checksum", Frame Check Sequence (FCS), CRC and others; these fields may hold a checksum, a CRC, or some other error-checking value unique to the protocol.  To clear this up we will look at the checksum functions and the Cyclic Redundancy Check (CRC) algorithm, the main functions used by many of the protocols and specifically by the initial IoT Core Platform.  We will return to checksums and CRCs later in the series when we address the programming of the selected protocols and the OSI model.
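As a concrete example of one such header field, the "Checksum" in IP, TCP and UDP headers is the 16-bit ones' complement sum defined in RFC 1071; a minimal sketch:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 16-bit ones' complement checksum, as used by IP/TCP/UDP headers."""
    if len(data) % 2:                # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]           # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)        # fold the carry back in
    return ~total & 0xFFFF

# The worked example from RFC 1071:
data = bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7])
print(hex(internet_checksum(data)))  # 0x220d
```

A receiver verifies by summing the data with the checksum appended; a correct packet yields zero.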

Hamming Distance And The Checksum:
Here we are now, 20 years later into the 70's, and still wondering how best to incorporate error-detecting and error-correcting codes for large data transfers.  Let's take a slow walk first to see how bit reversal detection evolved.  Hamming Distance theory set its sights on reducing undetected bit reversals in a segmented block of bits, ensuring that if a transfer error or bit reversal happened it would be detected.  This led to the simplest proof of the Hamming Distance theorem when it was implemented in an ASCII hex file data format representing bytes of data.  This simple checksum algorithm was introduced by Motorola for the 6800 microprocessor series and given the names SRECORD, SREC, S19, S28, S37 file format, commonly used to program flash memory; the loading format was not very fast but was very efficient at the time.  Intel Corporation® reviewed and revised the file format for use on the 8008 CPU in the early 70's and eventually created a new specification for the Intel processor line in the late 80's, labeled the IntelHex File format (January 6, 1988) for the entire x86 line of processors.  OK, enough of ancient history - although roots are routinely ignored, the tree would not have grown without them.

The checksum is the interesting section of the IntelHex file format that we want to cover here; it is defined over all the bytes in a single line, as shown in Figure 7.0 below.  A record has the form [Start Code ':' + Byte Count + Address + Record Type + Data], with every field after the start code represented in ASCII hex format - each eight-bit byte is represented by two ASCII hex characters (0-9, A-F).  With a one-byte count field the data is limited to 255 bytes, so a maximum-length record is [1 + 1×2 + 2×2 + 1×2 + 255×2] = 519 ASCII hex characters, plus 2 characters for the checksum, which is appended to the line after calculation.

The checksum calculation is the two's complement of the least significant byte of the sum of the record's decoded bytes (byte count, address, record type and data) - everything after the start character, the ASCII colon ":".  So how is this effective?  Let's look at Figure 7.1, the hex number notation, and the actual data transitions of the hex digits 0-9, A-F in binary format: they are 0x30-0x39 and 0x41-0x46 respectively, whose binary string values are 00110000-00111001 and 01000001-01000110.  As we see, the number of transitions required to make any two binary hex-digit strings the same is always less than 5.  This is important for checksums, since the lower the HD number, the better the chance of catching a bit error in a sequence.  The upper-case A-F range is used since bits 5, 6 and 7 never change, leaving only 5 bits to test.  Bit 4 gives a minimum HD of 1, and detecting a bit reversal only requires the checksums to be different.  This is the reason the simple IntelHex file checksum is effective; the algorithm would have a much greater opportunity to miss a bit reversal if all 255 possible reversals of a full byte were allowed.
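The calculation above can be sketched directly, assuming the record is given as the hex characters that follow the ':' start code:

```python
def intelhex_checksum(record_hex: str) -> int:
    """Checksum byte for an Intel HEX record: the two's complement of the
    low byte of the sum of the decoded record bytes (byte count, address,
    record type and data - the hex characters after the ':')."""
    record = bytes.fromhex(record_hex)
    return (-sum(record)) & 0xFF

# End-of-file record ":00000001FF": sum of bytes = 0x01, checksum = 0xFF
print(f"{intelhex_checksum('00000001'):02X}")  # FF
```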

The IntelHex file format is one of the least efficient methodologies for transferring data and would totally burden the Internet; however, for loading an embedded processor's program memory, FPGAs, CPLDs, EEPROMs etc. it is efficient and accurate, which is why several embedded processor manufacturers incorporated it into their Integrated Development Environment (IDE) tools, and it is still widely used today.  Since the IntelHex format applies only to a single text line of characters holding a small data block, the algorithm is not applicable to Internet data transfer checking functions that must cover large amounts of data.

Figure 7.0  Intel Hex File Format Example

Figure 7.1  Hex Numbers 0-9 A-F in Binary Notation

Take into account that what we are looking for is actual bit reversals arising from the transfer of data from source to destination through a medium.  So let's look at the data and ask the following questions while visualizing the data blocks shown in Figure 7.2.

  • What would be the optimized block size for the checksum?
  • What is the reliability of detecting a single bit reversal anywhere in the block of bits?
  • What is the probability of detecting several bit reversals in the block of data?
  • How many unique bit reversals would have to convolve to give the same source checksum results?
  • What is the probability of several bit reversals that would convolve to give the same source checksum results?

Figure 7.2  Hex Numbers 0-9 A-F  Sum of Blocks Calculated Checksum

For this example we use only the ASCII hex bytes, for simplicity, since the resultant sum G(x) stays small: even with the maximum character values (bit five set, decimal 64 and up) across a maximum-length line, the sum fits in a 17-bit register.  Looking at the bit patterns in Figure 7.1 and the blocks in Figure 7.2, in order to get the same checksum from bit reversals, two consecutive blocks would have to suffer specific bit reversals such that one subtracts from the sum exactly what the other adds.  The probability of this happening is very small, due to the uniqueness of the ASCII hex bit patterns, while the probability of detecting one to several bit reversals is very good.  This is why the ASCII hex file format has high reliability.  If all eight bits were used, the reliability would be reduced, and the scheme would become difficult to manage as the total number of blocks increased.  These are the basics of understanding bit reversals, and the front door to error-detecting and error-correcting code sequences.

Moving forward to the late 70's, an individual from Lawrence Livermore Labs is credited with creating a flexible checksum algorithm that incorporates a variable block size; it was named the Fletcher Checksum after its creator, John G. Fletcher (1934-2012).  This added more credibility to the error-detection process, and it is used throughout the Internet today.  However, keep in mind that all sum-based checksum algorithms have their limitations, since the result is just a sum (and XOR) of the bits in a block.  Adding two byte/word bit strings is a simple, non-CPU-taxing operation and is the fastest way to obtain the sum of a block of bits for a simple checksum algorithm; as stated, though, it does have its limitations.
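One classic limitation is easy to demonstrate: a plain sum cannot see the order of the bytes, so compensating changes - such as two bytes swapped in transit - go completely undetected.  An illustrative sketch:

```python
def simple_sum(data: bytes) -> int:
    """Plain additive checksum: just the sum of the byte values, truncated."""
    return sum(data) & 0xFFFF

good = b"12"   # bytes 0x31, 0x32
bad  = b"21"   # the same bytes, swapped in transit
print(simple_sum(good) == simple_sum(bad))  # True - the error goes undetected
```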

The Fletcher Checksum added a checksum size to the process, which allows variations for different applications and improves performance.  If the groups are sized properly with respect to the Hamming Distance, the probability of missing a bit reversal is reduced to a very usable level for transferring large amounts of data, as well as for loading embedded system memory over the Internet; however, its performance at present gigabit network speeds is still questionable today.  The Quality of Service (QoS) of the Internet is quite high, and if the physical layer is properly installed the transfer errors are very low; in a gigabit network, even if a few packets have to be resent, it would not be noticed over time.
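Fletcher's improvement keeps a second running sum of the first sum, making the result position-dependent so that reordered bytes are caught; a minimal Fletcher-16 sketch:

```python
def fletcher16(data: bytes) -> int:
    """Fletcher-16: two running sums modulo 255.  s2 accumulates s1 at every
    byte, so the result depends on byte position, unlike a plain sum."""
    s1 = s2 = 0
    for byte in data:
        s1 = (s1 + byte) % 255
        s2 = (s2 + s1) % 255
    return (s2 << 8) | s1

print(hex(fletcher16(b"abcde")))               # 0xc8f0 - standard test vector
print(fletcher16(b"12") == fletcher16(b"21"))  # False - the byte swap is detected
```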

CRC Versus Checksum:

Checksum Overview:
A checksum is a simple runtime calculation over a group of bits to determine whether any errors occurred during a data transfer process.  There are several checksum functions or algorithms depending on the design goals, and they usually work on small datum sizes; common checksum algorithms are the parity byte or parity word, the modular sum, and position-dependent checksums.  Serial data transfer existed well before the innovation of disk drives, using the simple audio cassette player - the Commodore VIC-20 C2N-B Datasette.  The Datasette was so reliable that it was widely used in Europe while the USA was moving to the disk drive.  The first digital error checking was writing the data twice and checking it twice - not much for efficiency, however very accurate.  This progressed to a faster method for transferring digital data that incorporated the simple checksum methodology.  The original simple checksum was created by Motorola for the 6800 and progressed to the Intel Corporation IntelHex file format, as stated earlier.

Cyclic Redundancy Check CRC Overview:
The CRC (Cyclic Redundancy Check) algorithm was initially created in 1961 by W. Wesley Peterson.  The CRC size used in the Ethernet physical layer protocol is 32 bits (CRC-32); it was derived from the work of several researchers and was eventually published in 1975.  The CRC was created to work on blocks of binary bits, hence CRC-dd, where dd is the bit width of the check value.  The CRC is an integer algorithm - binary polynomial division with no fractions, where the remainder becomes the check value - which implies that it is not exact and has "bit" limitations.  In fact there has been a lot of research into CRC algorithms to select the optimal polynomial size and bit blocks for the Internet as well as for other serial protocols.  The tradeoff of using a CRC or checksum algorithm is that the data it is applied to is not changed in any way, unlike encryption and other algorithms that transform the original data and increase or decrease its length.  Before we get into the irregularities of the CRC, let's focus on the protocol we want to present: the Ethernet protocol, which uses CRC-32.

Hamming Distance, Polynomials And The CRC
Hamming Distance and the CRC polynomial methodology had been around for many years before the Internet that DARPA was developing in the 70's.  Today, after all the research and papers published, the conclusion is that there is no single CRC polynomial "silver bullet" that will yield the same performance for all applications.  This has created some challenges for the Internet standards groups in defining an error-detection code standard for Internet traffic, as well as some confusion for hardware and software developers.  Different CRC sizes are still in use today for specific applications; it is important to keep in mind as we present this series that CRC algorithms are generally tailored for the best performance in their application.  Several reference links to CRC papers on various Hamming Distances and CRC sizes versus performance characteristics will be at the end of this part of the series.  We will revisit Hamming Distance, CRCs, checksums and other algorithms, along with some core applications, when we present the security and encryption sections of this series.  The paper below, presented at the International Conference on Dependable Systems and Networks, covers the issue of CRC polynomial selection with real data.  We will return to these algorithms when we address the protocol software implementation.

Cyclic Redundancy Code (CRC) Polynomial Selection For Embedded Networks 2004 

Philip Koopman
ECE Department & ICES
Carnegie Mellon University
Pittsburgh, PA, USA
koopman@cmu.edu
Tridib Chakravarty
Pittsburgh, PA, USA
tridib@alumni.carnegiemellon.edu

Hamming Distance, CRC, Checksum Summary:
OK, to attempt to put into one's own words the vast amount of research and experimental data gathered over the years on checksums vs. CRCs would be an exercise in futility.  So instead, let's acknowledge the brilliant minds that have already performed these tasks with precision.  Review at your leisure to understand the unique variations of the two algorithms, and keep in mind that there are issues with much of the error-detection coding used on the Internet.

Performance of Checksums and CRCs over Real Data (1998)
Craig Partridge (Bolt Beranek and Newman, Inc),
Jim Hughes (Network Systems Corporation),
and Jonathan Stone (Stanford University)

Looking at systems in mathematics and definitions, multiplication is grouped addition and division is grouped subtraction; since the basis of the checksum is grouped addition, we see that CRC polynomial evaluation and checksums have similar properties.  The uniqueness of the CRC is that the polynomial division combines groups, giving a greater probability of detecting a bit reversal because of the Hamming Distance.  The conclusion is that detecting bit reversals during transmission will remain a challenging topic for anyone trying to create a foolproof methodology.  The interesting part of all this is the ability to detail the limitations of the algorithms and still implement them and generally rely on their functionality, as we have for the past 50 years and will continue to do until a better methodology is created.  We will be implementing various security protocols and policies once the hardware has been defined for the Core IoT Platform.  This covers the initial presentation of Internet IPv4 and IPv6 in general and gives us enough detail to select the hardware platform.  We will implement the checksum and CRC in a software module for the first pass through development; hardware implementations of the CRC algorithms will be addressed after the initial Core IoT Platform has been through a POD (Proof of Design).

Introduction to the Embedded Processor and CPU Marketplace:

A Brief History of the Embedded and CPU Major Players
The $65,535 question of the day: which one should we pick?  Let's back up for a moment to look at the embedded processor and CPU chip marketplace and the timeline of how it evolved.  Since this marketplace is huge we will only cover some of the mergers & acquisitions of the major players, to get a glimpse of the arena we are entering.  You may easily search the various manufacturers to see the transitions if you want to research this further.  OK, the major players back in time were Motorola®, Intel®, Cyrix®, AMD®, SMSC®, Microchip® and Texas Instruments®.

  • Motorola®'s processor division dominated the embedded and automotive embedded markets (68x00 processors) for many years, then decided to divest its embedded division, which became Freescale Semiconductor®.
  • Intel® entered the embedded market after 1986, when it moved the 80x86 CPU line to a newly formed embedded division; from that point on, after every silicon rollover, the older processors entered the Intel embedded arena.
  • AMD® holds a perpetual license to the x86 instruction set; Cyrix®, the creator of an Intel-compatible math co-processor chip, was acquired by National Semiconductor®.
  • Microchip® acquired SMSC® (Standard Microsystems Corporation) in 2012 and Atmel® in 2016.  Microchip markets the MIPS32/64® and ARM® Cortex™ processor lines.
  • NXP® acquired Freescale Semiconductor®, the heir to the original 68xx embedded processor lines.
  • Texas Instruments® acquired Burr-Brown in 2000 and National Semiconductor in 2011, including National Semiconductor's cross-license of the x86 instruction set for the Cyrix processor.

There are several other players that cross-license cores and put their own names on them, which we did not mention here for simplicity.  All of the above companies, and several younger players, incorporate the ARM Cortex line of processors, since ARM cross-licenses its processor technology.  This allows each of these manufacturers to incorporate the ARM processor technology along with its own unique interfaces and software development environment.  ARM also distributes its own development software, as well as training, for the processor line.

OK, that is a brief history of the embedded processor marketplace from 1971 to 2017, and it shows that nothing is as stable as we would like when developing hardware.  For the selection process we have to decide on a 32 or 64-bit processor, and on which manufacturer will keep producing that processor for several years.  Researching embedded processors on the Internet, we see that many are available; however, when we review the Last-Time-Buy (LTB) notices, we see that many are being discontinued by the end of 2018/2019.  That means we would have to redesign the platform before it had even been on the market for a year.  Silicon rollover is one of the major concerns in the hardware development process.  If you are in the market for the long term, you have to make long-term decisions to ensure cost effectiveness.  This is usually overlooked during the startup stages, since the main objective is to get the product to market first and create the market need and identity.

The Processors Dilemma:
Embedded Processors vs CPUs vs Micro-Controllers vs System on a Chip (SoC)
There are many players in the Embedded Processor Unit (EPU), System on a Chip (SoC) and Micro-Processor Controller (MPC) markets to choose from on a global level, and only a few at most in the Central Processor Unit (CPU) market; it is acronym-alphabet-soup city.  The real dilemma arises with silicon rollover, discontinued parts, and revisions that are not plug-and-play compatible, which create a headache for the supply chain industry and a nightmare for the design engineer.  To add insult to injury, manufacturers, like any other business, look at the bottom line and fail, or just plain neglect, to let the designer know when the Last-Time-Buy (LTB) date is, and generally hide it from their roadmaps.  Forcing a lifetime buy of any product is a very serious concern, not only for the expense of the LTB but for the expense and resources required to redesign the product.  So how do we handle this conundrum?  Answer: pick a stable processor, if there is one!  Manufacturers of embedded components and tools are different from those of standard commercial and consumer products.  Consumer products are designed to be replaced at the earliest tolerable point in the life cycle; commercial products look at about two to three years, or shortly after the in-stream revenue falls below the expected margin, which usually peaks by two years.

Going back in time, the Intel 80486, introduced in 1989, was the first Intel processor to incorporate a tightly woven pipeline architecture, and it remained in the embedded market for over 15 years before Intel officially stopped manufacturing the chip.  However, there are still a couple of manufacturers producing chips with the x86 pipeline process under one of the few remaining perpetual licenses.  The ones I found on the Internet that sell the chip, and not a fully assembled single-board computer, are the AMD® GEODE™ series, a system on a chip with a graphics engine, and the ZFMicro™ Corporation ZFx86™, a 100 MHz 486DX pipeline processor with an 80-bit FPU core, no graphics engine, a 33 MHz PCI bus and an IDE drive interface, all under 1 watt; both the ZFx86 and the GEODE series are SoCs.  The Microchip MIPS32/64® processor line is a RISC (Reduced Instruction Set Computer) M-Class processor core, and as of 2017 MIPS32/64® processors are still being used for residential gateways, routers and other Android/Linux-based embedded systems.  MIPS originated at MIPS Technologies back in the early days.  From this history, MIPS processors would be a likely choice for the IoT Core Platform.  The MIPS architecture is still a pipeline architecture, with added features that take up to five additional cycles to complete the fetch and execution while balancing the system clock against instruction performance.

Which Embedded Processor To Choose? 
From Part 4 it becomes apparent that we will have to use some type of processor for the IoT Core Platform to handle the communication and security functions.  Also, since we are looking at both conventional AES-256 and unconventional security methodologies for future devices, we will look at a separate processor for handling the security functions.  The dual-processor feature allows greater flexibility for future growth, and separate processors allow the data flow to be encrypted separately from the normal Internet communications.  Selecting the right processor(s) for the platform will determine the longevity and QoS of the platform.  The objective is to be able to control all the central processor functions externally.  Many CPUs take that control away from the designer: Intel, AMD and others incorporate an OS right inside the CPU chip that, at power-on test (POT), sets up all the driver connections to the peripherals.  This only allows the user to ride on top of it, which allows vulnerabilities, since much of this core is Intellectual Property protected by the manufacturer - and they have the right to protect their IP, just as we do.  The difference is that if we add IP to the platform on top of someone else's IP, we have no guarantee that we are the only ones controlling access: basic security policy 101, especially if you are going to connect to a network in any way.

Putting our top-level requirements on the table, a flexible IoT Core Platform with reasonable RAM and FLASH memory to execute anything from simple to complex applications does present a challenge.  There are two schools of thought when developing a platform: the first is to reduce the chip count to the smallest possible number up front and struggle with the selection of embedded peripherals that can be shared; the second is to start with a stand-alone processor, add the memory and peripherals selected for a proof of design, then start reducing the design for cost savings.  There are pros and cons to both approaches.  Our approach in this series is to create a functional block diagram using single blocks for each function to get a top-level (40,000-foot) view of all the functionality we would like for the platform.  From the functional block diagram we will look at how an embedded architecture can incorporate some of the blocks, and build the system platform from there.

For those who prefer running an OS like Linux or others, we will keep that in mind when selecting the embedded processor system; the sheer variety of the embedded marketplace has created a lot of work for the Linux development teams that port and certify Linux on the hundreds of embedded processors available today.  Remember, if we focus on the core functions we can use other technology to add functionality later, as long as we maintain control over the functional components for implementing security policies.

The Embedded Processor Selection:
We are not going to select an embedded processor at this time, since we have a lot more to discuss.  Embedded processors incorporate many features for handling applications, which makes the selection challenging.  The selection process for this series addresses several major features, with the purpose of educating everyone from the beginner to the seasoned product developer while sharing the knowledge needed to develop the IoT Core Platform.

  • Long life-cycle availability, at least five years, with roadmaps for revisions and support - typical for the embedded markets.
  • Common assembly language across 32/64-bit platforms.
  • Processor speeds of 100 MHz minimum.
  • Full control over the boot-up process of the CPU, memory & peripherals.
  • Free or low-cost IDE (Integrated Development Environment) platform if available.
  • Software: C compilers & macro assembler; Linux, WinCE, etc. OSs supported.
  • Many application examples with source code for support.
  • Libraries available to reduce software development time.
  • Evaluation demo PCB for the selected embedded processor.
  • Selection of different configurations using the same IDE platform software.
  • Large selection of physical chip packaging (LQFP, TQFP, BGA) - environmental reliability.

The IoT Core Platform Peripheral Requirements:

Peripheral vs. Functions
At this time we let our imagination run free a bit and pick the main peripherals and functions we would like to have in the IoT Core Platform.  Table 7.0, Core Platform Peripherals and Functions, below lists these requirements with a short description of each.  From this list a functional block diagram may be created.  When we look at the functional block diagram it looks similar to the embedded processors with a collection of peripherals available today.  We have a few choices at this point as to how we want to create the IoT Core Platform.  Selecting just a CPU core, interrupt controller, DRAM controller and FPU would put us in the SBC (Single Board Computer) arena.  Our intent here is to get a single chip that has many of the functions in the block diagram, then add the ones that will be required per application.  That keeps the chip count to a minimum.  However, it would be great to have some type of evaluation demo board on which we could test software and hardware for the peripherals we will be adding, depending on the application.

Peripheral Type: ISP Side - IoT Core Platform Peripherals

  • Ethernet Controller, 10/100/1Gbps: The ISP front-end WAN that is connected to the global Internet (RJ45).
  • WiFi Controller, 2.4GHz / 5GHz dual band: The ISP front-end WAN that is connected via WiFi - IPv4 / IPv6.

Peripheral Type: Local Area Network (LAN / ULA) Side - IoT Core Platform Peripherals

  • Ethernet Controller, 10/100/1Gbps: The local LAN/ULA network connections for hard-wired peripherals (RJ45) connected to the platform LAN / ULA local network.
  • WiFi Controller, 2.4GHz / 5GHz dual band: For all the local wireless WiFi peripherals connected to the platform LAN / ULA local network.

Peripheral Type: Local Area Network (LAN / ULA) Side - IoT Core Platform Serial Control Peripherals, Single Channel

  • SPI (Serial Peripheral Interface): Standard component and control interface. These are direct-connect devices that are separate from any network or wireless protocols.
  • I²C (Industrial and High-Speed) topologies: Standard component and control interface. These are direct-connect devices that are separate from any network or wireless protocols.
  • CAN (1-Wire / Differential): Standard automotive network peripherals.
  • RS-232/422 Serial I/O: General-purpose serial I/O devices.
  • Counter-Timer(s): Used for event triggering of peripherals.
  • Real Time Clock Output: A separate output from the system clock that is programmable to a specific interval.
  • Watchdog Timer, Interval Programmable: A separate timer for system integrity that ensures the system is running in the programmed sequence.

 
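To make the watchdog timer's role concrete, here is a small software model of an interval-programmable watchdog. It is a sketch of the kick-or-reset contract only; the names and the tick-driven model are our own assumptions, since real parts expose this through device-specific registers.

```c
#include <stdint.h>
#include <stdbool.h>

/* Software model of an interval-programmable watchdog.  Hypothetical
 * names; real hardware asserts a system reset instead of a flag. */
typedef struct {
    uint32_t timeout_ticks;   /* programmed interval */
    uint32_t counter;         /* counts up once per system tick */
    bool     expired;         /* latched when the interval elapses */
} watchdog_t;

void wdt_init(watchdog_t *w, uint32_t timeout_ticks)
{
    w->timeout_ticks = timeout_ticks;
    w->counter = 0;
    w->expired = false;
}

/* The application must call this regularly to prove it is alive. */
void wdt_kick(watchdog_t *w)
{
    w->counter = 0;
}

/* Driven by the timer interrupt; expires if the kick never arrives. */
void wdt_tick(watchdog_t *w)
{
    if (++w->counter >= w->timeout_ticks)
        w->expired = true;
}
```

If the main loop hangs and stops kicking, the counter runs out and the platform is forced back to a known state, which is exactly the system-integrity function described above.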

Peripheral Type: Local Area Network (LAN / ULA) Side - IoT Core Platform Serial Control Peripherals, Multi-Channel

  • Bluetooth 2.4GHz: For all the local Bluetooth devices connected to the platform LAN / ULA local network.
  • USB 3.x / 2.0: Standard USB interface with the High-Speed option, USB 2.0 (480 Mbps) minimum.
  • Parallel Bus (16/32-bit Data / 16-bit Address): A separate data bus for parallel-type peripheral connections; allows any type of direct connection to the platform. (Optional)
  • Analog Inputs, 16-bit, 8 channels: For monitoring environmental and platform parameters directly (0 ± 15 Vdc standard).
  • Analog Outputs, 16-bit, 8 channels: For controlling, adjusting and calibrating sensors for environmental parameters directly (0 ± 15 Vdc standard).
  • RTD Analog Signal Conditioning Front End: For platinum RTDs for very accurate temperature measurements. Separate peripheral, 8 channels.
  • Thermocouple Analog Signal Conditioning: For standard thermocouple temperature measurements. Separate peripheral, 8 channels.

Peripheral Type: Core Processor Section - IoT Core Platform

  • 32-bit pipelined MIPS M-Class Processor: MIPS M-Class processors use the same instruction set for both 32/64-bit; 32-bit is efficient for the IoT Core Platform.
  • FPU, Single/Double Precision: Floating Point Unit - single / double 64-bit precision.
  • Interrupt Controller, 8/16/32 channels: Interrupt controller to generate interrupt requests to the selected processor.
  • DMA Controller for RAM Interface: Direct Memory Access controller to allow peripherals direct access to the RAM interface.
  • EEPROM Interface for external parameter storage: A separate EEPROM storage area for platform parameters, accessible only through the security interface; serial/parallel NAND to handle up to 16 GigaBytes (128 Gbits).
  • RAM Interface for external data buffering: A separate interface that allows the connected peripherals direct high-speed access to the memory for data collection; static RAM up to 32 MegaBytes.

 

 

Table 7.0   Core IoT Platform Peripherals and Functions
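One way to carry Table 7.0 into firmware is a static capability table that the boot code can walk when initializing the platform. The sketch below is illustrative only: the enum values, struct layout and entries are our own assumptions, not any vendor's API, and they list just a few of the peripherals above.

```c
#include <stdint.h>

/* Bus classes mirroring the groupings of Table 7.0 (hypothetical names). */
typedef enum {
    BUS_ISP_SIDE,       /* WAN-facing: Ethernet / WiFi to the ISP        */
    BUS_LAN_ULA_SIDE,   /* local network: Ethernet / WiFi / Bluetooth    */
    BUS_SERIAL_CTRL,    /* SPI, I2C, CAN, RS-232/422, analog channels    */
    BUS_CORE            /* CPU-local: DMA, interrupt controller, memory  */
} bus_class_t;

typedef struct {
    const char *name;   /* short identifier used by the boot code        */
    bus_class_t bus;    /* which side of the platform it attaches to     */
    uint8_t     channels;
} peripheral_desc_t;

/* A few representative entries from Table 7.0. */
static const peripheral_desc_t platform_peripherals[] = {
    { "eth-wan",  BUS_ISP_SIDE,     1 },
    { "wifi-wan", BUS_ISP_SIDE,     1 },
    { "eth-lan",  BUS_LAN_ULA_SIDE, 1 },
    { "spi",      BUS_SERIAL_CTRL,  1 },
    { "i2c",      BUS_SERIAL_CTRL,  1 },
    { "adc16",    BUS_SERIAL_CTRL,  8 },
    { "dma",      BUS_CORE,         1 },
};

/* Count the peripherals attached to a given bus class. */
static unsigned count_on_bus(bus_class_t bus)
{
    unsigned n = 0;
    for (unsigned i = 0;
         i < sizeof platform_peripherals / sizeof platform_peripherals[0];
         i++)
        if (platform_peripherals[i].bus == bus)
            n++;
    return n;
}
```

Keeping the peripheral inventory in one table like this makes it easy to add or drop blocks per application, which is the intent behind the block-diagram approach.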


Figure 7.2   Core IoT Platform Functional Block Diagram

OK, the functional block diagram shows many peripherals attached to the main bus; it is not very difficult to create a block diagram like this considering the number of features we would like to see in the IoT Core Platform.  The two items that should stand out are the 32-bit Parallel Interface Controller and the Custom User Interface Controller.  If we were to remove all the other peripherals, the 32-bit Parallel Interface Controller and the Custom User Interface Controller would still allow us to add just about any type of peripheral that can be imagined within the boundaries of the processor.

I am not a big fan of wireless in a process control area for many reasons that we will cover when we get into the security and software development parts of the series.  

This is the first conceptual block diagram presentation of the Core IoT Platform; as we continue the series we will apply any changes to the platform as required for the applications and optimization.


Reference Links for Part 7:
The majority of the Internet scheme and protocol information is from a few open public information sources on the net: the IETF (Internet Engineering Task Force) RFCs, which explain details on the application of the protocols used for both IPv4 and IPv6 as well as experimental protocols for the next-generation Internet, and the Network Sorcery web site. The remainder of this series on the IoT platform will be from BASIL Networks MDM (Modular Design Methodology) applied with the Socratic teaching method.  Thank You - expand your horizon - Sal Tuzzo

Network Sorcery:  http://www.networksorcery.com
The Internet Engineering Task Force:  IETF - RFC references
Wikipedia  https://en.wikipedia.org/wiki/Main_Page

The high-level expert links for the CRC and Checksum are listed below; there are so many Internet references on this subject that listing them all would take several pages and is not the intent of this series.

PDF  The iSCSI CRC32C Digest and the Simultaneous Multiply and Divide Algorithm, January 30, 2002 - Luben Tuikov & Vicente Cavanna

PDF  Cyclic Redundancy Code (CRC) Polynomial Selection For Embedded Networks, 2004 - Philip Koopman, Tridib Chakravarty

PDF  Performance of Checksums and CRCs over Real Data, 1998 - Craig Partridge, Jim Hughes, Jonathan Stone

TEXT  Computing the Internet Checksum, RFC 1071
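The references above cover the theory in depth; as a concrete illustration, here is a minimal, unoptimized C sketch of the two algorithms they discuss: the bitwise Ethernet (IEEE 802.3) CRC-32 and the RFC 1071 Internet checksum. The function names are our own, and production code would normally use a table-driven CRC for speed.

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 as used by Ethernet (IEEE 802.3): reflected polynomial
 * 0xEDB88320, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
uint32_t crc32_eth(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)          /* one bit at a time */
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* RFC 1071 Internet checksum: one's-complement sum of 16-bit words,
 * with end-around carry folding, complemented at the end. */
uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len  -= 2;
    }
    if (len)                       /* odd trailing byte, zero-padded */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)              /* fold the carries back in */
        sum = (sum & 0xFFFFu) + (sum >> 16);
    return (uint16_t)~sum;
}
```

The CRC catches burst errors that a simple additive checksum misses, which is why Ethernet uses the CRC-32 at the frame level while IP, TCP and UDP use the cheaper RFC 1071 sum in software.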


Part 8  Preliminary Outline:

  • Embedded Processor Technology, CPU Chips Technology,  FPGA System on a Chip - How much control do we really have?
  • Types of processors, MIPS, RISC, CISC, Single Core, Multi-core etc.
  • Boot up Initialization processes and Vulnerabilities using embedded processors and independent CPU chips.
  • Secure Boot processes and Embedded Cryptographic Firmware Engines - A Closer Look.
  • An introduction to support chips for processors,  NorthBridge (MCH, GMCH) /  SouthBridge support chips (ICH),  SoC Platform Controller Hub
  • Security protocols and how they play a role in initialization.
  • Selection of the Core IoT Platform Processor(s) and peripherals.

Publishing this series on a website or reprinting is authorized by displaying the following, including the hyperlink to BASIL Networks, PLLC either at the beginning or end of each part.
BASIL Networks, PLLC - Internet of Things (IoT) -Security, Privacy, Safety-The Information Playground Part-7 The Ethernet Protocol(s): Lets Sync Up- (November 23, 2017)

For Website Link: cut and paste this code:

<p><a href="https://www.basilnetworks.com/Blog/index.php?op=ViewArticle&articleId=12&blogId=1" target="_blank"> BASIL Networks, PLLC - Internet of Things (IoT) -Security, Privacy, Safety-The Information Playground Part-7 IPv4, IPv6 Protocols, Network Transport & Applications: <i>Continued (Sept 24, 2017)</i></a></p>

 

Sal (JT) Tuzzo - Founder CEO/CTO BASIL Networks, PLLC.
Sal may be contacted directly through this site's Contact Form or
through LinkedIn

Copyright© 1990-2017 BASIL Networks, PLLC. All rights reserved