Designed & Made
in America (DMA)

BASIL Networks Blog BN'B | 11

23 Nov, 2017

Internet of Things (IoT) -Security, Privacy, Safety-Platform Development Project Part-7

Part 7: IPv4, IPv6, Network Protocols - Network, Transport & Application: Continued
The Ethernet Protocol(s) CRC-32 and Checksums

Design is a way of life, a point of view.  It involves the whole complex of visual communications: talent, creative ability, manual skill, and technical knowledge.  Aesthetics and economics, technology and psychology are intrinsically related to the process. - Paul Rand


Quick review to set the atmosphere for Part 7
From the previous Internet of Things Part-1 through Part-6:

We presented the first part of the Ethernet Protocol hardware characteristics, the software frame structure and how it identifies devices on its network.
The Ethernet Physical Layer incorporates the following capabilities and features:

What we want to cover in Part 7:

The Checksum Algorithms and the Ethernet Protocol Cyclic Redundancy Check (CRC-32)
In this part of the series we present the error-detection section of the protocols for the Core IoT Platform: the checksum and the Ethernet protocol CRC-32.  We will start with some ground-floor history of error detection in serial data streams and proceed to the various algorithms that have been developed for the Internet.  We will then present an introduction, with a bit of history, to the embedded processor, microcontroller and CPU marketplace.  To close this part we will present an overview of the Core IoT Platform requirements in order to create an IoT Platform functional block diagram.  The outline is listed below.

Enjoy the series.

Let's Get Started: A "BIT" of CRC and Checksum History
Serial data transfer, from its conception to the present day, has had the task of ensuring that the block of bits being transported does not contain any changed bits after transfer.  Before the Internet, serial communications were part of telecommunications and information theory, coding theory, cryptography and other fields.  What the innovators looked at was how to detect bit reversals in a block of bits while ignoring the actual format of the data being transferred.  All that was of interest was that there were no bad bits in the block of bits, period.  To test for this we have to look at how many bit reversals could exist in the source block of bits and devise a methodology to detect any of them.  If no reversals are detected, it is assumed that the block of bits was transferred correctly, with no errors.  This simple explanation has created a "bit" of confusion in the serial digital world that still exists today.  The research into bit reversals leads us into the theory of how several checksum methodologies evolved and where to place them in the block of data bits for the best performance.  Checksum algorithms vary across the world of processors and programming: there is a set of simplified checksums for embedded processors that handle smaller transfer sizes, up through the Internet algorithms that handle gigabits of data continually.

The object of the theory is to detect errors in transmission using some sort of error-detection code sequence, introduced back in the 1940's by Richard W. Hamming, who modernized the development of error-correcting codes.  The theory evolved quickly through the contributions of other theorists, and the central measure became known as the Hamming Distance.  That is our introduction to checksums and CRCs by way of error-detecting and error-correcting codes incorporating Hamming Distance theory.

Hamming Distance:
The Hamming Distance (HD) is defined as the number of bit transitions required to make two strings of identical length the same.  Hamming Distance is an interesting theory when applied to comparing blocks of bits, as we will see shortly.  It is also an interesting tool when applied to cryptography and ciphers: Hamming Distance is a starting point for cracking a cipher based on language, region, and bit changes within blocks.  There are several other brute-force methods as well, which will be discussed in a future blog on cryptography.  OK, so the question: what does Hamming Distance have to do with checksums and CRCs?  The larger the number of bit reversals between the block of bits being tested and the reference block of bits, the more difficult it is to guarantee that a bit error will be detected.  Let's take the checksum and the CRC separately and look at the methodology of each and how it detects bit errors.
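As a concrete illustration, here is a minimal Python sketch of the Hamming Distance between two equal-length byte strings; the function name is ours, for illustration only:

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits (bit reversals) between two
    equal-length byte strings."""
    if len(a) != len(b):
        raise ValueError("strings must be the same length")
    # XOR leaves a 1 in every bit position that differs; count the 1s.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))
```

For example, the ASCII digits '0' (0x30) and '9' (0x39) differ in two bit positions, so their Hamming Distance is 2.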

There are many published documents detailing different methodologies for testing a block of bits; so many, in fact, that some sort of standard had to be set for the Internet in order to ensure data-transfer integrity, with all systems using the same methodology at both ends for consistency throughout the networks.  As we stated in previous parts of this series, as long as you follow the protocols in place the data will be routed from source to destination; however, there is no guarantee that the data will not contain bit errors, "and" that is the main task of the CRC and checksum algorithms.  The user data extraction is up to the application, which encodes/decodes the user's data and should also contain some sort of data-integrity checking process.  So with that said, we will look at the protocols being presented, namely Ethernet, IP, TCP, and UDP, and the checking methodologies used to ensure data-transfer integrity.

There is an intense amount of research on testing serial data bits and on grouping bits into blocks of various sizes in the attempt to create the best process for identifying a single bit error in a large block of data during transfer.  We will give some references at the end of this part; one could make a career of error-detecting codes, but that is not the intent here, just an overview with enough understanding to implement the algorithms for our IoT Core Platform development.  Many web pages give source code for the Hamming distance function in several languages.
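For the IP, TCP and UDP headers mentioned above, the checking methodology is the 16-bit one's-complement checksum defined in RFC 1071.  A minimal Python sketch of that calculation:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 16-bit one's-complement checksum used by IP, TCP and UDP."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    total = (total & 0xFFFF) + (total >> 16)      # final fold for edge cases
    return ~total & 0xFFFF
```

Verification at the receiver is the same operation: summing a header with its correct checksum field in place makes the function return 0.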

We often see, in the protocol headers used on the Internet, fields labeled "Checksum", Frame Check Sequence (FCS), CRC and others; these fields may hold a checksum, a CRC, or some other error-checking value unique to the protocol.  To clear this up we will look at the checksum functions and the Cyclic Redundancy Check (CRC) algorithm, which are the main functions used for many of the protocols, specifically for the initial IoT Core Platform.  We will return to the checksums and CRC later in the series when we address the programming of selected protocols and the OSI model.

Hamming Distance And The Checksum:
Here we are now, 20 years later, into the 70's and still wondering how best to incorporate error-detecting and error-correcting codes for large data transfers.  Let's take a slow walk first to see how bit-reversal error detection evolved.  The Hamming Distance theory set its sights on reducing undetected bit reversals in a segmented block of bits, ensuring that if a transfer error or bit reversal happened it would be detected.  This led to the simplest proof of the Hamming Distance theorem when it was implemented in an ASCII hex file data format representing bytes of data.  This simple checksum algorithm was introduced by Motorola for the 6800 microprocessor series and given the names SRECORD, SREC, S19, S28, S37 file formats, commonly used to program flash memory; the loading format was not very fast, but it was very efficient at the time.  Intel Corporation® reviewed and revised the file format for use on the 8008 CPU in the early 70's and eventually created a new specification for the Intel processor line in the late 80's, labeling it the IntelHex File format (Jan 6, 1988) for the entire x86 line of processors.  OK, enough of ancient history; although roots are routinely ignored, they are important: the tree would not have grown without them.

The IntelHex file checksum is the interesting section of the IntelHex File format that we want to cover here.  It is defined over all the data bytes in a single line, as shown in Figure 7.0 below.  A line consists of the [Start Code (":") + Byte Count + Address + Record Type + Data] fields, all but the start code represented in ASCII hex format.  Each eight-bit byte is represented by two ASCII hex characters (0-9, A-F).  With a one-byte count field the data field holds at most 255 bytes, for a maximum line length of [1 + 1*2 + 2*2 + 1*2 + 255*2] = 519 ASCII characters, plus 2 characters for the checksum, which is appended to the line after calculation.

The checksum is the two's complement of the least significant byte of the sum of the byte values encoded by the ASCII hex character pairs in the line, excluding the start character, the ASCII colon ":".  So how is this effective?  Let's look at Figure 7.1, the hex number notation, and the actual bit patterns of the hex digits 0-9 and A-F: they are 0x30-0x39 and 0x41-0x46 respectively, or 00110000-00111001 and 01000001-01000110 in binary.  As we see, the number of transitions needed to make any two of these binary hex-digit strings the same is always less than 5.  This is important for checksums, since the lower the HD number, the better the chance of catching a bit error in a sequence.  Upper-case A-F is used since bits 5, 6 and 7 never change within each group, leaving only 5 bits to test.  Bit 4 gives a minimum HD of 1, and detecting a bit reversal only requires the checksums to be different.  This is the reason the simple IntelHex File checksum is effective; the algorithm would have a far greater chance of missing a bit reversal if all 255 possible reversal patterns of a full byte were allowed.
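The calculation above can be sketched in a few lines of Python; the string passed in is the ASCII hex text after the start colon, excluding the final checksum pair (a hypothetical helper, not part of any Intel tool):

```python
def intelhex_checksum(fields_hex: str) -> int:
    """Two's complement of the least significant byte of the sum of the
    byte values (count, address, type, data) encoded by the hex pairs."""
    total = sum(bytes.fromhex(fields_hex))  # decode pairs to bytes, sum them
    return (-total) & 0xFF                  # two's complement, keep the LSB
```

For the well-known sample record :10010000214601360121470136007EFE09D2190140, the fields 10010000214601360121470136007EFE09D21901 yield the appended checksum 0x40.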

The IntelHex File format is one of the least efficient ways to transfer data and would totally burden the Internet; however, for loading an embedded processor's program memory, FPGAs, CPLDs, EEPROMs etc. it is efficient and accurate, which is why several embedded processor manufacturers incorporated it into their Integrated Development Environment (IDE) tools, and it is still widely used today.  Since the IntelHex File format applies only to a single text line with a data block of 255 bytes [255 x 2 hex characters] or less, the algorithm is not applicable to Internet data-transfer checking, which must cover large amounts of data.

Figure 7.0  Intel Hex File Format Example

Figure 7.1  Hex Numbers 0-9 A-F in Binary Notation

Take into account that what we are looking for is actual bit reversals during the transfer of data from source to destination through a medium.  With that in mind, let's look at the data while visualizing the data blocks shown in Figure 7.2.

Figure 7.2  Hex Numbers 0-9 A-F  Sum of Blocks Calculated Checksum

For this example we are only using the ASCII hex bytes for simplicity; since each character byte is at most 0x46, the resultant G(x) sum for a maximum-length line fits within a 17-bit sum register.  Looking at the bit patterns in Figure 7.1 and the blocks in Figure 7.2, in order to get the same checksum from bit reversals, two consecutive blocks would have to suffer specific reversals that subtract from one block and add to the other to give the same sum.  The probability of this happening is very small, due to the uniqueness of the ASCII hex bit patterns, while the probability of detecting one to several bit reversals is very good.  This is why the ASCII hex file format has high reliability.  If all eight bits of each byte were used, the reliability would be reduced, and it would be difficult to maintain as the total number of blocks increased.  These are the basics of understanding bit reversals and the front door to error-detecting and error-correcting code sequences.

Moving forward to the late 70's: John G. Fletcher (1934-2012) of Lawrence Livermore Labs created a flexible checksum algorithm that incorporates a variable block size, and it was given the name the Fletcher checksum after its creator.  This added more credibility to the error-detection process, and it is used throughout the Internet today.  However, keep in mind that all checksum algorithms built on a simple sum have their limitations, since they reduce a block of bits to a result using only addition and XOR.  Adding two byte/word bit strings is a simple, non-CPU-taxing process and the fastest way to obtain the sum of a block of bits for a simple checksum algorithm; however, as stated, it does have its limitations.

The Fletcher checksum added a checksum size parameter to the process, which allows variations for different applications and improved performance.  If the groups are sized properly for the desired Hamming Distance, the probability of missing a bit reversal is reduced to a very usable level for transferring large amounts of data, as well as for loading embedded system memory over the Internet; however, its performance at present gigabit network speeds is still questionable today.  The Quality of Service (QoS) of the Internet is quite high, and if the physical layer is properly installed the transfer error rate is very low; in a gigabit network, even if a few packets have to be resent, it would not be noticed over time.

CRC Versus Checksum:

Checksum Overview:
A checksum is a simple runtime calculation over a group of bits to determine whether any errors occurred during a data transfer.  There are several checksum functions or algorithms, depending on the design goals, and they usually work on small datum sizes; common forms are the parity byte or parity word, the modular sum, and position-dependent checksums.  Long before the innovation of disk drives, serial data transfer used the simple audio cassette player, e.g. the Commodore Vic20 C2N-B Datasette.  The Datasette was reliable enough to be widely used in Europe while the USA was moving to the disk drive.  The first digital error checking was writing the data twice and checking it twice: not much for efficiency, but very accurate.  This progressed to a faster method of transferring digital data that incorporated the simple checksum methodology.  The original simple checksum was created by Motorola for the 6800 and progressed to the Intel Corporation IntelHex File format as stated earlier.
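The two simplest forms named above can each be sketched in a few lines of Python (illustrative helpers, assuming 8-bit quantities):

```python
def parity_byte(data: bytes) -> int:
    """Parity-byte (longitudinal) checksum: XOR of all bytes; it changes
    whenever an odd number of reversals hits any one bit column."""
    result = 0
    for byte in data:
        result ^= byte
    return result

def modular_sum(data: bytes) -> int:
    """Modular-sum checksum: two's complement of the 8-bit byte sum, so
    that the data plus its checksum sums to zero modulo 256."""
    return (-sum(data)) & 0xFF
```

For example, modular_sum(b"\x01\x02\x03") is 0xFA, and (0x01 + 0x02 + 0x03 + 0xFA) & 0xFF is 0, which is how a receiver verifies the block.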

Cyclic Redundancy Check (CRC) Overview:
The CRC (Cyclic Redundancy Check) algorithm was initially created in 1961 by W. Wesley Peterson.  The 32-bit CRC (CRC-32) used in the Ethernet physical layer protocol was derived from the work of several researchers and was eventually published in 1975.  The CRC was created to work on blocks of binary bits, hence the name CRC-dd, where dd is the size of the check value in bits.  The CRC is an integer (modulo-2) algorithm, with no fractions, which implies that it is not exact and has "bit" limitations.  In fact there has been a lot of research into CRC algorithms to select the optimal polynomial sizes and bit-block sizes for the Internet as well as for other serial protocols.  The appeal of a CRC or checksum algorithm is that the data it is applied to is not changed in any way, unlike encryption and other algorithms that transform or add to the original data and increase or decrease its length.  Before we get into the irregularities of the CRC, let's focus on the protocol we want to present: the Ethernet protocol, which uses a CRC-32.
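A minimal bit-at-a-time sketch of the Ethernet CRC-32 in Python, using the reflected form of the IEEE 802.3 polynomial; table-driven and hardware versions are far faster, this is for clarity only:

```python
def crc32_ethernet(data: bytes) -> int:
    """Ethernet (IEEE 802.3) CRC-32: reflected polynomial 0xEDB88320,
    initial register and final XOR of 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):                    # one division step per bit
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF
```

The standard check value for the nine ASCII bytes "123456789" is 0xCBF43926, a quick sanity test for any CRC-32 implementation.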

Hamming Distance, Polynomials And The CRC
Hamming Distance and the CRC polynomial methodology had been around for many years before the Internet that DARPA was developing in the 70's.  Today, after all the research and published papers, the conclusion is that there is no single CRC polynomial "silver bullet" that will yield the same performance for all applications.  This has created challenges for the Internet standards groups in defining an error-detection code standard for Internet traffic, as well as some confusion for hardware and software developers.  Different CRC sizes are still being used today for specific applications; it is important to keep in mind as we present this series that CRC algorithms are generally tailored for the best performance in their application.  Several reference links to papers on various Hamming Distances and CRC sizes versus performance characteristics will be given at the end of this part of the series.  We will revisit Hamming Distance, CRCs, checksums and other algorithms, along with some core applications, when we present the security and encryption sections of this series.  The paper below, presented at the International Conference on Dependable Systems and Networks, covers the issue of CRC polynomials with real data.  We will return to these algorithms when we address the protocol software implementation.

Cyclic Redundancy Code (CRC) Polynomial Selection For Embedded Networks (2004)
Philip Koopman, ECE Department & ICES, Carnegie Mellon University, Pittsburgh, PA, USA
Tridib Chakravarty, Pittsburgh, PA, USA

Hamming Distance, CRC, Checksum Summary:
To attempt to put in one's own words the vast amount of research and experimental data gathered over the years on checksums versus CRCs would be an exercise in futility.  So instead, let's acknowledge the brilliant minds that already performed these tasks with precision.  Review the paper below at your leisure to understand the unique variations of the two algorithms, and keep in mind that there are known issues with much of the error-detection coding used on the Internet.

Performance of Checksums and CRCs over Real Data (1998)
Craig Partridge (Bolt Beranek and Newman, Inc),
Jim Hughes (Network Systems Corporation),
and Jonathan Stone (Stanford University)

Looking at the basic definitions in mathematics, multiplication is repeated addition and division is repeated subtraction; since the basis of the checksum is repeated addition, we see that CRC polynomial evaluation and the checksum have similar properties.  The uniqueness of the CRC is that the polynomial division mixes groups of bits together, giving a greater probability of detecting a bit reversal because of the Hamming Distance.  The conclusion is that error detection for bit reversals during transmission will remain a challenging topic for anyone attempting a foolproof methodology.  The interesting part of all this is the ability to detail the limitations of the algorithms and still be able to implement them and generally rely on their functionality, as we have for the past 50 years and will continue to do until a better methodology is created.  We will implement various security protocols and policies once the hardware has been defined for the Core IoT Platform.  This covers the initial presentation of the Internet IPv4 and IPv6 in general and gives us enough detail to select the hardware platform.  We will implement the checksum and CRC in a software module for the first pass through development; hardware implementation of the CRC algorithms will be addressed after the initial Core IoT Platform has been through a POD (Proof of Design).

Introduction to the Embedded Processor and CPU Marketplace:

A Brief History of the Embedded and CPU Major Players
The $64,535 question of the day: which one should we pick?  Let's back up for a moment to look at the embedded processor and CPU chip marketplace and the timeline of how it evolved.  Since this marketplace is huge, we will only cover some of the mergers and acquisitions of the major players to get a glimpse of the arena we are entering.  You may easily search the various manufacturers if you want to research the transitions further.  OK, the major players back in time were Motorola®, Intel®, Cyrix®, AMD®, SMCC®, Microchip® and Texas Instruments®.

There are several other players that cross-license cores and put their own names on them, which we did not mention here for simplicity.  All of the above companies, and several younger players, incorporate the ARM Cortex line of processors, since ARM allows cross-licensing of its processor technology.  This lets each of these manufacturers take the ARM processor technology and add their own unique interfaces and software development environments.  ARM also distributes its own development software as well as training for the processor line.  OK, that is a brief history of the embedded processor marketplace from 1971 to 2017, and it shows that nothing is as stable as we would like it to be when developing hardware.

For the selection process we have to decide on a 32- or 64-bit processor, and on a manufacturer that will be producing that processor for several years.  Researching embedded processors on the Internet, we see that many are available; however, when we review the Last-Time-Buy (LTB) notices we see that many are being discontinued by the end of 2018/2019.  That means we would have to redesign the platform before it had even been on the market for a year.  Silicon rollover is one of the major concerns in the hardware development process.  If you are in the market for the long term, you have to make long-term decisions to ensure cost effectiveness.  This is usually overlooked during the startup stages, since the main objective is to get the product to market first and create the market need and identity.

The Processors Dilemma:
Embedded Processors vs CPUs vs Micro-Controllers vs System on a Chip (SoC)
There are many players in the Embedded Processor Unit (EPU), System on a Chip (SoC) and Micro-Processor Controller (MPC) markets to choose from on a global level, and only a few at most in the Central Processor Unit (CPU) market; it is acronym-alphabet-soup city.  The real dilemma arises with silicon rollover: discontinued parts and revisions that are not Plug'N'Play compatible, which create a headache for the supply-chain industry and a nightmare for the design engineer.  To add insult to injury, manufacturers, like any other business, look at the bottom line and fail, or just plain neglect, to let the designer know when the Last-Time-Buy (LTB) date is, and generally hide it from their roadmaps.  Forcing a life-time buy of any product is a very serious concern, not only for the expense of the LTB but for the expense and resources required to redesign the product.  So how do we handle this conundrum?  Answer: pick a stable processor, if there is one!  Manufacturers of embedded components and tools are different from those of standard commercial and consumer products.  Consumer products are designed to be replaced at the earliest tolerable life cycle; commercial products look at about two to three years, or shortly after the instream revenue falls below the expected margin, which usually peaks out by two years.

Going back in time: the original Intel 80486, introduced in 1989, was the first processor to incorporate the tightly woven pipeline architecture, and it remained in the embedded market for over 15 years before Intel officially stopped manufacturing the chip.  However, a couple of manufacturers still produce chips with the x86 pipeline process under one of the few remaining perpetual licenses.  The ones I found on the Internet that sell the chip, and not a fully assembled single-board computer, are the AMD® Corporation GEODE™ series, a system on a chip with a graphics engine, and the ZFMicro™ Corporation ZFx86™, a 100MHz 486DX pipeline processor with an 80-bit FPU core, no graphics engine, a 33MHz PCI bus and an IDE drive interface, all under 1 watt; both the ZFx86 and the GEODE series are SoCs.  The Microchip MIPS32/64® processor line is a RISC (Reduced Instruction Set Computer) M-Class processor core, and as of 2017 MIPS32/64® processors are still being used for residential gateways, routers and other Android/Linux OS based embedded systems.  MIPS originally comes from MIPS Technologies back in the early days.  From this history, MIPS processors would be the likely choice for the IoT Core Platform.  The MIPS architecture is still a pipeline architecture, with added features that take up to five additional cycles to complete the fetch and execution while balancing the system clock against instruction performance.

Which Embedded Processor To Choose?
From Part 4 it is apparent that we will have to use some type of processor in the IoT Core Platform to handle the communication and security functions.  Also, since we are looking at both conventional AES256 and unconventional security methodologies for future devices, we will also look at a separate processor for handling the security functions.  The dual-processor approach allows greater flexibility for future growth, and separate processors allow the data flow to be encrypted separately from the normal Internet communications.  Selecting the right processor(s) for the platform will determine the longevity and QoS of the platform.  The objective is to be able to control all the central processor functions externally.  Many CPUs take that control away from the designer; Intel, AMD and others incorporate an OS right inside the CPU chip that runs at Power On Test (POT) and sets up all the driver connections to the peripherals.  This only allows the user to ride on top of it, and thus allows vulnerabilities, since much of this core is Intellectual Property protected by the manufacturer; they have the right to protect their IP just as we do.  The difference is that if we add IP to the platform on top of someone else's IP, we have no guarantee that we are the only one controlling access: basic security policy 101, especially if you are going to connect to a network in any way.

Putting our top-level requirements on the table, a flexible IoT Core Platform with enough RAM and FLASH memory to execute simple through complex applications does present a challenge.  There are two schools of thought when developing a platform: the first is to reduce the chip count to the smallest possible number up front and struggle with the selection of embedded peripherals that can be shared; the second is to start with a stand-alone processor, add the memory and peripherals selected for a proof of design, and then start reducing the design for cost savings.  There are pros and cons to both approaches.  Our approach in this series is to create a functional block diagram using a single block for each function to get a top-level (40,000-foot) view of all the functionality we would like in the platform.  From the functional block diagram we will look at how an embedded architecture can incorporate some of the blocks and build the system platform from there.

For those who prefer running an OS like Linux or others, we will keep that in mind when selecting the embedded processor system.  The embedded marketplace has created a lot of work for the Linux development teams that certify Linux OS implementations across the hundreds of embedded processors available today.  Remember, if we just look at the core functions, we can use other technology to add functionality later, as long as we maintain control over the functional components for implementing security policies.

The Embedded Processor Selection:
We are not going to select an embedded processor at this time since we have a lot more to discuss.  Embedded processors incorporate many features for handling applications which makes the selection challenging.  The selection process for this series addresses several major features for the purpose of education from the beginner to the seasoned product developer to share knowledge in developing the IoT Core platform.

  • Long life-cycle availability, at least five years, with roadmaps for revisions and support; typical for the embedded markets.
  • Common assembly language across 32/64 bit platforms.
  • Processor speeds of 100 MHz minimum.
  • Full control over the boot up process of the CPU, Memory & Peripherals.
  • Free or low cost  IDE (Integrated Development Environment) platform if available.
  • Software: C Compilers &  Macro Assembler, Linux, WinCE etc. OS's supported.
  • Many application examples with source code for support.
  • Libraries available to reduce software development time.
  • Evaluation demo PCB for the selected embedded processor.
  • Selection of different configurations using the same IDE platform software.
  • Large selection of physical chip packaging, LQFP, TQFP, BGA - environment reliability.

The IoT Core Platform Peripheral Requirements:

Peripheral vs. Functions
At this time we let our imagination run free a bit and pick the main peripherals and functions we would like to have in the IoT Core Platform.  Table 7.0, Core Platform Peripherals and Functions, below lists these requirements with a short description of each.  From this list a functional block diagram may be created.  When we look at the functional block diagram, it looks similar to the embedded processors with a bundle of peripherals available today.  We have a few choices at this point as to how we want to create the IoT Core Platform.  Selecting just a CPU core, interrupt controller, DRAM controller and FPU would put us in the SBC (Single Board Computer) arena.  Our intent here is to pick a single chip that has many of the functions in the block diagram, then add the ones required per application; that keeps the chip count down to a minimum.  It would also be a great help to have some type of evaluation demo board on which we could test software and hardware for the peripherals we will be adding, pending the application.

ISP Side: IoT Core Platform Peripherals

  • Ethernet Controller, 10/100/1Gbps: the ISP front-end WAN connection to the global Internet (RJ45).
  • WiFi Controller, 2.4GHz / 5GHz dual band: the ISP front-end WAN connection via WiFi (IPv4 / IPv6).


Local Area Network (LAN / ULA) Side: IoT Core Platform Peripherals

  • Ethernet Controller, 10/100/1Gbps: the local LAN/ULA network connections for hard-wired peripherals (RJ45) connected to the platform LAN / ULA local network.
  • WiFi Controller, 2.4GHz / 5GHz dual band: for all the local wireless WiFi peripherals connected to the platform LAN / ULA local network.


Local Area Network (LAN / ULA) Side: IoT Core Platform Serial Control Peripherals, Single Channel

  • SPI (Serial Peripheral Interface): standard component and control interface.  These are direct-connect devices, separate from any network or wireless protocols.
  • I²C (Industrial and High-Speed) topologies: standard component and control interface.  These are direct-connect devices, separate from any network or wireless protocols.
  • CAN (1 Wire / Differential): standard automotive network peripherals.
  • RS-232/422 Serial I/O: general-purpose serial I/O devices.
  • Used for event triggering of peripherals.
  • Real Time Clock Output: a separate output from the system clock that is programmable to a specific interval.
  • Watchdog Timer, Interval Programmable: a separate timer for system integrity; ensures that the system is running in the programmed sequence.


Peripheral Type

Local Area Network (LAN / ULA  Side     IoT Core Platform  Serial Control Peripherals  Multi-Channel  Description

Bluetooth 2.4GHz

These are for all the Local Bluetooth devices connected to the platform LAN / ULA local network

USB 3.x / 2.0 

Standard USB interface with High Speed option of 480MHz USB 2.0 minimum

Parallel BUS (16/32Bit Data) / (16 Bit Address)

This is a separate data bus for parallel type peripheral connections. It allows any type of direct connection to the platform. (Optional)

Analog Inputs 16 Bit  - 8 channels analog inputs

This is for monitoring environmental and platform parameters directly (0 ± 15Vdc standard

Analog Outputs 16 Bit -  8 Channels

This is for controlling, adjusting, and calibrating sensors for environmental parameters directly (0 to ±15 Vdc standard)

RTD Analog Signal Conditioning Front End

This is for platinum RTDs for very accurate temperature measurements. Separate peripheral for 8 channels

Thermocouple Analog Signal Conditioning

This is for standard thermocouple temperature measurements. Separate peripheral for 8 channels


Peripheral Type

Core Processor Section     IoT Core Platform   Description

32 bit pipeline MIPS M-Class Processor

MIPS M-Class processors use the same instruction set for both 32 and 64 bit; 32 bit is efficient for the IoT Core Platform.

FPU  Single/Double precision

Floating Point Unit - single (32 bit) / double (64 bit) precision

Interrupt Controller - 8/16/32 channels

Interrupt controller to generate interrupt requests to the processor selected.

DMA controller for RAM Interface

Direct Memory Access controller to allow peripherals direct access to the RAM interface.

EEPROM interface for external parameter storage

This is a separate EEPROM storage area for platform parameters, only accessible through the security interface; Serial/Parallel NAND to handle up to 16 GigaBytes (128 Gbits)

RAM interface for external data buffering

This is a separate interface that allows the connected peripherals direct high speed access to the memory for data collection.  Static RAM up to 32 MegaBytes



Table 7.0   Core IoT Platform Peripherals and Functions

Figure 7.2   Core IoT Platform Functional Block Diagram

OK, the functional block diagram shows many peripherals attached to the main bus; it is not very difficult to create a block diagram like this considering the number of features we would like to see in the IoT Core Platform.  The two items that should stand out are the 32 Bit Parallel Interface Controller and the Custom User Interface Controller.  If we were to remove all the other peripherals, these two controllers would still allow us to add just about any type of peripheral that can be imagined within the boundaries of the processor.

I am not a big fan of wireless in a process control area for many reasons that we will cover when we get into the security and software development parts of the series.  

This is the first conceptual block diagram presentation of the Core IoT Platform; as we continue the series we will apply changes to the platform as required for the applications and for optimization.

Reference Links for Part 7:
The majority of the Internet scheme and protocol information is from a few open public information sources on the net: IETF (Internet Engineering Task Force) RFCs, which explain details of the protocols used for both IPv4 and IPv6 as well as experimental protocols for the next generation Internet, and the Network Sorcery web site. The remainder of this series on the IoT platform will be from BASIL Networks MDM (Modular Design Methodology) applied with the Socratic teaching method.  Thank You - expand your horizon - Sal Tuzzo

Network Sorcery:
The Internet Engineering Task Force:  IETF - RFC references

The high-level expert links for the CRC and checksum material are listed below; there are so many Internet references on this subject that listing them all would take several pages and is not the intent of this series.

PDF The iSCSI CRC32C Digest and the Simultaneous Multiply and Divide Algorithm January 30, 2002 Luben Tuikov & Vicente Cavannay 

PDF Cyclic Redundancy Code (CRC) Polynomial Selection For Embedded Networks 2004  Philip Koopman,  Tridib Chakravarty

PDF Performance of Checksums and CRCs over Real Data (1998)  Craig Partridge, Jim Hughes, Jonathan Stone

TEXT Computing the Internet Checksum RFC 1071

Part 8  Preliminary Outline:

Part 6 Network Protocols - Network, Transport & Application -Continued -Ethernet Protocol (Sept 21, 2017)

Part 8 IoT Core Platform
- SoC Core Processor of Embedded Systems (Jan 12, 2018)


Publishing this series on a website or reprinting is authorized by displaying the following, including the hyperlink to BASIL Networks, PLLC either at the beginning or end of each part.
BASIL Networks, PLLC - Internet of Things (IoT) -Security, Privacy, Safety-The Information Playground Part-7 The Ethernet Protocol(s): Lets Sync Up- (November 23, 2017)

For Website Link: cut and paste this code:

<p><a href="" target="_blank"> BASIL Networks, PLLC - Internet of Things (IoT) -Security, Privacy, Safety-The Information Playground Part-7 IPv4, IPv6 Protocols, Network Transport & Applications: <i>Continued (Sept 24, 2017)</i></a></p>



Sal (JT) Tuzzo - Founder CEO/CTO BASIL Networks, PLLC.
Sal may be contacted directly through this sites Contact Form or
through LinkedIn

24 Sep, 2017

Internet of Things (IoT) -Security, Privacy, Safety-Platform Development Project Part-6

Part 6: IPv4, IPv6, Protocols - Network, Transport & Application: Continued
The Ethernet Protocol(s), Lets Sync Up

"The more extensive a man’s knowledge of what has been done, the greater will be his power of knowing what to do". -Benjamin Disraeli


Quick review to set the atmosphere for Part 6

From the previous Internet of Things Part-1 through Part- 5:

What we want to cover in Part 6:  Connecting To The Ethernet Protocol

Lets Get Started: Some Serial Communications Basics
Hardware communications protocols are generally referred to as bus protocols; RS232, RS422, RS485, and USB are all hardware topologies used to transport data serially over a physical medium, each with its own fixed specifications.  Serial communications hardware has been around for many years and was in active use well before the first computer system was placed on the market.  There are basically two types of serial communications: asynchronous and synchronous.

An asynchronous communication bus is one where the data is understood to be clocked at the same rate by the devices at both ends of the bus; the transfer is triggered by detection of a first edge transition defined as the Start Bit, repeated for a defined number of bits, and ended with a clock cycle defined as the Stop Bit.  The most common asynchronous bus device is the RS232 Universal Asynchronous Receiver Transmitter (UART).

A synchronous communication bus sends its clock on a separate wire to identify the time interval during which the data line is valid; the data is driven at a predetermined interval synchronized to the clock transferred alongside it.  Synchronous hardware protocols include the I²C (Inter-IC) bus, the Serial Peripheral Interface (SPI), and others that transfer a clock wire alongside the data wire to identify the data-valid interval.  Figure 6.0 shows the basic block format requirements for serial data bus hardware protocols.
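As a minimal illustration of the asynchronous start/stop framing just described, the sketch below frames and deframes one byte as a UART would in an 8N1 configuration (one start bit, eight data bits LSB first, one stop bit).  The function names are ours, for illustration only:

```python
def uart_frame_8n1(byte: int) -> list[int]:
    """Frame one byte as 8N1: start bit (0), 8 data bits LSB first, stop bit (1)."""
    assert 0 <= byte <= 0xFF
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB goes on the wire first
    return [0] + data_bits + [1]

def uart_deframe_8n1(bits: list[int]) -> int:
    """Recover the byte from a 10-bit 8N1 frame, checking the start/stop bits."""
    assert len(bits) == 10 and bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))

frame = uart_frame_8n1(0x41)            # ASCII 'A'
assert uart_deframe_8n1(frame) == 0x41
```

Both ends must already agree on the bit rate; the start-bit edge only tells the receiver when to begin sampling, which is the essence of the asynchronous category.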

Figure 6.0  Serial BUS Basic Structure Block Diagram

The Ethernet BUS Physical Device Hardware:
The Ethernet hardware protocol bus transfers data serially from a unique source to a unique destination.  The Ethernet bus Network Interface Controller (NIC) has to configure the Data Transfer Rate (DTR), configure the mode (half/full duplex), specify the source device, specify the destination device, detect corrupt data, resolve collisions, and extract the data payload.  The collision part is easy since the Ethernet switch handles that with only one device per switch port; we will get to the Ethernet switch shortly.  Corrupt data is rare on a properly installed network; however, if the data is corrupted the packet is simply dropped, and since there is no way to know where it came from, the sender has no knowledge of the corruption and just keeps sending data.  For high QoS (Quality of Service) networks there are feedback and data manipulation techniques used to ensure data integrity; we will cover that in the software development part of the series.

The Ethernet physical bus may be configured to operate in half or full duplex mode, although half duplex is rarely used today.  Half duplex was initially set up with coaxial cable mainly to avoid collisions and continued with the introduction of the Ethernet hub, since hubs are half-duplex devices.  The hub introduced the single port, single device concept and made installation easier than running coaxial cable from device to device; the hub has since been replaced by the Ethernet switch.  The Ethernet switch is a more flexible device that operates in both half and full duplex mode, among other unique features that we will discuss later.

The Ethernet hardware network is the most common physical network today, used in the majority of business and home Internet connections, and comes in several speeds: 10Mbps, 100Mbps, 1Gbps, and 10/25/40Gbps.  There are other Physical Layer protocols in use today, as we discussed in Part-5 Table 5.2; however, we will focus on Ethernet for this part of the series.  We will review the implementation process for other Physical Layer medium topologies when we address integrating different network media into the IoT Core Platform later in the series.

Moving forward, the Ethernet Physical Layer cabling does not incorporate a physical clock line in the bus architecture, which places the bus in the asynchronous communications category.  The standard configuration for the Ethernet bus today is full duplex; of the cable's four twisted pairs on the modular connector, one differential twisted pair carries the Transmit data and one differential twisted pair carries the Receive data.  The remaining twisted pairs are used for various integrated features like PoE (Power over Ethernet) and for higher speed configurations.  So how does the controller configure itself to the different transmission speeds?  We will answer that question next.

The Ethernet Protocol Header(s): The Ethernet Physical Hardware Topology
OK, so if the Ethernet connector does not have a separate clock line and is a full-duplex TX/RX differential pair set, how does it configure itself to the different 10/100/1Gbps BASE-T data speeds being transported over the LAN bus?  The answer is that the controller autonegotiates using the serial data stream in the first eight octets (bytes).  We will show how this is done while we introduce the 802.3 Ethernet header format.

The structured format of the Ethernet Physical Layer bus follows the basic serial bus format of a Preamble (start ID) and an Inter-Packet Gap (end idle time) with a data payload in the middle.  The controller monitors the data transitions during the Preamble's eight octets (bytes); if the transitions of the preamble are at stable intervals, the controller uses those intervals to set its clock, mode, etc., and is then ready to receive the Start Frame Delimiter (SFD) in byte eight and begin receiving the data payload octets.  This configuration methodology categorizes the Ethernet bus as an asynchronous autonegotiating bus for half/full duplex mode as well as 10/100/1G BASE-T.  The Ethernet switch is an important device here since it allows each port to be configured separately, and the single port per device allows easy disconnection from the network without any loss of performance.

In detail, the Preamble is a fixed pattern of alternating 1's and 0's for the first eight octets of the Ethernet header, shown in Figure 6.2a as 10101010-10101010-10101010-10101010-10101010-10101010-10101010-10101011.  The last byte of the Preamble is the SFD (Start Frame Delimiter), which ends with two 1's identifying the start of the data payload being transferred.  From this point the controller is expected to be asynchronously clocked (the destination data rate is auto-adjusted to the sender's data rate) and to start collecting octets from the frame.  If the controller is not configured, it drops the entire frame and waits for the next frame after the Inter-Packet Gap idle period of 12 octets.  The data is collected and stored starting at the first octet after the SFD and ends when there is no more data after the checksum, or at the maximum frame size, and the bus becomes idle for 12 octets, at which time the controller resets and is ready for the next header.  The 12-octet idle time is defined as no data transitions on both the Transmit and Receive pairs, which signals the end of the frame.  The octet count runs from the first octet after the SFD through the last octet of the checksum and is retained in the transfer data buffer to inform the connected API how many octets it has to process.
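The delimiting behavior just described can be sketched as a scan for the SFD pattern in the incoming bit sequence; if no SFD is found the frame is dropped, just as the controller does.  The bit-list representation and function name are purely illustrative:

```python
# One preamble octet and the Start Frame Delimiter, in wire transmission order.
PREAMBLE_OCTET = [1, 0, 1, 0, 1, 0, 1, 0]
SFD = [1, 0, 1, 0, 1, 0, 1, 1]

def find_payload_start(bits):
    """Scan the bit sequence for the SFD pattern; return the index of the
    first payload bit, or None if no SFD is seen (frame dropped)."""
    for i in range(len(bits) - 7):
        if bits[i:i + 8] == SFD:
            return i + 8
    return None

# Seven preamble octets, the SFD, then payload bits: payload starts at bit 64.
bits = PREAMBLE_OCTET * 7 + SFD + [1, 1, 0, 1]
assert find_payload_start(bits) == 64
```

Note that the alternating preamble pattern never contains two consecutive 1's, so the first possible match is the SFD itself; that is what makes the two trailing 1's a safe delimiter.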

There are three unique structures for the Ethernet frame, shown in Figures 6.1a, 6.1b and 6.1c.  The three frames share the same fields up to octet 21; the frames in Figures 6.1b and 6.1c then incorporate new fields.  The difference is the addition of the 802.2 protocol to identify pointers and inter-layer communications, which is processed uniquely by the source/destination layers.  Remember the statement made earlier in the series that the user data can be anything as long as the point-to-point transfer follows the available protocols and the user data encoder/decoder are matched at both ends.  We will get into the inter-layer communications later in the series; for now the decoding after octet 21 is up to the Application Program Interface connected to the transfer to decode/encode the payload.

Figure 6.1a  802.3 Standard Ethernet Physical Protocol Header Format

Figure 6.1b  802.3 + 802.2 Ethernet Physical Protocol Header Format
Source Service Access Point (SSAP)
Destination Service Access Point (DSAP)

Figure 6.1c  802.3 + 802.2 + SNAP  Ethernet Physical Protocol Header Format
Source Service Access Point (SSAP)
Destination Service Access Point (DSAP)
SubNetwork Access Protocol (SNAP) Control

Figure 6.1a shows the standard Ethernet II frame commonly used for the three-way handshake and other payloads where the entire MTU fits into a single Ethernet frame.  Figure 6.1b is the extended Ethernet frame used for inter-layer status or network information protocols such as ICMP; the extended control fields are the Source Service Access Point (SSAP) and Destination Service Access Point (DSAP) for inter-layer communications.  Figure 6.1c is the extended Ethernet frame generally used for larger segmented payloads such as HTTP and FTP transfers; the extended control fields are the SSAP and DSAP plus the SubNetwork Access Protocol (SNAP) control to keep track of the segmentation of the transferred data.  The headers shown in Figures 6.2a and 6.2b have a fixed format up to octet 21; the variations in header structure are defined by octets 20 and 21 and determine the remaining structure, hence the data/payload section.  Table 6.2 lists the variations of the Type/Length field that determine the full maximum length of the header.  The extended fields will be covered later in the series with the inter-layer communications and data segmentation; for this part of the series we are presenting the Ethernet format structure and device-to-device data transfer.

The Ethernet Cable Length Defined
There are many variations and opinions on what the maximum physical length of Ethernet over twisted pair cable should be; in practice the limits are set by the cable's category ID, Cat3 through Cat8, all of which use the same modular connector, defined as an 8 Position 8 Contact (8P8C) connector.  The physical length of the cable depends on the category ID and the drive capabilities of the hardware it is connected to.  The standard length for Cat3-Cat6 cables at their assigned speeds is 100 meters; Cat8 is 30 meters at its 25/40Gbps BASE-T speeds and only operates in full-duplex mode.  The full analytical derivation of cable length is beyond the scope of this series; however, the length limits are derived from several physical characteristics: twisted pair capacitance per foot, shielding (separate pairs or a single overall shield), crosstalk capacitance between pairs, cable resistance in ohms per foot, the number of strands (or the diameter for solid wire), the signal voltage swing peak to peak, the source power available for driving the signal, and the switching frequency of the signal (10Mbps, 100Mbps, 1Gbps, etc.).  The industry standard length of 100 meters (328 feet), between a device and a switch port or between a downlink switch port and an uplink switch port, is confirmed by all equipment manufacturers at this time.  Custom equipment exists that allows much longer lengths; however, it is easier, cheaper, and better for QoS to just add a switch or an RJ45 active cable extender to obtain longer runs.

Some Computer History: How The Bit Assignments Nomenclature Changed
Back in Part-4 IP Header Formats we mentioned the bit assignment nomenclature being reversed, so to put this in perspective we will take a trip down memory lane for us seniors, and ancient history for the younger.  To start, the binary base-2 system has not changed and goes back to the 1600s if not further; it was introduced to the digital world with a relay-based computer in the 1940s along with the weighted nomenclature of the base-2 number system, 2^n + 2^(n-1) + ... + 2^1 + 2^0, where 2^0 = 1, 2^1 = 2, etc., giving 1, 2, 4, 8, 16, and so on.  Written left to right from the Most Significant Bit (MSB, 2^n) to the Least Significant Bit (LSB, 2^0), the original nomenclature was weighted like the decimal number system, increasing right to left: the bigger n is, the larger the number.  Now the change: when the first major minicomputer was introduced in the late 1950s by Digital Equipment Corporation (1957-1998), known as DEC or Digital, the bit assignment nomenclature only was reversed, with bit 0 labeling the most significant bit and bit n the least significant bit on the computer switch panel; the binary base-2 number system did not change, it was just the way DEC's nomenclature was introduced.  In 1968 this changed again with the introduction of a 16 bit computer by one of DEC's engineers, who left and started Data General Corporation (1968-1999).  The bit assignment nomenclature was changed back to the original base-2 nomenclature, bits 15-00, where 15 = MSB (2^15) and 00 = LSB (2^0), weighted like the decimal system where 2^0 = 1.  To add a bit to the confusion, the computer word was segmented into three-bit blocks for base-8 (octal) notation.
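A tiny sketch of the point above: the bit-number labels are only names, and re-applying the base-2 weights recovers the same value regardless of which direction the bits are numbered.  The variable names here are ours:

```python
value = 0b10110010  # 178 -- fixed by the base-2 weights, whatever the bits are named

# The bits written MSB first, as in the original weighted nomenclature:
bits_msb_first = [(value >> n) & 1 for n in range(7, -1, -1)]

# Label them 7..0 (classic) or 0..7 (DEC panel style): only the names change.
# Re-applying the weights to the same bit positions recovers the identical number.
assert sum(bit << (7 - i) for i, bit in enumerate(bits_msb_first)) == value
```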

Remember that the bit assignment nomenclature is only a nomenclature and may or may not represent the true form of the bits being transmitted.  For the Ethernet frame, bits are placed on the bus least significant bit (bit 7 in the Internet numbering) first.  This does not change the actual number; it just brings it back into the standard serial methodology.  To sum up this role reversal: in the conventional digital world serial data is placed on the bus MSB first and shifted left, so the bit notation is MSB (2^7) to LSB (2^0) left to right, and the binary data is read and weighted the same way the decimal system is.  In the Internet world, because of the bit reversal, the data is placed on the bus least significant bit first, and the bit notation is LSB (2^0) to MSB (2^7) left to right.  It is a long way around, but the bits read into the device end up in the conventional weighted format, as they should.  This will have more meaning in the hardware design section of the series, which will show that many chip devices supporting serial protocols have different MSB/LSB transfer formats.  Therefore the actual Ethernet Preamble + SFD read by another device will be 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0xD5 in hex format, or 85 85 85 85 85 85 85 213 in decimal format.  The SFD byte 0xD5 signals the start of the octet data to be read.  This will all fall into place when we set up the header structure shown in Figures 6.2a and 6.2b for programming the Ethernet frame in the software section of the series.
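The wire-order reversal described above can be checked in a few lines: reversing the bit order of the preamble and SFD octets written MSB-first yields exactly the 0x55 and 0xD5 byte values a device reads.  A minimal sketch, with a helper name of our own:

```python
def reverse_bits(byte: int) -> int:
    """Reverse the bit order of one octet (MSB-first notation -> LSB-first wire order)."""
    out = 0
    for _ in range(8):
        out = (out << 1) | (byte & 1)  # pull off the LSB, push it onto the result
        byte >>= 1
    return out

# Preamble octet 10101010 and SFD 10101011 as written MSB-first...
preamble, sfd = 0b10101010, 0b10101011
# ...read back LSB-first off the wire:
assert reverse_bits(preamble) == 0x55
assert reverse_bits(sfd) == 0xD5
```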

Figure 6.2a  802.3 Standard Ethernet Header Frame Structure

Figure 6.2b  802.3 + 802.2  Ethernet Header Frame Structure
Source Service Access Point (SSAP)
Destination Service Access Point (DSAP)
SubNetwork Access Protocol (SNAP) Control

The Ethernet network topology changed the way we connect devices on the Local Area Network.  With the old coaxial cable we simply paralleled each computer onto the same cable; it was cumbersome to work with, and the cable loading factors were difficult to troubleshoot, among other difficulties.  With RJ45 Cat5[6] twisted pair cables we use a multi-port network switch to connect devices, single port - single device.  The variations in speed and mode, hence 10/100/1Gbps BASE-T and full/half duplex NICs, are handled by the network switch, where each device is connected to a single RJ45 port.  For each device the switch allows operation in half or full duplex depending on the NIC, and it sends data only to the destination port by keeping track of the MAC address of each device connected to the LAN; half duplex is rarely used today in Local Area Networks.  We will cover switches in more detail later in the series.  For now the basics are: the switch's maximum speed defines the network's maximum speed regardless of the NICs in the connected devices, and each device's maximum speed is set by its NIC regardless of the maximum speed of the switch.  On a multi-speed switch each port may run at the negotiated speed of the NIC on that port.

Will The Ethernet Physical Layer Protocol Please Stand Up:
As shown below in Table 6.0, the IEEE 802.xx hardware protocols have been around for some time and have been updated to address new transmission demands.  For this part we will focus on the IEEE 802.3 Ethernet specification.  The light green background protocols are the ones we would like implemented in the IoT Core Platform.  The IEEE 802.3 specifications have been upgraded over the years (1983 - 2017), with further updates expected through 2019, to handle the variations in connectivity and speeds up to 40Gbps.  The design methodology of the IoT Platform will incorporate the flexibility to implement additional hardware protocols as they are required for the application.

Active Working Groups    Description    Note
IEEE 802.1 Higher Layer LAN Protocols (Bridging) active
IEEE 802.3 Ethernet  Original Specification 1980 - Standard in 1983 active
IEEE 802.11 Wireless LAN (WLAN) & Mesh (Wi-Fi certification) active
IEEE 802.13 Unused; reserved for Fast Ethernet development

IEEE 802.15 Wireless PAN active
IEEE 802.15.1 Bluetooth certification active
IEEE 802.15.2 IEEE 802.15 and IEEE 802.11 coexistence  
IEEE 802.15.3 High-Rate wireless PAN (e.g., UWB, etc.)  
IEEE 802.15.4 Low-Rate wireless PAN (e.g., ZigBee, WirelessHART, MiWi, etc.)  
IEEE 802.15.5 Mesh networking for WPAN  
IEEE 802.15.6 Body area network  
IEEE 802.15.7 Visible light communications  
IEEE 802.16 Broadband Wireless Access (WiMAX certification)  
IEEE 802.16.1 Local Multipoint Distribution Service  
IEEE 802.16.2 Coexistence wireless access  
IEEE 802.17 Resilient packet ring hibernating
IEEE 802.18 Radio Regulatory TAG  
IEEE 802.19 Coexistence TAG  
IEEE 802.20 Mobile Broadband Wireless Access hibernating
IEEE 802.21 Media Independent Handoff  
IEEE 802.22 Wireless Regional Area Network  
IEEE 802.23 Emergency Services Working Group  
IEEE 802.24 Smart Grid TAG - New (November, 2012)

IEEE 802.25 Omni-Range Area Network  

Inactive / Old Working Groups

IEEE 802.2 LLC disbanded
IEEE 802.4 Token bus disbanded
IEEE 802.5 Token ring MAC layer disbanded
IEEE 802.6 MANs (DQDB) disbanded
IEEE 802.7 Broadband LAN using Coaxial Cable disbanded
IEEE 802.8 Fiber Optic TAG disbanded
IEEE 802.9 Integrated Services LAN (ISLAN or iso Ethernet) disbanded
IEEE 802.10 Interoperable LAN Security disbanded
IEEE 802.12 100BaseVG disbanded
IEEE 802.14 Cable modems disbanded

Table 6.0  The variations of the IEEE 802.xx Protocol Specifications

Ethernet Frame Fields Description
The Ethernet frame fields are listed in Table 6.1 below, showing the standard 802.3 field definitions.  There exists an extension of 802.3 that adds four octets of 802.1Q tagging to support Virtual LANs (VLANs) over an Ethernet network; we will address this during the software protocol implementation part of the series.  For now let's focus on the standard Ethernet 802.3 to set the ground floor of our understanding to grow on.  The two main fields that require some thought are the Type/Length and the Payload fields.  The Type/Length field (16 bits, 2 octets) has gone through many updates and additions over time, as the list of field parameter values in Table 6.2 below shows.

Octet [Fixed Fields] Size Name Description
00-06 56 bits
7 Octets
Preamble 10101010 10101010 10101010 10101010 10101010 10101010 10101010 Binary
0x55 0x55 0x55 0x55 0x55 0x55 0x55 hex
07 1 Octet Start Frame Delimiter (SFD) 10101011 binary in wire bit order; 0xD5 hex as the stored byte value
08 - 13 6 Octets Destination MAC Address NIC MAC Address of Destination
14 - 19 6 Octets Source MAC Address NIC MAC Address of Source
20-21 2 Octets Type/Length Ethernet II type - This is for this series [+4 octets if 802.1Q implemented] + 802.2
22 1 Octet SSAP - Source Service Access Point LLC - Logical Link Control 802.2 Figure 6.1b
23 1 Octet DSAP - Destination Service Access Point LLC - Logical Link Control 802.2 Figure 6.1b
24 - 25 1 or 2 Octets Control LLC - Logical Link Control 802.2 Figure 6.1b
26 - 30 5 Octets SNAP - SubNetwork Access Protocol Figure 6.1c (active when DSAP and SSAP = 0xAA)
22-1521 or [27-1528] 46-1500 Octets Payload Data Variable payload; must be 46 octets minimum up to 1500 octets maximum
1522-1525 or [1529-1532] 4 Octets Checksum CRC-32 Frame Check Sequence computed over the frame from the destination MAC address through the payload
  12 Octets Inter-Packet Gap - Idle Time (End of Transfer ID) Not part of the checksum - this is just inter-packet idle time to separate packets.

Table 6.1  Ethernet Structure Field Assignments IEEE 802.3
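The 4-octet checksum field in Table 6.1 is the CRC-32 Frame Check Sequence.  Below is a minimal bit-at-a-time sketch using the reflected polynomial 0xEDB88320 with an initial value and final XOR of 0xFFFFFFFF, the parameters used for the Ethernet FCS; the function name is ours, and Python's zlib.crc32 (which implements the same algorithm) is used only as a cross-check:

```python
import zlib

def crc32_fcs(data: bytes) -> int:
    """Bit-at-a-time CRC-32 with the Ethernet FCS parameters: reflected
    polynomial 0xEDB88320, initial value and final XOR of 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320  # divide out the polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

# The standard CRC-32 check value, and agreement with zlib's implementation:
assert crc32_fcs(b"123456789") == 0xCBF43926
assert crc32_fcs(b"123456789") == zlib.crc32(b"123456789")
```

In a real NIC this is computed in hardware over the octets from the destination MAC through the payload, and the frame is dropped silently on a mismatch, exactly the behavior described earlier.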

The parameter values listed in Table 6.2 reflect only the parameters we are currently interested in for the IoT Core Platform development.  Presently there are about 50 different Ethernet Type parameter IDs defined and available for implementation in the 802.3 protocol.  Depending on the IoT Core Platform applications we leave this open for future implementation.

Ethernet Type   Protocol Description

0x0800   Internet Protocol version 4 (IPv4)
0x0806   Address Resolution Protocol (ARP)
0x22EA   Stream Reservation Protocol
0x8035   Reverse Address Resolution Protocol
0x8100   VLAN-tagged frame (IEEE 802.1Q)
0x8204   QNX Qnet
0x86DD   Internet Protocol Version 6 (IPv6)
0x8808   Ethernet flow control
0x8809   Ethernet Slow Protocols
0x8847   MPLS unicast
0x8848   MPLS multicast
0x8863   PPPoE Discovery Stage
0x8864   PPPoE Session Stage
0x88A4   EtherCAT Protocol for Automation
0x88A8   Provider Bridging (IEEE 802.1ad)
0x88B9   GSE (Generic Substation Events) Management Services
0x88E5   MAC security (IEEE 802.1AE)
0x88F7   Precision Time Protocol (PTP) over Ethernet (IEEE 1588)
0x891D   TTEthernet Protocol Control Frame (TTE)
0x9100   VLAN-tagged (IEEE 802.1Q) frame with double tagging
Table 6.2  Ethernet Structure Type/Length Field Assignments IEEE 802.3

If DSAP and SSAP are both 0xAA then SNAP is active, which incorporates an IEEE 802.2 attachment protocol that we will cover during the software protocol part of the series since it involves other layers of the OSI model.  The typical IPv4 Ethernet-II frame from Table 6.1 above uses 0x0800 for the standard protocol header and 0x0806 for ARP, discussed in Parts 3 and 4 of the series.  For this section let's just focus on standard IPv4 802.3 and use 0x0800 as the Type/Length to construct the Ethernet header to talk to other devices on the network.  That brings us to MAC addresses, which are how the Ethernet LAN identifies devices.
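The decision logic above (EtherType versus length, and the 0xAA SNAP check) can be sketched as follows.  This is an illustrative helper of ours, with offsets counted from the destination MAC address as a capture tool presents the frame (i.e., without the preamble/SFD), not a production parser:

```python
def classify_type_length(frame: bytes) -> str:
    """Classify a frame by its Type/Length field.  With 6 destination MAC and
    6 source MAC octets first, the Type/Length field sits at offsets 12-13."""
    tl = int.from_bytes(frame[12:14], "big")
    if tl >= 0x0600:   # values of 0x0600 and above are EtherTypes (Ethernet II)
        return {0x0800: "IPv4", 0x0806: "ARP", 0x86DD: "IPv6"}.get(tl, hex(tl))
    # Otherwise it is an 802.3 length field; the next octets are the 802.2 LLC header.
    dsap, ssap = frame[14], frame[15]
    return "802.2 + SNAP" if dsap == ssap == 0xAA else "802.2 LLC"

assert classify_type_length(bytes(12) + b"\x08\x00" + bytes(46)) == "IPv4"
assert classify_type_length(bytes(12) + b"\x00\x2e\xaa\xaa" + bytes(44)) == "802.2 + SNAP"
```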

Manufacturers wishing to incorporate Ethernet controllers into their products should obtain a manufacturer ID from the IEEE Registration Authority.  Ethernet controller ICs purchased from chip manufacturers generally require the user to supply a MAC address in order to use them.  MAC addresses are unique on a network just as IP addresses are unique on the Internet, as we discussed earlier in the series.  IPv4 as we discussed only requires a MAC to talk on the LAN, while IPv6 uses the MAC address as part of its unique IP address.  The data/payload is the only area that distinguishes the IPv4 and IPv6 schemes and any additional protocols.
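As a small illustration of constructing the standard IPv4 Ethernet II header discussed above, the sketch below assembles destination MAC, source MAC, Type 0x0800, and a padded payload.  The MAC addresses are made-up examples and the function name is ours; the preamble/SFD and FCS are added by the NIC hardware:

```python
def ethernet_ii_frame(dst_mac: str, src_mac: str, payload: bytes,
                      ethertype: int = 0x0800) -> bytes:
    """Assemble an Ethernet II frame body; the NIC adds preamble/SFD and FCS."""
    dst = bytes.fromhex(dst_mac.replace(":", ""))
    src = bytes.fromhex(src_mac.replace(":", ""))
    if len(payload) < 46:                     # pad up to the 46-octet minimum
        payload = payload + bytes(46 - len(payload))
    return dst + src + ethertype.to_bytes(2, "big") + payload

# Made-up example MAC addresses, Type 0x0800 (IPv4):
frame = ethernet_ii_frame("aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66", b"hello")
assert len(frame) == 60                       # 6 + 6 + 2 + 46 octets
```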

IoT Core Platform Network Switch Connection: The Ethernet RJ45 10/100/1Gbps BASE-T
The structure of the Ethernet header is constructed depending on the protocol format being used.  Since there are separations for IPv4 and IPv6 as well as SNAP, we will implement these separately and cover them in the protocol software section of the series.  At this time we would like to bring up how the IoT Core Platform is intended to be connected to the LAN for both IPv4 and IPv6.  The connection methodology is intended to be wired RJ45 twisted pair cable, wireless WiFi, and Bluetooth through a multi-port switch.  This brings up switch technology, which today has improved to the point that the LAN does not require a separate hub or half-duplex operation to be effective.  Switches have several basic features that help control traffic throughout the network.  Since a single device is connected to a single port, each switch port maintains a MAC address buffer and is able to connect only to the destination device without interfering with other ports.  This allows better control over collisions as well as less interaction with the router, which increases throughput when accessing the global Internet.  Figure 6.3 is a typical IoT Platform connected to a multi-port switch; each platform device has its own port on the LAN.  Switches fall into two categories, managed and unmanaged; the majority of home Internet networks use unmanaged switches.  The main difference is that managed switches allow user control of network traffic and port configuration, while unmanaged switches have a predetermined fixed configuration for traffic flow.  Figure 6.3 shows a typical eight-port switch connected to several devices.  We identified each device's MAC address to show how a typical switch handles traffic device to device.  We set up a source and destination MAC address and filled in an Ethernet header as it would be extracted from a TCP header and sent to OSI Layer 2, the Data Link layer.
The captured data shown in Figure 6.4a is the sender (outbound) traffic data to switch port 6, and Figure 6.4b is the destination (inbound) traffic data.  Since one of the features of the switch is to store the IP and MAC addresses for each of its ports, these are listed with the captured data.  The switch decodes the destination MAC address and opens only the destination port to send the data to.  Since this was from a TCP handshake protocol sequence the IPs are listed as part of the capture; however, only the MAC addresses are used in actual device-to-device transfers on an Ethernet LAN.
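The per-port MAC learning and forwarding behavior described above can be modeled in a few lines.  This is a toy sketch of ours, not how any particular switch firmware is written:

```python
class LearningSwitch:
    """Toy model of unmanaged-switch forwarding: learn the source MAC on the
    ingress port, then forward only to the known destination port (else flood)."""
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}    # MAC address -> port number

    def receive(self, port: int, dst_mac: str, src_mac: str) -> list[int]:
        self.mac_table[src_mac] = port         # learn/update the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # unicast to the learned port only
        return [p for p in range(self.num_ports) if p != port]  # flood the rest

sw = LearningSwitch(8)
assert len(sw.receive(6, "B", "A")) == 7       # B unknown: flood the other 7 ports
assert sw.receive(2, "A", "B") == [6]          # A was learned on port 6: unicast
```

This is why only the destination port sees the traffic in the capture: once both MACs are learned, the switch never repeats the frame to the other ports.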

Figure 6.3  Physical Cat6e Cable Connection 8 Port Switch

Ethernet Communications Device to Device Captured on LAN
The LAN-connected devices in IPv4 & IPv6 communicate via MAC addresses; there are no IP addresses in the Ethernet Physical Header.  This means one can easily talk device to device using just the Ethernet Physical Layer header and supplying unique data for each device.  We will look at this in more detail when we cover LAN-based Industrial Control Networks as the series moves forward.  Figures 6.4a and 6.4b are the captured Ethernet frames, outbound at the source and inbound at the destination through the switch, using a typical packet capture program.  As we see, no IP addresses are required to set up the Ethernet frame and transfer data to other devices on the LAN.  The network used to collect this data was 1Gbps BASE-T with an unmanaged switch.  The MAC addresses are assigned to Intel Corporation, as are the Network Interface Controllers on the two different systems.  The two devices send data and confirm the same data to ensure a link.  The Type/Size is 0x800, which is Internet Protocol version 4 from the table above.  The captured Ethernet packets shown below in Figures 6.4a and 6.4b will not show the first 8 bytes or the 12-byte IPG, since they are the identifiers that synchronize and end data collection and are not part of the frame's device data.  The size of the data payload is also tracked and displayed; the firewall on the system allowed this data to be collected from source to destination.
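A minimal sketch of extracting the fields a capture program displays from a raw frame; the helper name and field values here are illustrative, not the actual capture in Figures 6.4a and 6.4b:

```python
def parse_ethernet_header(frame: bytes) -> dict:
    """Split off the fields a capture tool displays.  The preamble/SFD and IPG
    never appear here -- they are stripped before the frame reaches the host."""
    mac = lambda b: ":".join(f"{x:02x}" for x in b)
    return {
        "dst": mac(frame[0:6]),
        "src": mac(frame[6:12]),
        "type": hex(int.from_bytes(frame[12:14], "big")),
        "payload_len": len(frame) - 14,
    }

# Illustrative frame: broadcast destination, made-up source, Type 0x0800 (IPv4).
hdr = parse_ethernet_header(bytes.fromhex("ffffffffffff") +
                            bytes.fromhex("aabbccddeeff") +
                            b"\x08\x00" + bytes(46))
assert hdr["type"] == "0x800" and hdr["payload_len"] == 46
```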

Figure 6.4a  Captured Ethernet Header Frame Outbound

For data payloads up to the 1500 octet/byte MTU, the data fits into a single Ethernet Frame and requires only one transport cycle.  Notice that between the inbound and outbound traffic the MAC addresses are reversed.  This communication was a ping test confirming the connection between the server and workstation.  The system used for this series was assembled specifically for the development of this IoT Core Platform series for continuity.  The configuration consists of an Intel server with dual 64-bit Xeon processors with Hyper-Threading running Red Hat Enterprise Linux, connected to two workstations: a 32-bit Windows XP Pro system and a 64-bit Windows 10 system.  The IoT Platform series network is a stand-alone Client / Server configuration that is not connected to the Internet.
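The single-transport-cycle point above reduces to simple arithmetic: the number of Ethernet frames needed is the payload size divided by the MTU, rounded up.  A small sketch, with the function name being illustrative:

```python
import math

MTU = 1500  # maximum Ethernet payload in octets/bytes

def frames_required(payload_len, mtu=MTU):
    """Number of Ethernet transport cycles needed for a payload."""
    return max(1, math.ceil(payload_len / mtu))

frames_required(64)    # small ping payload -> 1 frame
frames_required(1500)  # exactly the MTU    -> still 1 frame
frames_required(4000)  # larger transfer    -> 3 frames
```

Anything over 1500 bytes must be split across multiple frames by the upper layers, which is where the IP fragmentation and TCP segmentation covered earlier in the series come into play.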

Figure 6.4b  Captured Ethernet Header Frame Inbound

Summary for Part 6:

The Ethernet Protocol hardware characteristics incorporate the following capabilities and features:

Part 7 overview will cover a continuation of the Ethernet Frame
Now that the core structure of the Ethernet Physical protocol has been covered as a data transfer protocol, we will present the following to finish up the Ethernet Physical layer.

Reference Links for Part 6:
The majority of the Internet scheme and protocol information is from a few open public information sources on the net: the IETF (Internet Engineering Task Force) RFCs, which explain the details of the protocols used for both IPv4 and IPv6 as well as experimental protocols for the next-generation Internet, and the Network Sorcery website.  The remainder of this series on the IoT platform will be from BASIL Networks MDM (Modular Design Methodology) applied with the Socratic teaching method.  Thank You - expand your horizon - Sal Tuzzo

Network Sorcery:
The Internet Engineering Task Force:  IETF - RFC references

Part 5 Network Protocols - Network, Transport & Application -Continued (Aug 17, 2017)

Part 7 Network Protocols - Network, Transport & Application -Continued -The CRC-32 and Checksums (Nov 23, 2017)


Publishing this series on a website or reprinting is authorized by displaying the following, including the hyperlink to BASIL Networks, PLLC either at the beginning or end of each part.
BASIL Networks, PLLC - Internet of Things (IoT) -Security, Privacy, Safety-The Information Playground Part-6 The Ethernet Protocol (s): Lets Sync Up- (Sept 22, 2017)

For Website Link: cut and paste this code:

<p><a href="" target="_blank"> BASIL Networks, PLLC - Internet of Things (IoT) -Security, Privacy, Safety-The Information Playground Part-6 IPv4, IPv6 Protocols, Network Transport & Applications: <i>Continued (Sept 24, 2017)</i></a></p>



Sal (JT) Tuzzo - Founder CEO/CTO BASIL Networks, PLLC.
Sal may be contacted directly through this site's Contact Form or
through LinkedIn

Copyright© 1990-2019 BASIL Networks, PLLC. All rights reserved