SYMON MUTHEMBA: Let's Get Technical

FPGAs for Innovating Digital Communication Interfaces (28 May 2018)

It's the 11th of May, and we have just arrived at a client's studio at 11:45 pm. I assure my colleague that we'll get it right this time. Our job that night was to link two studios over an Ethernet connection. Simple enough, despite the fact that neither of us had done it before. We had gone through the manuals over and over again (four different manuals, in fact), which is why I felt certain, at least 80% certain, that it was possible. In the end, we managed to connect two LAWO audio engines over a VLAN so that they could send and receive digital audio channels over MADI (AES10). More on that later. After our five-hour stint, I couldn't help wondering how such an interface is achieved in the first place. In this post I cover what it takes for hardware and communications engineers to prototype, test and innovate such interfaces using FPGAs.

Field Programmable Gate Arrays (FPGAs)

FPGAs, Figure 1, are re-programmable silicon integrated circuits first developed by Xilinx in 1984. They sit between ASICs (application-specific integrated circuits) and general-purpose processors: an FPGA can be programmed to perform one complex task and later be fully reprogrammed to perform another. This flexibility is why FPGAs have become increasingly popular for prototyping and deploying custom functionality.

FPGAs can be programmed in low-level (digital design) or high-level (C code) environments. The tools then compile the design into a bitstream, a configuration file that reconfigures the FPGA's internal circuitry. Some advantages of FPGA technology over the common microprocessor include:

- High performance. Because an FPGA is custom-programmed for a specific function, signal-processing performance is high: the application has full control of the processing cycles as well as the inputs and outputs.
- Rapid prototyping. With recent improvements in high-level design, engineers spend less time developing and prototyping FPGA functionality. And since FPGAs are re-programmable, iteration is faster than with ASICs.
- Low cost. ASICs are manufactured once to handle specific tasks; if a customer's requirements change, they have to obtain a new chip (often a new device altogether) with more functionality. FPGAs accommodate changing the functionality of the chip as the user's requirements change.
- Long-term use. This post is about FPGAs in digital communication interface design, and communication standards are ever changing with the market. Luckily, with FPGAs you can perform field updates years after the initial installation.

FPGA Use in Digital Communications

Applications for FPGAs span industries such as medical electronics, aerospace and defence, broadcast and pro A/V, consumer electronics, wired and wireless communications, and manufacturing. I take a special interest in communications, so that is what I will cover. You may be wondering: what was I dealing with that night? Who's MADI?
MADI is the Multichannel Audio Digital Interface (AES10), which you can read more about here. In summary, MADI is an interface that carries multiple channels of digital audio (a single port can carry 64 mono channels!) over coaxial cable or optical fibre for up to 2 km! What the exclamation points signify is that MADI is a capable system for large pro A/V and broadcast applications, and its scalability has made it increasingly attractive there. We carried the MADI channels over RAVENNA, an open, AES67-compatible standard for carrying real-time media over an IP network. With this in mind, we can see how such an interface and network can be made possible with FPGA boards and a can-do attitude.

Implementation

To understand how this can be implemented, we start with what is going on inside an FPGA. A single chip has three parts: input/output blocks, logic blocks and programmable routing. During development you are in full control of these three segments and configure them for your specific application.

Then comes the interesting bit: (re)programming. In the not-so-distant past, FPGAs were mostly programmed in hardware description languages (HDLs) such as VHDL and Verilog. That was much harder and not so friendly to most programmers. Only recently has development been made easier with the SDAccel environment, with which applications can be built in C/C++. A popular programming and design suite is the Xilinx ISE.

Now back to our work. To start developing for a particular protocol, you need to identify all of its technical specifications. For MADI you need to know the electrical characteristics and the data organisation, Figure 3, of the protocol. Additional circuitry should also be accounted for in the design, as the final system will be a combination of input and output ports. Then reference the timing diagram to define the logical signal flow. This simple tutorial shows how one can program an FPGA board to accept audio signals and transmit them over a MADI channel.

Your Future with FPGAs

As we develop more complex systems, we'll paradoxically need simpler communication interfaces. FPGA-based solutions work best when ideas are still experimental and budgets are limited; they provide a simple route to the interface I described at the start. (FPGAs are just one way it could be done; in reality it could also be achieved with ASICs or DSPs.) In the IoT industry, FPGAs are popular in interface design. Even in processing, FPGAs are being used to replace power-hungry GPUs and inflexible ASICs, leading to smaller, power-efficient and future-proofed systems. (The IoT industry is shrouded in uncertainty due to the continuous development of telecommunication technologies, as I discussed before.)

When doing this research, I was humbled by the vast amount of information that already exists on this topic. I'll be sure to dive deeper and experiment with it. The learning curve seems pretty steep, but that's why I always carry my harness. Badum tss! Thank you if you have made it this far. I'll be happy to learn more about this topic from you, so let's […]
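To get a feel for the numbers behind that one-port-64-channels claim, here is a quick back-of-the-envelope check (a hedged sketch in Python; the figures are the nominal AES10 values from the published standard, not measurements from our installation, and the 4b/5b line-coding detail is simplified):

```python
# Back-of-the-envelope MADI (AES10) capacity check.
# Nominal values from the published standard; not measured figures.
channels = 64          # mono audio channels per MADI link
sample_rate = 48_000   # Hz
subframe_bits = 32     # 24 audio bits + 8 status/mode bits per channel sample

payload = channels * sample_rate * subframe_bits     # bits per second
print(f"Payload rate: {payload / 1e6:.3f} Mbit/s")   # ~98.304 Mbit/s

# MADI runs on a 125 Mbit/s line; 4b/5b encoding leaves 100 Mbit/s for data,
# so 64 channels at 48 kHz fit with a little headroom.
line_rate = 125e6
usable = line_rate * 4 / 5
print(f"Usable rate:  {usable / 1e6:.3f} Mbit/s, "
      f"headroom: {(usable - payload) / 1e6:.3f} Mbit/s")
```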

The Ultimate Production and Transmission Broadcast Facility (13 May 2018)

Media broadcast is grossly underdeveloped in Kenya. Even with the seeming sophistication of media broadcast in Europe and America, the Kenyan media experience is quite poor, owing to the limited competition that incumbents face and that new players must aggressively work against to get a piece of the cake. Setting up a TV/radio production and transmission facility may seem like a daunting task, but worry no more. This post talks about bringing together a cost-effective, space-efficient and future-proofed broadcast facility that may be just what you need to set yourself up! Specifically, it outlines the general workflow of the entire system and the hardware and software that can be used for smooth operation.

Introduction

A TV facility system-integration project involves several stakeholders, from the owners to the day-to-day users. Special attention is required to fully capture their requirements and desires, and then to properly plan your implementation of their ideas. Fortunately, I've been exposed to what this entails and can give recommendations. Broadcast equipment is quite expensive, so you're well advised to be selective. To start, keep these considerations in mind:

- Available budget
- Deadlines and schedules
- Available skilled labour
- Available inputs and desired outputs

Let's imagine a scenario in which the budget is constrained, the timelines are short and the labour limited. However, this does not cap the stakeholders' dreams, for they often fantasise about an ideal, glorious future; the customer is always right. Thus we consider, as far as possible, the maximum number of inputs and outputs that our system can sustain. Our goal becomes to provide a reliable, inexpensive, fully operational production and transmission system that is also future-proof.

Hardware Setup

To start off, understand the standards in use. In Kenya, SD (720×576) is the standard for DTT (Digital Terrestrial Television), so SD will be our video output for DVB-T transmission. No such limitation exists for IP output streams, where we can output full HD. Capturing physical video inputs can be done through SDI, HDMI, composite or component interfaces. We'll mostly be dealing with SD-SDI sources in Kenya; however, since most hardware supports up to 3G-SDI (1080p60), a future upgrade to HD is not a problem. Audio formats comprise analogue audio (left, right), digital audio (AES3), MADI (Multichannel Audio Digital Interface) and audio over Ethernet (AoE), which may be captured by a sound card. IP capture is the most versatile and friendly, as a variety of compression formats are commonly supported on input: AVC/H.264 video, AAC audio, MPEG-2 video, MPEG-2 transport streams and MPEG Layer I/II/III audio, all of which can be received over the UDP, RTP and HLS protocols.

In a TV studio, the core hardware is the playout engine and its clients. This can be just a Windows (7+) PC with the playout software; it is the add-in cards that matter. The cards may be:

- Video capture/playback cards
- Audio cards
- Video encoder (GPU) cards
- Network interface cards

PC Setup

Having all of this in one system would be quite a stretch, so we may need to separate the playout clients and the streaming server for more efficiency and scaling.
Having separate playout clients and an encoding/streaming server is important for these reasons:

- Playout clients and live graphics generators are graphics-intensive applications, so the PCs in use need discrete GPUs (graphics processing units). GPUs offer very high core counts (200+) at low clock speeds (~1 GHz) compared with CPUs. Suitable GPUs include the NVIDIA GeForce and Quadro cards.
- Encoding video is typically an extremely CPU-intensive task, especially if we want to encode multiple output streams. One could do well with an i7-7700K (4 cores @ 4.2 GHz) for a single output stream; to support more streams, multi-core processors are required. Intel Xeon and AMD Threadripper parts go up to 16 cores @ 3.4 GHz.

After this we can add broadcast-specific hardware to our system.

Capture and Playback

Video can be captured either as a physical stream, using SDI/HDMI capture cards like the one in Figure 1, or as an IP stream, using NICs. Popular capture cards are offered by Blackmagic Design and Magewell; they are designed to work with a multitude of applications across Mac, Windows and Linux workstations, and their BNC ports are bi-directional, so the cards support both capture and playback.

Audio requires professional-grade hardware because, as we've discussed before, it is critical in broadcast. Sound cards are needed to get audio into the system, and for output too. At the very least (going cheap), an external/USB sound card may be used; PCI sound cards are more expensive but much more reliable. Popular sound cards are offered by Focusrite and AudioScience.

NICs support a wide range of features: video and audio can be captured and streamed using network streaming protocols. Commonly, HLS (HTTP Live Streaming), UDP, RTP, RTSP, HTTP and RTMP (pushed from a Flash media server) can be captured or streamed through IP infrastructure.

Applications in Use

The heart of any broadcast facility is the playout software. In the past, I discussed how best to choose the right playout for you.

"Playout system is an industry term used to describe the equipment, software and/or processes—typically within some kind of broadcast environment—responsible for 'playing' source media and converting or rendering it into a form which may be 'put to air', or presented, for external use." (OtsAV.com)

Just to mention, popular automation and playout systems I have worked with include Cinegy Air, AVRA, Direttore, VPlay and iMediaTouch. While playout software may be useful to you, sometimes all you want is recording, encoding or streaming software. Luckily, as the Internet becomes richer, so does the availability of free, open-source broadcast software. I will talk about two!

Butt: Broadcast Using This Tool

Butt is a free audio-streaming application that runs on Mac, Windows and Linux. It captures audio from your audio device and streams it to a SHOUTcast or Icecast server. Butt and […]
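Since IP capture features so heavily here, a minimal sketch may help make it concrete. The following Python snippet (the bind address and port are assumed placeholders, and real ingest software does far more) receives an MPEG-2 transport stream over UDP and checks for the TS sync byte:

```python
import socket

# Hypothetical ingest endpoint; replace with your encoder's UDP target.
BIND_ADDR, PORT = "0.0.0.0", 1234

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((BIND_ADDR, PORT))

while True:
    datagram, src = sock.recvfrom(2048)   # UDP TS is often 7 x 188-byte packets
    # Every MPEG-TS packet starts with the sync byte 0x47.
    packets = [datagram[i:i + 188] for i in range(0, len(datagram), 188)]
    ok = all(p[:1] == b"\x47" for p in packets if len(p) == 188)
    print(f"{src[0]}: {len(packets)} TS packets, sync {'OK' if ok else 'LOST'}")
```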

Off-Grid Communications For The Masses: Smart Metering (30 Apr 2018)

In East Africa, a large percentage of the population still does not have access to electrical energy and its benefits. To address this, several companies have developed micro-grids that provide AC power to rural East Africa. To sustain these grids, a remote, robust communication system has to be developed for metering and billing. In this post, I propose several efficient designs for a communication system that could be used to monitor and manage off-grid customers: the technologies that can be used, the hardware and software implementation of such a system, and how it can make business sense in terms of equipment and operating costs.

The proposed system tackles the stated situation within the limitations set by a real-world scenario, i.e. budget, energy supply and manpower. A comprehensive approach to systems design is valuable for ensuring the sustainability of such a project. The sections below cover the hardware, the communication channels and protocols, and the remote monitoring systems and software that can be used to solve the stated design issue. The goal is to provide micro-grid operators with a trustworthy system for off-grid power management, and to give the locals a solution that sufficiently caters to their needs.

Hardware Implementation

A reliable remote metering system has a few basic characteristics:

- A smart metering system that connects every household, enabling two-way data transfer between the customer and the utility provider
- A network technology to enable the two-way communication (fixed wired or wireless)
- A software system that actively manages billing and analyses usage data

With a system defined as in Figure 1, we can start to see how to bring the hardware components together.

Smart Metering

Smart meters are already on the market, such as the Hexing Electric HXE 110-KP and the ZTE ZX E211 (Figure 2), both single-phase prepayment meters. They meet the Standard Transfer Specification (STS) and are fit for our application. The ZX E211 is the preferred choice here, as it supports a variety of communication protocols (RS485, M-Bus, ZigBee, RF mesh, PLC and GPRS); we will see how these protocols are used later in this post.

The ZTE ZX E211 LoRa-based meter is particularly useful for long-distance communication and allows us to adjust parameters such as the transmission rate and frequency. Its main feature is low power consumption: a transmit current below 90 mA at 17 dBm, a receive current below 13 mA and a standby current below 0.7 µA. Since data communication may occur only a few times a day, the majority of the consumption will be the standby current.

Depending on the data provided by this meter, or a comparable one on the market, we may choose to consider meters that do not conform to STS. This could open up communication protocols otherwise unavailable to us, but may limit scaling and future integration with the national grid. Fabricating a communications device to sit alongside the meter may be required to send more usage statistics and deliver the desired data.
This data can then be used for analysis to improve the overall system; a DIY smart meter built around an AVR, PIC or FPGA as the processing IC may be covered in a future post.

Communication System

This post discusses two concepts for smart-meter communications in a rural area, based on two considerations:

- Location size: are the residents physically close to each other or spread out?
- Terrain: is the area flat or hilly? Dense vegetation cover or dry grassland?

To meet the requirements of the location, I propose two systems: an RF mesh network and an RF star network. Both rely on wireless channels to carry data.

RF Mesh Network

This type of network relays data through other wireless devices in a mesh (chain) topology using low-power transceiver radios. It is suitable for close-knit residential areas with few obstacles and is cheap to implement and scale. The architecture consists of a low-power transceiver radio in every meter box plus data concentrators, as in Figure 3. A proposed transceiver is the Silicon Labs Si4463 chip, which facilitates the RF link; it is a transceiver I've worked with on a previous project, and schematics of the full transceiver system are covered here. It offers up to 20 dBm (100 mW) of transmit power and a receive sensitivity of -117 dBm. Its frequency band is 433.4–473.0 MHz, and up to 100 channels can be set up with a channel step of 400 kHz. A serial-port baud rate of 2400 bps allows an over-the-air rate of 5000 bps. This gives an operating range of 1000 m between modules under ideal conditions with a clear line of sight.

A concentrator can then be installed somewhere central in the village to aggregate the data from multiple smart meters; one concentrator may support hundreds of them. The system is immune to sudden channel blocking, as communication can flow along alternative paths. The DRF1110N20-C concentrator works well with DRF1110N20-N network nodes on a sub-1 GHz channel. The concentrator can then upload the received data to the micro-grid databases at different times of day, depending on the availability of the data network.

RF Star Network

This network has a point-to-multipoint (PtMP) configuration. It is admittedly more expensive than the RF mesh network, but is suitable for hilly terrain with thick vegetation and obstacles. The architecture consists of high-power radio transceivers with a line of sight to an omni-directional antenna, as in Figure 4. To implement this system, a 2.4 GHz ISM channel may be used, and a clear line of sight from the transmitter antenna to the receiver should be established; I recently talked about the art of obtaining strong microwave links. The smart meter information […]
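On the power-budget claim above (that standby current dominates), a quick duty-cycle estimate is easy to script. This Python sketch uses the quoted current limits, but the reporting pattern is my assumption; change it and the balance shifts:

```python
# Rough average-current estimate for the LoRa meter node, using the
# quoted datasheet limits; the reporting pattern is an assumption.
I_TX, I_RX, I_STANDBY = 90e-3, 13e-3, 0.7e-6   # amps in each state

reports_per_day = 2        # assumed: one usage report, one billing exchange
seconds_per_event = 0.5    # assumed airtime per report (TX plus RX ack)

day = 24 * 3600
active = reports_per_day * seconds_per_event
q_tx = (active / 2) * I_TX                 # charge spent per day (A*s),
q_rx = (active / 2) * I_RX                 # splitting airtime between TX/RX
q_sb = (day - active) * I_STANDBY
i_avg = (q_tx + q_rx + q_sb) / day

print(f"per day: TX {q_tx:.3f}, RX {q_rx:.3f}, standby {q_sb:.3f} A*s")
print(f"average current: {i_avg * 1e6:.2f} uA "
      f"(~{i_avg * 1000 * 24 * 365:.0f} mAh per year)")
# With a couple of sub-second reports a day, standby charge really does
# dominate, as claimed above; report every few minutes and TX takes over.
```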

Simplification in Design of Wireless Systems: 5 Useful Steps (20 Apr 2018)

As we all know, wireless is the preferred method of connectivity between most of our devices, and it will only take more precedence in the coming years. The number of connected devices per person, and the demand for fast, reliable content delivery within a network, is rapidly increasing. Add to that the ongoing craze of IoT device development and the super-scaling of server farms to support them: in my view, RF, DSP and embedded-systems engineers will have a lot going on. This shift depends largely on the wireless systems we build, so in this post I try to figure out the best way forward in the design of wireless systems.

The RF spectrum houses a number of wireless standards and media in use today, including Wi-Fi, Bluetooth, FM broadcast, DVB-T, DAB, GSM, UMTS, LTE, WLAN and radar. The upcoming 5G standards are yet to be agreed upon, as I illustrated here, but we can consider some technologies already in use today, such as MIMO and MU-MIMO. The engineers responsible for developing these systems know the standards involved; however, they are required to understand such a vast number of fields during the design phase that implementation takes a lot of time. This, of course, is uneconomical in the fast-paced world we live in. Fortunately, engineers have figured out that the design process can be simplified into five major blocks:

1. Modelling and simulation of digital, RF and antenna systems
2. Optimisation of design algorithms
3. Automatic HDL and C code generation for hardware and software implementation
4. Prototype design and testing with SDR hardware
5. Iterative verification using the model as a reference

Modelling and Simulation

A number of software packages are used for modelling; here we will consider MATLAB (free alternative: GNU Octave). MATLAB is a renowned development kit for engineers and scientists: it is rare to find something you can't do with it and its additional toolboxes. That must be why it costs a kidney, but it's a good place to start. Simulink is an environment within MATLAB for model-based design. I've developed a simple communications link, Figure 1, that I can use as a basis for further development; the model is initiated by a short MATLAB script and uses the DSP System Toolbox and the Communications System Toolbox. The model can be useful in the following ways:

- as a starting point for system-level design and verification
- as a test bench for design algorithms written in C
- as a source of generated C or HDL code for DSP/FPGA implementation

It also allows us to simulate the effect of our input and process variables, to ensure we are getting the desired outputs.

Algorithm Design and Optimisation

Algorithms are the coded processes within a process block of a program; an algorithm defines the steps between the START and STOP of a process. Simple, well-known examples are flow-control loops and error handling, and in higher-level languages we have object-oriented programming (OOP). A while back I wrote an article touching on FFT algorithms. Many programming environments come with debugging features for your code: they analyse it and warn of bad syntax and compile errors. This is particularly useful before running bad code on your hardware, where it may cause firmware failure.
The best environments go a step further and allow you to optimise your code. You can set breakpoints to see what happens when your program reaches a certain step, allowing you to tweak your variables accordingly; this is very useful in precise calculations, characteristic of antenna design. A MATLAB feature called the Profiler runs your algorithm while measuring its performance, then generates a profile detailing the areas of your code that could use improvement, based on how long each section took to run and how much processing resource it required.

HDL and C Code Generation

Hardware description languages (HDLs) and C/C++ are the languages used to design and implement logic on supported microcontrollers, microprocessors and FPGA devices. They are the core of every embedded system, like the Xilinx FPGA board in Figure 2. When developing complex wireless systems involving several devices, it is inefficient to keep the simulated algorithms and the IC programming separate. Software like MATLAB enables the automatic generation of HDL and C code using MATLAB Coder.

To illustrate, we will generate C code from a Kalman filter algorithm. A Kalman filter is an optimal estimation algorithm used for parameter prediction; it is quite popular in vehicle navigation and guidance, computer vision and wireless systems design. MathWorks provides a write-up of the example in use: kalmanfilter.m is my function file and ObjTrack.m is my algorithm, which defines the inputs, runs the Kalman filter and plots the result in a graph, Figure 3. Conversion uses MATLAB Coder: add the function as the entry-point file, define its input types, then build the C code, Figure 4. The generated C code can be found in your MATLAB code directory.

SDR Hardware Prototypes

Software-defined radios (SDRs) deserve a post of their own, so they will be covered only briefly here. These are radios whose components, traditionally implemented in hardware, are implemented in software: the filters, amplifiers and modulators/demodulators are written as code. In the previous section we discussed code generation; what SDRs offer is the flexibility to test and implement wireless designs and architectures, with the provision to add more features in future. SDRs are used in conjunction with FPGAs, GPPs (general-purpose processors), DSPs or ASICs (application-specific ICs) to implement various wireless architectures. It is a low-cost method that is becoming increasingly popular in wireless systems design.

Verification

Finally, the system is rigorously verified using simulated and on-field test parameters to ensure the best product […]
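For readers without MATLAB Coder at hand, the predict/update structure that such generated C code implements fits in a few lines. This is a minimal scalar Kalman filter in Python (not the MathWorks ObjTrack example; the noise variances and the constant "true value" are illustrative assumptions):

```python
import random

# Minimal 1-D Kalman filter: estimate a constant value from noisy readings.
# q, r and the true value are illustrative, not from the MathWorks example.
q, r = 1e-5, 0.1 ** 2      # process and measurement noise variances
x, p = 0.0, 1.0            # initial state estimate and its variance
true_value = 1.25

for _ in range(50):
    z = true_value + random.gauss(0.0, 0.1)   # noisy measurement
    # Predict: the state is modelled as constant, so only uncertainty grows.
    p += q
    # Update: blend prediction and measurement using the Kalman gain.
    k = p / (p + r)
    x += k * (z - x)
    p *= (1.0 - k)

print(f"estimate = {x:.3f} (true value {true_value})")
```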

How Computers ‘See’ and Add Value to Your Media: An Intro to Computer Vision (10 Apr 2018)

This decade has been defined by advances in data-driven technologies and learning algorithms. Terms like AI and automation have been used extensively to explain current trends in just about every industry; they have also been used to spark debates over fears of mass unemployment and the increased consumerism these technologies may bring. It is important that you, yes YOU, the reader, check and understand how these technologies may transform your way of life in the years to come. One you may or may not have heard of is computer vision, which is likely to transform my current area of work, and in this post I look at what it means for you and me.

Okay, Fancy Term, But What Is It?

Computer vision (CV) is a branch of computer science that deals with enabling computers to process digital visual data and perform computations to make decisions based on that data. In simpler terms, computers can see and respond to images and videos provided to them, live or recorded, with a high level of accuracy and understanding. Image-processing algorithms are at the heart of this analysis of images and videos (video is just images taken frame by frame). Computers can see more than photos of bananas, though: image-processing algorithms can also be applied to thermal (infrared) imaging, medical (X-ray and CT) scans, satellite imaging and other forms humans can't detect. CV has proven incredibly important to some of the most talked-about companies in the world: Tesla uses CV to control its driverless cars, while Google Photos has already categorised my photos by people and places. These are just some of the ways CV is being used; the possibilities are endless.

Is Computer Vision Important?

A study by Cisco projected that by 2019, 80% of all Internet traffic will be video, and we are a year away from that reality. Hmm. Maybe I should be making videos instead of bloggi… I digress. According to that study, there is an ongoing explosion of video content, and without CV algorithms most of the data that could be generated from that content will be wasted. In the media and entertainment world, information derived from videos can be used to position and time adverts more efficiently, with sufficient knowledge that they will be seen and interacted with; check out how TheTake is doing it in a very interesting way. CV has also been used extensively in sports broadcast, especially for tracking fast-moving objects and identifying them. Post-match analysis of sports video, Figure 1, gave rise to richer sports commentary, very useful for coaches and fans.

On the hardware level, CV is useful in the automotive industry (as discussed with Tesla); in manufacturing, where quality assurance can be aided by CV (check out Sight Machine, a company that uses CV and other AI techniques to improve manufacturing); and in farming (for this, check out Prospera), to estimate crop yields and more. These systems may work hand in hand with IoT devices to deliver decisions over the Internet. One limitation of CV applications is poor-quality images; however, we see that changing year after year, with cameras capable of taking higher-resolution images at a higher dynamic range.
They even include processors that perform image stabilisation, noise reduction and defect removal, all while becoming smaller and more robust.

What Really Happens?

So far I've mentioned image processing and algorithms; let me explain further. An image fed to a computer can be broken down into individual pixels, each defined by its colour, or chromaticity. There are several ways to represent colour, but a popular scheme is the RGB value, which gives the intensity of red, green and blue as integers between 0 and 255; for example, (201, 250, 100) represents tennis-ball green. To perform image analysis, you tell the computer which RGB value to track: in our example, you feed it the RGB value of tennis-ball green and images of a tennis court with a match in progress. The scene is analysed pixel by pixel until the computer lands on the pixel whose RGB value differs least from the one provided.

That covers the basics, but in reality things need to be more efficient. Analysis is better performed using kernels, which analyse a patch of pixels and characterise it; kernels can then be combined to characterise a combination of features, and with this, complex images can be detected. Convolution can be added to aid detection: a series of inputs from an image each carry a specific weight (the input value is multiplied by the weight) and the results are summed. This is used to generate useful kernels for further analysis of the images; a convolutional neural network (CNN) is exactly such a system, one that learns to generate useful kernels.

Beyond this, CNNs perform image processing in layers: layer 1 may detect lines (1D), layer 2 shapes (2D), layer 3 shadows (3D), and so on. Usually, the more layers used, the better the computer's ability to accurately identify objects and make meaningful decisions. The use of a multitude of layers, as in Figure 2, gave rise to the term deep learning. It goes even further, with the likes of Markov models coming into play to provide more accurate results.

Where CV Is Best Applied

CV is likely to revolutionise several fields and industries. We will begin to see smarter devices and robots using imaging to perform a variety of tasks: drones equipped with cameras to report on drought and forest cover and immediately establish optimal irrigation schemes; CV experts and doctors collecting imaging records for faster, more accurate diagnosis. The results of applied computer vision could massively reduce the cost of goods as manufacturing processes become more streamlined. The entertainment sector, too, stands to make massive profits from applied CV. Imagine being able to read your audience's reactions and quickly adjust […]
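A toy version of that pixel-by-pixel colour search takes only a few lines. Here is a sketch in Python with NumPy (the "frame" is random data standing in for real video, and brute-force nearest-RGB matching is exactly the naive approach that the kernels and CNNs described above improve on):

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640, 3))   # stand-in for a video frame

target = np.array([201, 250, 100])                 # "tennis-ball green"

# Squared distance between every pixel's RGB value and the target colour.
diff = ((frame - target) ** 2).sum(axis=2)
row, col = np.unravel_index(diff.argmin(), diff.shape)
print(f"Closest pixel at ({row}, {col}), RGB {frame[row, col]}")
```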

What a Systems Engineer Can Do For You: 7 Guiding Principles (29 Mar 2018)

For several months now I have worked as a systems engineer in the broadcast field. Having applied myself as one during this period, I have realised there are certain best practices to consider in the role. Research on the topic led me to the principles and processes guiding the work of systems engineers in all sorts of fields: manufacturing, defence, telecommunications, power systems. All follow standard models to execute their duties, and in this post I shall elaborate on what they are.

What is Systems Engineering?

It has long been defined as an interdisciplinary set of technical and managerial activities that aims to bring together a functioning whole (a system) of distinct parts that solves a unique problem and meets a client's requirements. To live up to that definition, an engineer has to employ something called systems thinking: a philosophical approach to the design and implementation of functioning systems, the ability to see them as a sum of their parts and to understand the causality among those parts. In addition, it requires you to think about the day-to-day use of the system, future improvements and upgradeability, in order to provision and manage it successfully. Fortunately, people smarter than I am have been developing this idea for a while. ISO/IEC/IEEE 15288:2015 defines the systems and software engineering life-cycle processes, establishing a common framework of processes and descriptions for the life cycle of human-made systems; one example is the QFD House of Quality for enterprise product development processes, as in Figure 1. It is from such work that the basic principles and models of systems engineering have been developed.

Systems Engineering Principles

There are some basic procedures to undertake in order to deliver a well-functioning system:

1. System requirements analysis
2. Physical and functional design
3. Effectiveness evaluation and decision
4. System integration
5. Simultaneous/concurrent engineering
6. Verification and validation
7. Support analysis and design

This is quite an oversimplification, and one could generate an even longer list, but these are the principles under discussion for now.

System Requirements Analysis

This is the first stage of the process. You have landed a contract to set up a system with a degree of technical complexity that few can deliver. The requirements analysis is a detailed description of what the finished product should be and what problems it is supposed to solve for the user; it may be a document the user develops alone or with technical guidance that you offer. The analysis requires an understanding of the following:

- The system requirements
- The user's expectations, needs and wants for the final outcome
- Technical knowledge of feasible solutions
- The costs and risks involved

These four items are developed in connection with each other; for example, a particular technology may meet the user's needs but be too expensive to implement. It is at this stage that you break the problem down into sets of smaller, well-defined problems, each with a different approach to execution: audio production, video production, transmission and streaming, say, are four problems in a broadcast facility that a systems engineer may be tasked to solve.
Strive to apply your technical understanding and design skills to ensure customer satisfaction at this level, and avoid haphazard segmentation of the project just because different technologies or elements are involved. The aim is to develop an optimal system involving all of the parts, avoiding developmental issues that bring increased costs and delayed schedules. The engineer's knowledge and creativity show here: implementation can be seen as an artistic expression incorporating solutions developed from great ideas.

Physical and Functional Design

The fun part of this entire process is seeing how everything will come together. At the end of the day, components will have to be connected, software will have to be configured or coded, and the project will come to an end. The physical design details the different physical components and how they will be linked together; schematics and block diagrams are useful here. It also takes into account the constraints within which the project operates, such as physical space, environmental factors and budget. In the real world, physical design leads to the assembly and interconnection of components.

In contrast, the functional design is the logical framework of the system. It details how one output leads to another and how events are managed within the system, and is usually represented diagrammatically by a flowchart, logic circuit or signal-flow graph. The functional design determines whether the proposed solution makes sense and whether the suggested components will perform as expected; it is later used to configure (in hardware or software) the assembled components. An example of a functional and physical design can be seen in Figure 2, where we have an incoming signal and its desired output. The physical and logical designs can be analysed together using simulation software.

Effectiveness Evaluation and Decision

This is mostly performed at an early stage, through a series of meetings with the clients/users. Do the earlier analyses and design outcomes match the customer's expectations? The comparison is used to offer modifications and clarity about the designs, since integrating before the engineers and the clients understand one another leads to wasted time and money.

"Effectiveness evaluation is measuring the extent to which targets are being met, and detecting the factors that hinder or facilitate their realization. It also involves establishing cause-effect relationships about the extent to which a particular policy (or a set of policies) produces the desired outcome." (businessdictionary.com)

This evaluation allows the engineers to put all available alternatives on the table before the stakeholders, giving an elaborate outline of the risks involved and the expected outcome of each alternative. At this stage the engineer improves the suggested solutions using input from the stakeholders, as they are most concerned with the overall benefit of the system.

System Integration

The physical labour comes at this point, where you build […]

Are We Ready For The Future? 4 Trends We Need To Look Out For (14 Mar 2018)

Whether we like it or not, the future is already here with us. Hot terms like AI and machine learning have been around for some time now. Automation is rapidly replacing manpower, and only the owners of such technology seem to be benefiting, while consumption is encouraged among the masses as they reap the rewards. What does this mean for us in the technical landscape? Like our ancestors before us, we adapt. In this article we look at some trends in broadcast and media content delivery and their technical implications for engineering.

Times have been changing, and with new technology come new ways of doing things. Recently, promising platforms have taken root and seem to be growing year by year. To highlight a few areas that have captured my interest:

- OTT (over-the-top) services
- VOD (video-on-demand) services
- AR (augmented reality), VR (virtual reality) and interactive media
- Viewer data analytics

We have interacted with some of these already, through the likes of YouTube (AVOD, advertising-funded VOD) and Netflix (SVOD, subscription VOD). AR and VR have featured in popular mobile and video games over the last few years. Viewer data analytics has been used extensively to recommend content based on your profile, geographical location and usage history, and the complexity of the algorithms involved increases every day.

OTT and VOD Services

OTT services are those in which the audio and video content offered by the creator goes directly to viewers via the Internet: the conventional infrastructure used for terrestrial TV and radio broadcast plays no part in the process. All that is required is a screen and an Internet connection, which, as we have seen before, will not be much of a problem in the coming years. VOD services, Figure 1, offer audio and video content to consumers at their convenience, hence the on-demand part. These services have been growing in Kenya, beginning with the embrace of YouTube, then the arrival of SVOD services like ShowMax and Netflix; recently Viusasa, a Kenyan-owned SVOD platform, was launched and has been popular since.

AR and VR

Augmented reality is a video technology that has grown in popularity over the past few years. It enhances the perception of reality by superimposing images, text or other graphics on live video captured by fixed or mobile cameras. AR is increasingly used globally for video entertainment and gaming. In Kenya, AR offers opportunities in education, with collaboration as a key application, and in business, with applications in e-commerce, real estate, advertising and the auto industry (Nyamwamu and Onsongo, 2016).

Virtual reality is an immersive experience of a computer-generated environment, usually by means of a headset, Figure 2, that lets the user see the immersive content while everything else is blanked out. Globally, companies like Google and Facebook have invested large sums in this technology. Locally, we are seeing the likes of BlackRhino VR already competing in this space; the firm has provided VR solutions for major corporations in the country, notably Safaricom and the Kenya Wildlife Service. Increasing demand for VR technology could mean more room for other firms to develop.
Viewer Data Analytics and Big Data

This is quite a complex topic, and deep discussions on data can be had with my girlfriend Cate Gitau. For our focus today, however, I'll look at the impact of viewer data analytics and why it leads to greater user engagement and retention in media. In recent years, data analysis has become ever more advanced as it has become ubiquitous, and media companies have embraced it to get the most out of their platforms. It involves the systematic analysis of viewer patterns such as watch/listen time, history, devices and other, perhaps intrusive, areas covered by the terms and conditions we clicked 'Agree' on but should probably have read. Companies then use this analysis to present content in the way that is most appealing and most likely to keep us engaged; for many of these platforms, engagement time is a key metric. I recently had a demonstration from Futuri Media of how they use data to drive engagement for media broadcasters. This is not the only way big data can be used, though: to provide more meaningful content in our country, analyses of the different regions could be gathered to ensure a better spread of infrastructure and lessen the burden of connectivity for the majority of the population.

The Numbers

We are most likely to react when money is on the table, so let's get to it! According to PwC's Entertainment and media outlook: 2017–2021, An African perspective report, the full spectrum of Kenya's entertainment industry was worth USD 2.1 billion and is expected to reach USD 3.2 billion in 2021. Revenues from the Internet services discussed earlier are expected to rise at a compound annual growth rate (CAGR) of 10.5% to USD 1.0 billion by the end of 2021, and by then the Internet advertising budget will have doubled to USD 227 million, a CAGR of 13.6%. Interestingly, the report also details Kenya's preference for PC and console gaming over casual/mobile gaming; this industry is set to grow to USD 104 million at a CAGR of 11.5%, which will also drive the growth of e-sports in the country. E-sports revenues will likely come from streaming advertising, consumer contributions, ticket sales and sponsorship. In South Africa, revenues from VR technology are expected to grow at a very steep CAGR of 72.6% to R422 million.

Technical Implications

Darwinism is always a factor when it comes to who gets to thrive, and the engineering community faces high expectations in leading the way forward. As Marshall McLuhan put it in 1964, the medium is the message: we ought to find innovative ways to deliver content and create ways for advertisers to reach […]
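As a quick sanity check on figures like these, the CAGR formula is simple enough to verify yourself. Here is a small Python sketch over the Kenya-wide numbers quoted above (treating 2017 as the base year is my assumption about how the report counts the interval):

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Kenya's entertainment industry: USD 2.1bn -> USD 3.2bn, assumed 2017-2021.
rate = cagr(2.1, 3.2, 4)
print(f"Implied CAGR: {rate:.1%}")                    # ~11.1%

# And the other direction: what does 10.5% a year compound to over 4 years?
print(f"Growth factor at 10.5%: {1.105 ** 4:.2f}x")   # ~1.49x
```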

Easy Ways to Simulate the Strongest Microwave Links (27 Feb 2018)

Having recently been involved in several microwave link installations and servicing jobs, I have gathered some best practices for installing digital IP microwaves and obtaining the best PtP (point-to-point) links from them. A PtP link is simply a directional link between microwave antennas with a clear line of sight. During installation, several calculations have to be made in advance to ensure the best possible link and to mitigate the chances of link failure. With the recent development of online geographical maps, these calculations have become even simpler, and in this post we will dive into using those tools. In my field of work we use microwave links for distribution and contribution. Distribution links are mainly the links between a studio and a transmitter site, such as the one mentioned here; contribution links are the links between an OB (outside broadcast) site and the studio.

Path Profiles

A path profile is a straight-line cross-section between two points on the earth's surface, used to obtain the clearance characteristics between them. From secondary-school geography, for those who can remember, it was obtained by drawing a straight line between two points on a topographical map (at a scale of 1:50,000, for example), reading off the altitudes of all the points along the line, then plotting altitude against distance. Joining the points smoothly gives you the path profile required for your point-to-point connection.

Beyond this, an earth-bulge calculation has to be performed for points on the profile close to the straight line of sight. This compensates for atmospheric refraction: variations in the refractive index of the atmosphere bend microwave rays, and the effect is modelled with the effective earth-radius factor k. A value of k = 4/3 is usually taken, corresponding to the ray bending slightly downwards; other atmospheric conditions may cause k to drop to about 2/3, which bends the ray upwards. The earth-bulge constraint helps you plan the heights at which to set your antennas, with the calculation:

h = (d1 × d2) / (12.75 × k)

where h is the bulge in metres and d1 and d2 are the distances, in kilometres, from the point in question to each end of the path.

After this we obtain the radius of the first Fresnel zone which, to put it technically, is the locus of all points surrounding a radio beam from which reflected rays would have a path length one half-wavelength greater than the direct ray; it may be understood simply as the region around the line of sight that must be kept clear for the strongest signal link. The radius of the first Fresnel zone is:

F1 = 17.3 × √( (d1 × d2) / (f × D) )

where F1 is in metres, d1 and d2 are the distances in kilometres to each end of the path, D = d1 + d2 is the total path length in kilometres, and f is the frequency in gigahertz.

A minimum 60% clearance criterion is also required: obstacles must clear 0.6 of the first Fresnel zone calculated above, evaluated with k equal to 2/3. With this, we can now use the manufacturer's data for our radios and antennas to determine the final parameters in our planning.
These include:

- Antenna gain
- Branching losses at both ends of the link
- Feeder equipment losses

Alternatively…

With all this in place you could gather your equipment and start right away. However, the process just discussed may be lengthy and prone to errors, depending on your maths skills. At the time of writing, technology has made more complicated areas of life, such as dating, as simple as swiping right, so why not this too? Thankfully, many manufacturers offer proprietary simulators for determining a clear PtP, or even PtMP (point-to-multipoint), link from the comfort of your computer. One such manufacturer is Ubiquiti, with their popular link simulator.

Using Ubiquiti's tool is fairly simple. I simulated a link, Figure 1, between the Chiromo area of Nairobi and Limuru, where broadcast transmitter sites are usually found. To start, obtain the coordinates of the two points; during site surveys I find My GPS Coordinates, a simple free Android app that reports your current coordinates, very useful. With the coordinates of your access point (AP) and station (STA), you can simulate the link between the two sites. Here I have -1.275106, 36.807504 as my access point and -1.127295, 36.635714 as my station. In a previous installation we used Rocket M5 radios operating at 5 GHz with RocketDish antennas with a gain of 34 dBi; antennas with 30 dBi gain would also have sufficed, but the higher-gain antennas gave us better signal strengths. This information is filled in on the section shown in Figure 2, together with the channel width.

At this point you leave it to the simulator to perform the calculations and report the signal strength. If a link is not achieved between the two points, the simulator will state that the link is obstructed. This can be rectified by raising the antenna (to as high as is reasonable) or by moving the station position to get a clear LoS; if that is still not achievable, consider setting up repeater stations to route around the obstructions. With a clear line of sight, the next step is to check whether the signal strengths are satisfactory: the higher the better. According to my simulation, Figure 3, -90 dBm is weak while -60 dBm is good. Aim for the strongest link possible to counter the effects of fog, precipitation and changes in atmospheric gases. The same steps can be used for PtMP links.

Conclusion

This is a helpful skill I gathered from setting up microwave links and troubleshooting when failures occur. To ensure the highest availability over a year, the strongest possible links should be set up at your client's site. Understanding the important parameters helps you plan before installation, and the performance […]
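If you would rather script the path-profile maths than work it by hand, the two formulas above drop straight into a few lines of Python (a sketch; the sample obstacle position, path length and frequency are illustrative, not figures from the Chiromo–Limuru link):

```python
import math

def earth_bulge_m(d1_km: float, d2_km: float, k: float = 4 / 3) -> float:
    """Earth bulge in metres at a point d1/d2 km from the path ends."""
    return (d1_km * d2_km) / (12.75 * k)

def fresnel1_m(d1_km: float, d2_km: float, f_ghz: float) -> float:
    """Radius of the first Fresnel zone in metres."""
    d_total = d1_km + d2_km
    return 17.3 * math.sqrt((d1_km * d2_km) / (f_ghz * d_total))

# Illustrative obstacle 10 km along a 25 km path at 5 GHz.
d1, d2, f = 10.0, 15.0, 5.0
r1 = fresnel1_m(d1, d2, f)
bulge = earth_bulge_m(d1, d2, k=2 / 3)      # worst-case k for clearance
clearance = 0.6 * r1 + bulge                 # 60% criterion plus the bulge
print(f"F1 = {r1:.1f} m, bulge = {bulge:.1f} m")
print(f"Required clearance over the obstacle: {clearance:.1f} m")
```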

Getting It Right With Audio Quality And Consistency (14 Feb 2018)

Of the twenty-one senses we are said to have, hearing plays a large role in how we experience the world we live in. When it comes to video, humans tend to accept the limitations of the current generation of technology; remember when monochrome TVs were widely adopted? Mom remembers. However, we always expect clean, crisp audio from our entertainment and news platforms: bad-sounding media is often unforgivable in a way that poor image quality is not. Herein lies the need to ensure that consistently great audio reaches your listeners. Let's look at how the audio engineers in your plant can contribute to the highest possible audio quality and consistency.

Audio media goes through two major processes during its creation: audio production and audio processing. Production involves the activities and equipment used to capture or create sounds, including audio design, mixing, editing, dubbing, applying sound effects and balancing sources. Audio production is beyond the scope of this topic, but will be referenced, as it comes before the audio-processing chain. The audio-processing chain refers to the activities and processes that give your audio a particular desired sound, so that the output of your production site has a particular mood and feel. This is achieved through technical manipulation of the audio signal, based on how you want your audio to impact your listening audience, an interesting scientific field called psychoacoustics.

Quality

Audio engineers need to ensure the best signal quality out of the production sites (audio labs, recording studios, FM and TV studios, etc.), and this starts with the infrastructure used to capture and transmit the sound. Isolation and acoustic treatment were covered in a previous post; beyond that we have the transmitting elements: cables and connectors. XLR cables should be well fabricated and the audio cable itself should be of high quality. I've had instances of cables I bought having rusted sleeves! So be wary of those Luthuli Avenue stores and pick the right brand. High-quality connectors should be used for the best-sounding audio (the popular Neutrik connectors should suffice), and high-quality microphones for better sound capture. Be warned, though: great-sounding audio doesn't come cheap.

Testing your connections can be quite easy as long as you do not have complicated cable paths. Also avoid running audio cables alongside power cables, to minimise interference. Test for any shorts between the cable elements (hot, cold and sleeve/ground) using a continuity test; resistance along the sleeve should be as low as possible (the rusted sleeve I described earlier had high resistance). Testing for audio in a large plant after all the cables have been terminated can be a gruelling task, so it is best to test as you terminate each connection, ensuring you are satisfied at every step.
Now check your equipment for any distortion from clipping before the signal reaches the audio processor. In an FM plant, the standard equipment before the processor includes:

- Microphone preamps
- Console summing amplifiers
- Communication devices such as phone systems and remote links
- Analogue-to-digital converters
- Stereo profanity delay
- Computer sound cards

Consistency

Audio quality MUST be ensured before processing: manipulating bad audio in the hope of getting good results is a worthless effort. In the audio-processing chain, the sound engineer can perform a set of operations to fine-tune the signal. For best results, use linear, uncompressed audio formats such as WAV rather than compressed formats like MP3. A digital sound processor may be added to your chain to perform the following operations:

- Multiband compression
- Stereo expansion
- Equalisation
- Automatic gain control

Multiband compression is a form of dynamic-range compression, performed either to amplify low levels or to reduce high levels. It is important, say in traffic, that quiet passages do not get lost in the background and loud passages are not uncomfortable for listeners; in a car, the dynamic range cannot exceed about 20 dB without causing problems. A multiband compressor examines the audio fed to it and compresses only the parts of the signal that need compressing, which allows engineers to increase loudness without much fear of distortion.

Stereo expansion (widening) is a technique used to expand the stereo image, the perceived spatial location of the sound sources; it increases the perceived width of your audio. Panning is the most important technique here, as it allows you to place instruments or vocals across as wide an area as desired. An extreme version of this is binaural panning, which emulates human hearing by letting you position a source so that your ears perceive the sound as coming from in front, behind, above, below, or to the left or right of the listening position, even on a stereo output. Get a good set of headphones and enjoy this video.

Equalisation (EQ) is simply the manipulation of the different frequency components of your signal using an equaliser. Note that, for the reasons discussed earlier, dynamic-range compression should come before EQ for the best perceived effect; otherwise it will be difficult to establish what the EQ is doing. The equaliser is a circuit, or a DSP (digital signal processing) plug-in, built from linear filters. EQ is the way to give your audio a particular mood, depending on how you play with the low, mid and high frequencies.

Automatic gain control (AGC) is usually the final step before the output. In electronics, it is a closed-loop circuit whose feedback keeps the output level steady despite a variable input amplitude; it is used to ensure a consistent volume for the audio signal. In Figure 2, the signal to be gain-controlled feeds a diode and capacitor, which produce a peak-following DC voltage.

Again, the equipment following the processing chain in an FM plant can also affect the quality of the audio. It should also be checked for proper […]
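A digital cousin of that diode-and-capacitor peak follower takes only a few lines. Here is a hedged sketch in Python of a simple single-band AGC (the target level and the attack/release constants are illustrative, not tuned broadcast values):

```python
import math

# Simple one-band AGC: track the signal envelope with fast-attack /
# slow-release smoothing (the digital cousin of the diode-and-capacitor
# peak follower), then apply gain so the envelope sits near a target.
TARGET = 0.5      # desired envelope level (full scale = 1.0); illustrative
ATTACK = 0.01     # fast rise, like the diode charging the capacitor
RELEASE = 0.002   # slow fall, like the capacitor discharging

def agc(samples):
    env, out = TARGET, []            # start at the target: unity gain
    for x in samples:
        peak = abs(x)
        coeff = ATTACK if peak > env else RELEASE
        env += coeff * (peak - env)  # the peak-following "DC voltage"
        out.append(x * (TARGET / max(env, 1e-6)))
    return out

# A quiet burst then a loud one (18x apart) come out at similar levels.
tone = [0.05 * math.sin(0.05 * n) for n in range(4000)]
tone += [0.9 * math.sin(0.05 * n) for n in range(4000)]
levelled = agc(tone)
print(max(levelled[3000:4000]), max(levelled[7000:8000]))  # both near 0.5
# A broadcast chain would follow this with a limiter to catch the brief
# overshoot right at the quiet-to-loud transition.
```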


5G and the War for Supremacy: The 4 Key Technologies Involved http://symonmk.com/5g-4-key-technologies/ http://symonmk.com/5g-4-key-technologies/#respond Wed, 31 Jan 2018 07:00:19 +0000 http://symonmk.com/?p=766 Fifty years from now, when you're aged and wrinkled (assuming we won't have discovered the cure for aging), you'll be sure to tell your grandkids of the war for the rise of the 5G cellular network from 2016 to 2020. You'll explain to them how differences in spectrum efficiency, power efficiency and connectivity performance decided the ultimate victor, the challengers being MU-MIMO, D2D, NOMA and mmWave technologies; how the battles were fought, not on the ground or at sea, but through the electromagnetic spectrum, led by top research scientists all over the globe. You'll remember to tell them that the puppet-masters were the telco giants, while we, the wee subscribers, could only hope for the best outcome. Better yet, direct them here for a simplified description of the technologies currently involved in the 5G revolution.

Silly intros aside, I recently attended a lecture on Key Wireless Access Technologies in 5G and IoT Systems, held on 15th January 2018 at Strathmore University's Transcentury Auditorium. The event was organized by IEEE ComSoc in conjunction with the university. IEEE Distinguished Lecturer Prof. Rose Qingyang Hu, Figure 1, delivered the lecture on her ongoing research on the next generation of wireless communication. Her research focuses on network design and optimization schemes, the Internet of Things, cloud and fog computing, multimedia QoS/QoE, and wireless system modelling and performance analysis. In this post I share my insights on what I learned.

Just as with the previous generation, 4G, the search for a stable cellular communication standard involves a contest between newly developed or improved technologies. The 4G contenders included LTE (Long Term Evolution), WiMAX (Worldwide Interoperability for Microwave Access) and UMB (Ultra Mobile Broadband). As we now know, LTE became the preferred 4G technology and has been widely implemented and accepted. In Nairobi, 4G coverage has been expanding for about two years now. However, as user requirements have grown, a new standard of communication needs to be established. 5G promises to be the answer to the limitations of the current modes of communication. The 5G model of communication especially focuses on handling IoT (Internet of Things) devices, which broadly refers to the increasingly ubiquitous devices connected to the internet for general (smartphones, tablets) or specific (autonomous cars, wearables) applications. A quick example would be a smart stapler that connects to an app on your phone to tell you how many staples are left, an extremely useful device in my opinion. Let's explore the competing technologies in 5G.

MU-MIMO

This stands for multi-user, multiple-input multiple-output. MIMO systems are communication systems that transmit with multiple antennas at the source and receive with multiple antennas at the destination. MIMO has been in use in existing 4G networks, but to handle the requirements of 5G, MU-MIMO has been proposed to bring further improvements. In MU-MIMO, MIMO is performed simultaneously across n user equipment (UEs), as in Figure 2. In MIMO we have a channel matrix H, of size N x M, for M transmit antennas and N receive antennas.
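As a toy model of that matrix (the Rayleigh-fading entries, QPSK symbols and noise level below are my own illustrative assumptions), the received vector is y = Hx + n:

```python
import numpy as np

M, N = 4, 4                                  # transmit and receive antennas
rng = np.random.default_rng(0)

# N x M Rayleigh-fading channel: i.i.d. complex Gaussian entries
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

# one QPSK symbol per transmit antenna
x = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), size=M)

n = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = H @ x + n                                # what the N receive antennas see
```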
For this to be possible, beamforming, which steers transmission in different spatial directions, separates the receivers. This exists in 4G, but its performance is limited. 5G is set to push it to its limits; however, that is easier said than done, as several factors make it challenging:

Number of transmitting and receiving antennas
Coverage and the number of UEs to be supported per antenna
The precoding scheme to be used

Precoding schemes apply a matrix 'code' to the transmitted message, using 'pre-knowledge' of the channel, so that reception by each receiver is optimized. They are especially useful in MU-MIMO systems to reduce inter-user interference. Such schemes include matched filter (MF) precoding, zero-forcing (ZF) precoding and transmit Wiener precoding. The signal-to-interference-plus-noise ratio for the ith user served by one MU-MIMO base transceiver is

\[
\mathrm{SINR}_i = \frac{\lvert \mathbf{h}_i^{H}\mathbf{w}_i \rvert^{2}}{\sum_{j \neq i} \lvert \mathbf{h}_i^{H}\mathbf{w}_j \rvert^{2} + \sigma_n^{2}}
\]

where:
\(\mathbf{w}_i\) is the precoding vector for the ith user
\(\mathbf{h}_i\) is the channel vector for the ith user
\(\sigma_n^2\) is the variance of the complex circular zero-mean white Gaussian noise at the ith user

There's a great YouTube lecture on the mathematics of MIMO and MU-MIMO.

D2D

Device-to-device communication is another technology associated with the rise of 5G networks. It simply means devices communicating with each other directly. UEs involved in D2D have to be in close proximity so that radio communication can be sustained, which greatly reduces the load on base transceiver stations and access points. Existing technologies such as Bluetooth and WiFi Direct already enable this form of communication; it is up to cellular networks to leverage such concepts for the growing number of communicating devices. Devices use the existing cellular infrastructure, switching to D2D when they are close enough, as in Figure 3. Researchers have identified several advantages to this scheme, including:

Ultra-low latency in communication, as the signal path is greatly shortened
Reduced load on the core network, increasing spectral efficiency
Support for a variety of emerging applications such as machine-to-machine (M2M) and context-aware applications

D2D communication has already been established as part of the 4G LTE standard in the Third Generation Partnership Project (3GPP) Release 12. D2D looks very promising, as some of its features have been in use for quite a while now. Some challenges, especially in security and in pricing for service providers, are yet to be fully solved.

NOMA

Non-orthogonal multiple access (NOMA) schemes have recently become popular for achieving spectral efficiency, making them attractive for 5G networks. Up to and including 4G, cellular networks have used orthogonal multiple access (OMA) schemes: frequency-division multiple access (FDMA), time-division multiple access (TDMA) and code-division multiple access (CDMA). NOMA has been proposed because it meets requirements of 5G networks that are unattainable using OMA. NOMA can be achieved by two techniques: power-domain and code-domain. In the power-domain NOMA scheme, superposition coding at the transmitter and successive interference […]
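To see the SINR formula in action, here is a zero-forcing sketch in NumPy (the antenna count, user count, noise variance and unit-power normalization are example assumptions of mine). With ZF and more base-station antennas than users, the interference sum in the denominator nearly vanishes:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 8, 4                                   # base-station antennas, users
# rows of H play the role of h_i^H in the SINR formula
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)  # ZF precoder: pseudo-inverse of H
W /= np.linalg.norm(W, axis=0)                  # unit-norm precoding vectors w_i

sigma2 = 0.05                                   # noise variance sigma_n^2
G = np.abs(H @ W) ** 2                          # G[i, j] = |h_i^H w_j|^2
signal = np.diag(G)
interference = G.sum(axis=1) - signal           # sum over j != i
sinr = signal / (interference + sigma2)
print(10 * np.log10(sinr))                      # per-user SINR in dB
```

Swapping the ZF line for W = H.conj().T gives matched-filter precoding, which trades residual interference for simplicity.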
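Power-domain NOMA itself fits in a few lines. In this sketch (BPSK symbols and a fixed 80/20 power split are assumptions of mine), the near user performs successive interference cancellation (SIC): it decodes the far user's stronger signal first, subtracts it, then recovers its own symbols:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sym = 1000
p_near, p_far = 0.2, 0.8                 # more power to the far (weak) user
s_near = rng.choice([-1.0, 1.0], n_sym)  # BPSK symbols for each user
s_far = rng.choice([-1.0, 1.0], n_sym)

# superposition coding: both users share the same time/frequency resource
tx = np.sqrt(p_near) * s_near + np.sqrt(p_far) * s_far
rx = tx + 0.05 * rng.standard_normal(n_sym)   # near user's received signal

far_hat = np.sign(rx)                         # 1) decode the dominant far signal
residual = rx - np.sqrt(p_far) * far_hat      # 2) SIC: subtract it
near_hat = np.sign(residual)                  # 3) decode own symbols
print((near_hat == s_near).mean())            # ~1.0 at this noise level
```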

