Monday, December 16, 2013

Comprehensive customization for network appliances: meet our rackmount and micro box!

Acrosser Technology, a world-leading network communication designer and manufacturer, introduces two network appliances that deliver great performance and protection while simplifying your network. Each product has its own target market and appeals to a unique audience.

Acrosser’s ANR-IB75N1/A/B serves as an integrated Unified Threat Management (UTM) device that covers all of your networking security needs. With a 3rd generation Intel Core i processor, it easily delivers increased processing throughput. For integration with information security systems, the device also provides functions such as anti-virus, anti-spam, firewall, intrusion detection, VPN, and web filtering, offering complete solutions to meet the demands of various applications.

Key features of the ANR-IB75N1/A/B include:
‧Support for LGA1155 Intel® Core™ i7/i5/i3 and Pentium processors
‧Intel B75 chipset
‧2 x DDR3 DIMM, up to 16GB of memory
‧2 x Intel 82576EB fiber ports
‧8 x Intel 82574L 10/100/1000Mbps ports
‧Two pairs of LAN ports with bypass support (LAN 1/2 + LAN 3/4)
‧LAN bypass controllable via BIOS or jumper
‧CF socket, 2 x 2.5” HDD bays, 1 x SATA III, 1 x SATA II
‧Console, VGA (pin header), 2 x USB 3.0 (external)
‧Support for boot from LAN and console redirection
‧80 Plus Bronze PSU to reduce CO2 emissions and protect the environment
‧LCM module for a user-friendly interface
‧Standard 1U rackmount size

As for our micro box, the AND-D525N2 opens up more possibilities for different applications thanks to its small form factor (234mm × 165mm × 44mm). Aside from its space-saving design, the AND-D525N2’s three other major features are high performance, low power consumption, and a competitive price. Please send us your inquiry via our website (http://www.acrosser.com/inquiry.html), or simply contact your nearest local sales office for further information.
Key features of the AND-D525N2 include:
‧Intel Atom D525 1.86GHz
‧Intel ICH8M Chipset
‧DDR3 SO-DIMM, up to 4GB
‧1 x 2.5 inch HDD Bay, 1 x CF socket
‧4 x GbE LAN, Realtek 8111E
‧2 x USB2.0
‧2 x SATA II
‧1 x Console
‧1 x MiniPCIe socket

In addition to these two models, Acrosser also provides a wide selection of network security hardware. With more than 26 years of industry experience, Acrosser has the ODM/OEM capability to deliver customized solutions, shortening customers’ time-to-market and boosting their profitability.

For all networking appliance products, please visit:
http://www.acrosser.com/Products/Networking-Appliance.html

Product Information – ANR-IB75N1/A/B:
http://www.acrosser.com/Products/Networking-Appliance/Rackmount/ANR-IB75N1/A/B/Networking-Appliance-ANR-IB75N1/A/B.html

Product Information – AND-D525N2:
http://www.acrosser.com/Products/Networking-Appliance/MicroBox/AND-D525N2/ATOM-D525-AND-D525N2.html

Contact us:
http://www.acrosser.com/inquiry.html

Tuesday, October 1, 2013

Acrosser's gratitude to all visitors at MIMS!



Большое спасибо! (Thank you very much!) Acrosser extends its warmest gratitude to TAITRA and all the participants at the Moscow International Motor Show 2013 (MIMS)! Our award-winning product, the AR-V6100FL, not only earned the spotlight but also received positive feedback from our industry partners. We hope to see you again next year!

Monday, September 9, 2013

Fanless Mini-ITX mainboard with Intel Atom Processor


Acrosser Technology Co. Ltd, a global professional industrial and embedded computer provider, announces its new Mini-ITX mainboard, the AMB-D255T3, which carries the dual-core 1.86GHz Intel Atom Processor D2550. The AMB-D255T3 features onboard graphics via VGA and HDMI, DDR3 SO-DIMM support, a PCI slot, an mSATA socket with SATA and USB signals, and an ATX connector for easy power input. The AMB-D255T3 also provides complete I/O, including 6 x COM ports, 6 x USB 2.0 ports, 2 x GbE RJ-45 ports, and 2 x SATA ports.

For more information, please visit:
http://www.acrosser.com/News-Newsletter/62.html


Monday, August 19, 2013

In-vehicle PC exhibited at MIMS Moscow!


A fascinating feature of the AR-V6100FL is its smart power management function: Acrosser built a comprehensive power management subsystem that allows users to select the power management mode best suited to their specific application demands. Efficient heat dissipation also contributes to its high performance in rugged automotive environments.

As for the show, the Moscow International Motor Show 2013 (MIMS) is highly regarded in the Russian automotive industry. Last year, 1,379 exhibiting companies from 35 countries and 15,717 guests from 52 countries participated in the event, and 99.6% of visitors were industry professionals.

For more information, please visit:
http://www.acrosser.com/News-Newsletter/61.html

Tuesday, August 6, 2013

Levels for performance check




Common to all the embedded computer performance levels of the new boards and modules based on the AMD Embedded G-Series platform are their discrete-level graphics capabilities. Providing support for the latest DirectX® 11 API, they enhance all conventional graphics-intensive small-form-factor applications.



Refer to: http://embedded-computing.com/white-papers/white-small-form-factor-sff-designs-2/

Tuesday, July 30, 2013

In-vehicle information sharing


Most engineers and system integrators find it troublesome to install car computers in their business vehicles. How often have hardware configuration or software programming taken time away from their business? Above all these worries, establishing steady power management is the most important issue before integrating the entire system. Acrosser’s in-vehicle computers offer six stunning power management traits to overcome these difficulties.

ACROSSER Technology provides a complete product line of in-vehicle computers. The product line gained further attention after winning the 21st Taiwan Excellence Award with two outstanding in-vehicle computers: the AR-V6005FL and the AR-V6100FL. Acrosser also released its latest in-vehicle computer, the AIV-HM76V0FL, in late 2012. The company prides itself on offering not just products, but solutions. Please contact ACROSSER Technology for further consultation, volume quotes, or any other questions.

Product Information:
AIV-HM76V0FL

AR-V6005FL

AR-V6100FL

Award Information:

Contact us:

Wednesday, July 17, 2013

Introducing 2 Mini-ITX mainboards



With a total board height of less than 20mm, the slim profile of the AMB-D255T1 makes it a fit for almost any installation. With single-layer I/O ports and an external +12V DC power input, the AMB-D255T1 can be deployed even in space-constrained systems such as digital signage, POS, or thin clients. Its video outputs include both VGA and HDMI to cater to a variety of needs, and many digital signage partners have shown great interest in the AMB-D255T1 for their business sector.

The AMB-D255T1 carries one DDR3 SO-DIMM supporting up to 4GB of memory, an mSATA socket with USB signals and a SIM slot, and a DC jack for easy power input. For customers taking their systems to the next level, the AMB-D255T1 provides one PCI slot and one Mini PCIe expansion slot with a SIM card socket for further expansion. The Mini PCIe slot accepts either mSATA storage or USB-signal modules such as Wi-Fi or 3G/4G telecommunication modules.

The key features of the AMB-D255T1 include:
.Intel Atom D2550 1.86GHz
.1 x DDR3 SO-DIMM up to 4GB
.1 x VGA
.1 x HDMI
.1 x 24-bit LVDS
.6 x USB2.0
.4 x COM
.1 x GbE (Realtek RTL8105E)
.1 x PS/2 KB/MS
.1 x PCI slot
.1 x MiniPCIe slot for mSATA and USB device
.1 x SATA with power connector
.8-bit GPIO

The AMB-QM77T1 is dedicated to multiple applications, such as industrial automation, kiosks, digital signage, and ATMs. Supporting 3rd generation Intel Core i processors, the AMB-QM77T1 features an integrated GPU that supports the DirectX 11, OpenGL 4.0, and OpenCL 1.1 graphics libraries. As for display outputs, a maximum of three independent displays is supported, a perfect fit for gaming and multimedia business. In addition, 4 x USB 3.0 and 2 x SATA III connectors deliver high data-transfer rates.

Tuesday, June 25, 2013

About the credit-card-sized SBC

Panel PC, Embedded pc, Industrial PC

The initial goal in creating the Raspberry Pi – a credit-card-sized, Linux-based Single Board Computer (SBC) targeted primarily at education – was to respond to the decline in students engaging with computer science and related engineering disciplines. Our desire was to reverse the trend of children becoming consumers rather than creators. The following case study traces the hardware development process from an early failure through initial prototypes to the finished production design.




Refer to: http://embedded-computing.com/articles/case-card-sized-sbc/

Tuesday, June 18, 2013

MSC presents a Starter Kit


A new starter kit for COM Express™ modules with the AMD Embedded R-Series Accelerated Processing Unit (APU) is now available. The intelligent starter kit MSC C6-SK-A7-T6T2 contains a COM Express™ Type 6 baseboard, an active heat sink with fan, and two DDR3 memory modules.


...




Refer to:
http://smallformfactors.com/news/msc-kit-com-expresstm-type-modules/#at_pco=cfd-1.0

Tuesday, May 7, 2013

See you at ESEC Japan 2013!





ACROSSER Technology announces its participation in the 2013 Embedded Systems Expo and Conference (ESEC) from May 8th to the 10th. The event will take place at the Tokyo International Exhibition Center in Tokyo, Japan. We warmly invite all customers to come and meet us in the west hall at booth WEST 10-61.

Embedded PC, Panel PC, in vehicle pc



At ESEC 2013, Acrosser will highlight its latest endeavors in two major applications: networking and gaming. For networking, Acrosser’s latest rackmount product, the ANR-IB75N1, will be on display throughout the event. For gaming applications, Acrosser will exhibit its new All-in-One Gaming Board, the AMB-A55EG1. The board features great computing and graphics performance, and high compatibility with multiple operating systems. In addition, Acrosser also stresses its focus on other product lines, including Single Board Computers and the In-Vehicle Computer AIV-HM76V0FL.

We look forward to turning your dreams into reality at the 2013 ESEC!
We cordially invite you to visit our booth and discover our outstanding products!

New Product information:

(Networking Appliance)
ANR-IB75N1/A/B

http://www.acrosser.com/Products/Networking-Appliance/Rackmount/ANR-IB75N1/A/B/Networking-Appliance-ANR-IB75N1/A/B.html



(Gaming Platform)
AMB-A55EG1




http://www.acrosser.com/Products/Gaming-Platform/All-in-One-Gaming-Board/AMB-A55EG1/AMD-Embedded-G-Series-AMB-A55EG1.html

Contact:
http://www.acrosser.com/inquiry.html

Wednesday, May 1, 2013

About static analysis...


When it comes to software development, the old adage is best spun in a slightly different way: better "early" than never. Accordingly, static analysis can help those developing in Java to stay one step ahead of potential coding problems.

Today’s software development teams are under immense pressure; the market demands high-quality, secure releases at a constantly increasing pace while security threats become more and more sophisticated. Considering the high cost of product failures and security breaches, it is more important than ever to address these risks throughout the software development process. Potential problems need to be spotted early to prevent release delays or, worse, post-release failures.

...

Refer to:
http://embedded-computing.com/articles/static-helps-manage-risk-java/ 

Tuesday, April 23, 2013

The 2013 Embedded Systems Expo and Conference (ESEC) in Tokyo


Acrosser will attend the 2013 Embedded Systems Expo and Conference (ESEC) from May 8th to the 10th. The event will take place at the Tokyo International Exhibition Center in Tokyo, Japan. We warmly invite all customers to come and meet us in the west hall at booth WEST 10-61.

Tuesday, April 16, 2013

Next-generation FPGAs

FPGAs have become some of the most important drivers for the development of leading-edge semiconductor technology. The complexity of programmable devices and their integration of diverse high-performance functions provide excellent vehicles for testing new processes. It’s no accident that Intel has selected Achronix and Tabula, both makers of programmable devices, as the only partners granted access to its 22 nm 3D Tri-Gate (FinFET) process. In February, Intel also announced an agreement with Altera that will enable the company to manufacture FPGAs using Intel’s next-generation 14 nm Tri-Gate process.


Refer to:

http://dsp-fpga.com/articles/advances-in-eda-design-methodologies-led-by-next-generation-fpgas/

Tuesday, April 9, 2013

The Analog Front End (AFE), allowing the connection of the sensor to the digital world of the MCU



Embedded PC, in vehicle pc, Single Board Computer

Many of today's embedded systems incorporate multiple analog sensors that make devices more intelligent, and provide users with an array of information resulting in improved efficiency or added convenience. The Analog Front End (AFE), allowing the connection of the sensor to the digital world of the MCU, is often an assumed "burden" in designing sensor interface circuits. However, the latest concept in a configurable AFE, integrated into a single package, is helping systems designers overcome sensor integration challenges associated with tuning and sensor drift, thereby reducing time to market. The following discussion examines how the versatility of such a technology allows the designer to tune and debug AFE characteristics on the fly, automate trimming and adjust for sensor drift, and add scalability to support multiple sensor types with a single platform.


The ubiquitous use of sensors in our smart devices – from cell phones to industrial equipment and even medical devices – has increased the need for more intelligent sensor technologies that are more versatile, lower overall costs, and require fewer resources to develop and maintain.
Most analog sensor systems comprise three key elements: the analog sensor that measures a specific form of energy, the microcontroller (MCU) that processes the digital equivalent of the sensor’s signal, and, between them, the Analog Front End (AFE) system (Figure 1). The AFE receives the sensor’s signal and converts/transforms it for the MCU to use, as in most cases the sensor output signals cannot be directly interfaced to an MCU.

Figure 1: The Analog Front End (AFE) converts and conditions analog sensor signals for use by the MCU.


The challenge associated with current AFE design approaches is the time-consuming trial-and-error tuning process, and the lack of flexibility and scalability to support multiple sensors from a single AFE. Moreover, many AFEs do not account for sensor drift or adjust for sensor trimming during production, which directly reduces the quality of the sensor. However, new fully configurable AFE technology is enabling designers to overcome these hurdles.
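The two-point tuning step that a configurable AFE automates can be sketched in a few lines. This is a hypothetical illustration (the function names and sensor figures below are invented, not from any vendor’s API): given two reference measurements, it solves for the gain and offset that map raw ADC codes to engineering units.

```python
# Hypothetical sketch (invented names and figures, not a vendor API) of
# two-point AFE calibration: solve for the gain and offset that map raw
# ADC codes to engineering units.

def calibrate(raw_lo, raw_hi, ref_lo, ref_hi):
    """Return (gain, offset) from two reference measurements."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

def convert(raw, gain, offset):
    """Apply the calibration to one raw ADC sample."""
    return gain * raw + offset

# Example: a 12-bit ADC reads 410 at 0 degC and 3686 at 100 degC.
gain, offset = calibrate(410, 3686, 0.0, 100.0)
print(round(convert(2048, gain, offset), 2))  # mid-span reading -> 50.0
```

A configurable AFE performs the equivalent adjustment in hardware, which is what removes the trial-and-error loop from production trimming and lets it compensate for sensor drift over time.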
1. The importance of the AFE
2. Challenges to AFE designs

Let’s examine each of these challenges.

3. Configurable AFE eases calibration trial and error
4. Configurable AFE provides scalability
5. A software-supported design approach
6. Simplifying the burden of AFE designs



Sunday, March 24, 2013

Open source drives automotive innovation


The speed of innovation in automotive IVI is making a lot of heads turn. No question, Linux OS and Android are the engines for change.


The open source software movement has forever transformed the mobile device landscape. Consumers are able to do things today that 10 years ago were unimaginable. Just when smartphone and tablet users are comfortable using their devices in their daily lives, another industry is about to be transformed. The technology enabled by open source in this industry might be even more impressive than what we’ve just experienced in the smartphone industry.
The industry is automotive, and already open source software has made significant inroads in how both driver and passenger interact within the automobile. Open source stalwarts Linux and Google are making significant contributions not only in the user/driver experience, but also in safety-critical operations, vehicle-to-vehicle communications, and automobile-to-cloud interactions.

Initially, automotive OEMs turned to open source to keep costs down and open up the supply chain. In the past, Tier 1 suppliers and developers of In-Vehicle Infotainment (IVI) systems would treat an infotainment center as a “black box,” comprised mostly of proprietary software components and dedicated hardware. The OEM was not allowed to probe inside, and had no ability to “mix and match” the component parts. The results were sometimes substandard systems in which the automotive OEM had no say, and no ability to maintain.

With the advent of open source, developers are now empowered not only to cut software development costs, but also to control the IVI system they want to design for a specified niche. Open source software, primarily Linux and to some extent Android, comprises open and “free” software operating platforms. What makes Linux so special are the many communities of dedicated developers around the world constantly updating the Linux kernel. While there are many Linux versions, owned by a range of open source communities and commercial organizations, Android is owned and managed exclusively by Google.
To understand the automotive IVI space, it’s best to look at the technology enabled by Linux and what Android’s done to further advance automotive multimedia technology.

1. Linux OS – untapped potential at every turn
2. Android apps hit the road
3. Linux and Android driving together?
4. Exciting times ahead
...



Refer to: http://embedded-computing.com/articles/automotive-source-drives-innovation/


Monday, March 11, 2013

Performance management: A new dimension in operating systems

Given the increased complexity of processors and applications, the current generation of Operating Systems (OSs) focuses mostly on software integrity while partially neglecting the need to extract maximum performance out of the existing hardware.


 

Processors perform only as well as OSs allow them to. A computing platform, embedded or otherwise, consists not only of physical resources – memory, CPU cores, peripherals, and buses – managed with some success by resource partitioning (virtualization), but also of performance resources such as CPU cycles, clock speed, memory and I/O bandwidth, and main/cache memory space. These resources are managed by ancient methods like priorities or time slices, or not managed at all. As a result, processors are underutilized and consume too much energy, robbing them of their true performance potential.
Most existing management schemes are fragmented. CPU cycles are managed by priorities and temporal isolation, meaning applications that need to finish in a preset amount of time are reserved that time, whether they actually need it or not. Because execution time is not safely predictable due to cache misses, mis-speculation, and I/O blocking, the reserved time is typically longer than it needs to be. To ensure that the modem stack in a smartphone receives enough CPU cycles to carry on a call, other applications might be restricted to not run concurrently. This explains why some users of an unnamed brand handset complain that when the phone rings, GPS drops.
Separate from this, power management has recently received a great deal of interest. Notice the “separate” characterization. Most deployed solutions are good at detecting idle times, use modes with slow system response, or particular applications where the CPU can run at lower clock speeds and thus save energy. For example, Intel came up with Hurry Up and Get Idle (HUGI). To understand HUGI, consider this analogy: Someone can use an Indy car at full speed to reach a destination and then park it, but perhaps using a Prius to get there just in time would be more practical. Which do you think uses less gas? Power management based on use modes has too coarse a granularity to effectively mine all energy reduction opportunities all the time.
Ideally, developers want to vary the clock speed/voltage to match the instantaneous workload, but that cannot be done by merely focusing on the running application. Developers might be able to determine minimum clock speed for an application to finish on time, but can they slow down the clock not knowing how other applications waiting to run will be affected if they are delayed? Managing tasks and clock speed (power) separately cannot lead to optimum energy consumption. The winning method will simultaneously manage/optimize all performance resources, but at a minimum, manage the clock speed and task scheduling. Imagine the task scheduler being the trip planner and the clock manager as the car driver. If the car slows down, the trip has to be re-planned. The driver might have to slow down because of bad road conditions (cache misses) or stop at a railroad barrier (barrier in multithreading, blocked on buffer empty due to insufficiently allocated I/O bandwidth, and so on). Applications that exhibit data-dependent execution time also present a problem, as the timing of when they finish isn’t known until they finish. What clock speed should be allocated for these applications in advance?
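The “minimum clock speed that still meets the deadline” calculation the paragraph above describes reduces to simple arithmetic. The sketch below is illustrative only (the DVFS steps and cycle counts are invented, and a real scheduler must also absorb the unpredictability discussed above):

```python
# Illustrative only (invented DVFS steps and cycle counts, not any real
# scheduler's algorithm): choose the slowest available clock that still
# retires the remaining work before the deadline.

def min_clock_hz(remaining_cycles, time_left_s, available_hz):
    """Slowest frequency in available_hz meeting the deadline, else the max."""
    needed = remaining_cycles / time_left_s
    fast_enough = [f for f in sorted(available_hz) if f >= needed]
    return fast_enough[0] if fast_enough else max(available_hz)

dvfs_steps = [125e6, 250e6, 500e6, 1000e6]
# 30M cycles of work with 100 ms left needs only the 500 MHz step:
print(min_clock_hz(30e6, 0.100, dvfs_steps))  # -> 500000000.0
```

In practice the estimate must be refreshed continually as cache misses and I/O blocking change the remaining-cycles figure, which is exactly why the article argues for closed-loop management rather than a one-shot allocation.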
An advanced performance management solution
One example of managing performance resources is VirtualMetrix Performance Management (PerfMan), which controls all performance resources by a parametrically driven algorithm. The software schedules tasks, changes clock speed, determines idle periods, and allocates I/O bandwidth and cache space based on performance data such as bandwidth consumed and instructions retired. This approach (diagrammed in Figure 1) solves the fragmentation problem and can lead to optimum resource allocation, even accounting for the unpredictability of the execution speed of modern processors and data-dependent applications.
Figure 1: PerfMan controls all performance resources using a parametrically driven algorithm, leading to optimum resource allocation.
The patent-pending work-performed allocation algorithm uses a closed-loop method that makes allocation decisions by comparing work completed with work still to be performed, expressed in any of the measurable performance quantities the system offers. For example, if the application is a video player or communication protocol that fills a buffer, PerfMan can keep track of the buffer fill level and determine the clock speed and time to run so that the buffer is filled just in time. The time to finish will inevitably vary, so the decision is cyclically updated. In many cases, buffers are overfilled to prevent blocking on buffer empty, which can lead to timing violations. PerfMan is capable of precise performance allocation, keeping buffering to a minimum and reducing memory footprint. The algorithm can handle hard, soft, and non-real-time applications mixed together.
If the application execution graph is quantified into simple performance parameters and the deadlines are known when they matter, the algorithm will dynamically schedule to meet deadlines just in time. Even non-real-time applications need some performance allocation to avoid indefinite postponement. Allocating the minimum processor resources an application needs increases system utilization, resulting in a higher possible workload. The method does not rely on strict priorities, although they can be used. The priority or order in execution is the direct result of the urgency the application exhibits while waiting its turn to run, which is a function of the basic work to be performed/work completed paradigm.
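The buffer-fill policy described above can be caricatured as a proportional closed loop: compare the fill level against a just-in-time target and scale the producer’s clock allocation accordingly. This is a toy model with invented numbers, not PerfMan’s actual algorithm:

```python
# Toy proportional closed loop (not PerfMan's actual algorithm; all
# numbers invented): run the producer faster when the buffer is under
# its just-in-time target and slower when it is over.

def next_speed(current_hz, fill, target, min_hz, max_hz, k=0.5):
    """One cyclic update of the clock allocation, clamped to DVFS limits."""
    error = (target - fill) / target
    return min(max_hz, max(min_hz, current_hz * (1.0 + k * error)))

speed = 400e6
for fill in (0.2, 0.4, 0.5, 0.6):      # observed buffer fill fractions
    speed = next_speed(speed, fill, 0.5, 100e6, 1000e6)
print(round(speed / 1e6, 1))           # settles near 514.8 MHz
```

Each pass through the loop stands in for one cyclic update of the allocation decision, and the clamp models the platform’s minimum and maximum DVFS points.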
Extending to more dimensions
If tasks are ready to run in existing OSs, they will run, but do they need to? Can they be delayed (forced idling) if the OS knows it will not affect their operation?
Knowing the timing of every task and whether it is running or waiting to run with respect to its progress toward completion allows the software to automatically determine the minimum clock speed and runtime. Thus everything completes on time under all load conditions. Matching clock speed to the instantaneous workload does not mean the clock speed is always minimized. The goal of low energy consumption sometimes calls for a burst of high speed followed by idle, as in Intel’s HUGI. But even then, there is no benefit in running faster than the optimum utilization (executed operations per unit of time) would indicate. Fast clocking while waiting for memory operations to complete does not save energy.
The algorithm’s mantra of “highest utilization/workload at the lowest energy consumption” is largely accomplished with a closed-loop algorithm managing all performance resources.
In multicore systems, a balanced load, low multithreading barrier latency, and the lowest overall energy consumption cannot be achieved simultaneously. To resolve this, PerfMan can be configured to optimize one or several performance attributes. If minimum energy consumption is the goal, an unbalanced system with some cores that are highly loaded and others that are empty and thus shut down might offer the lowest energy consumption at the expense of longer execution latency and overall lower performance.
Accelerating threads to reduce barrier latency can also lead to higher energy consumption. However, meeting deadlines (hard or soft) overrides all other considerations. The precise closed-loop-based performance resource allocation algorithm can safely maintain a higher workload level, which in turn, allows pushing the core consolidation further than possible with existing methods and thus achieving higher energy reduction.
Implementation on VMX Linux
PerfMan has been implemented as a thin kernel (sdKernel) running independently of the resident OS. It has been ported to Linux 2.6.29 (VMX Linux), as shown in Figure 2. An Android port is nearing completion. The software takes over Linux task scheduling and interworks with the existing power management infrastructure. A separate version of the sdKernel provides virtualization and supports hard real-time tasks in a POSIX-compliant environment. Scheduling/context switching is at the submicrosecond level on many platforms, but because most Linux system calls are too slow for hard real-time applications, the sdKernel provides APIs for basic peripherals, timers, and other resources.
Figure 2: In a Linux implementation, PerfMan takes over Linux task scheduling and interworks with the existing power management infrastructure.

By monitoring performance, the software can detect unusual execution patterns that predict an upcoming OS panic and crash. In such cases, the sdKernel will notify mission-critical applications to stop using Linux system calls and temporarily switch over to sdKernel APIs (safe mode) while Linux is being rebooted.
VMX Linux supports a mix of real and non-real-time applications with efficient performance isolation while minimizing energy consumption. It can also provide hardware isolation/security and safe crash landing.
Benchmarks show the results
The energy consumption, measured in real time using a VMX-designed energy meter, was accumulated for the system and correlated to individual applications. A media player application (video and audio) was run on an OMAP35xx BeagleBoard first using standard Linux 2.6.29 (Figure 3 red graph) and then VMX Linux (Figure 3 blue graph).
Figure 3: Using VMX Linux on an OMAP35xx BeagleBoard achieves a 95 percent average load that finishes just in time.

Performance compliance (Perf Compl graph) shows how close the application tasks come to finish on time (center line). Below the line indicates deadline violations. Notice that with VMX Linux, a 95 percent average load is achieved with no prebuffering and no deadline violations, but it gets close. The total board energy consumption for the 46 seconds of video dropped from 68.7 W*sec to 27.6 W*sec with VMX Linux. The displayed data represents averages over a preset interval. As an additional bonus, when Linux is purposely crashed, the video disappears but the music plays on in safe mode with no audible glitches.
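The reported figures are easy to sanity-check: converting the measured energy into average board power and a relative saving gives roughly 1.49 W versus 0.60 W, a reduction of about 60 percent.

```python
# Sanity-checking the benchmark figures reported above: average board
# power and the relative energy saving over the 46-second clip.

clip_s = 46.0
energy_linux_ws = 68.7   # W*sec, standard Linux 2.6.29
energy_vmx_ws = 27.6     # W*sec, VMX Linux

avg_linux_w = energy_linux_ws / clip_s
avg_vmx_w = energy_vmx_ws / clip_s
saving_pct = (1.0 - energy_vmx_ws / energy_linux_ws) * 100.0

print(round(avg_linux_w, 2), round(avg_vmx_w, 2), round(saving_pct, 1))
# -> 1.49 0.6 59.8
```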
In short, the implementation creates a new approach to performance management with exciting results.

Refer to:

Monday, March 4, 2013

The need for embedded virtualization


Virtualization means different things to users with different types of applications. Most forms of virtualization employed in IT server environments aren't of interest to embedded system developers because they don't ensure that processing of time-critical tasks is deterministic. Instead, the way for single and multiprocessor platforms to support multiple operating environments while maintaining real-time responsiveness is to functionally partition processor resources so that they are controlled by specific operating environments, which run directly on the processor silicon rather than on virtual machine implementations.

Refer to:
http://embedded-computing.com/articles/the-multiprocessor-multi-os-systems/#utm_source=Multicore%2Bmenu&utm_medium=text%2Blink&utm_campaign=articles

Sunday, February 24, 2013

How to choose a processor?


In light of today’s converged processing paradigm, the days when selecting a processor was a relatively simple task are over. But examining a few key considerations can ease the decision-making process.

Panel PC, Embedded computer, Industrial PC

Selecting an embedded processor used to be a pretty straightforward task. Of course, this was back in “the old days,” when the focus was on a limited set of functions, user interface and connectivity didn’t matter too much, and power consumption wasn’t such an overarching issue. In today’s realm of converged processing, where a single device can perform control, signal processing, and application-level tasks, there’s a lot more to consider (Figure 1). While there are too many aspects of the processor selection process to detail here, let’s examine some of the more prominent areas that system designers must consider.
Figure 1: Today’s converged processing paradigm makes selecting a processor a more complex decision than ever.

Processor performance
System designers reflexively note the processing speed of a device as a major indicator of its performance. This is not a bad start, but it’s an incomplete assessment. It is clearly important to evaluate the number of instructions a processor can perform each second, but also to assess the number of operations accomplished in each core clock cycle and the efficiency of the computation units. And it is no longer uncommon to employ processors with multiple cores as a way of greatly extending the computational capabilities of the device (especially in the case of homogeneous cores) or clearly demarcating the control processing from the signal processing activity (often with heterogeneous cores).
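The point about clock speed being an incomplete metric can be made concrete with a rough throughput model: useful work is approximately clock rate times operations completed per cycle, times the number of cores. The figures below are illustrative, not vendor data:

```python
# Illustrative throughput model (figures invented, not vendor data):
# useful work ~ clock rate x operations per cycle x cores.

def throughput_mops(clock_mhz, ops_per_cycle, cores=1):
    """Approximate peak throughput in millions of operations per second."""
    return clock_mhz * ops_per_cycle * cores

single_fast = throughput_mops(1600, 1)          # 1.6 GHz, single-issue
dual_wide = throughput_mops(800, 2, cores=2)    # 800 MHz, 2 ops/cycle, 2 cores
print(single_fast, dual_wide)  # -> 1600 3200
```

By this measure the “slower” 800 MHz dual-core part outperforms the 1.6 GHz single-issue one, which is why per-cycle operations and core count belong in the evaluation alongside clock speed.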
Hardware acceleration
Of course, it’s not just about the processor core(s). For execution of well-specified functionality, a hardware accelerator is almost always the most power-efficient method to perform the function it was designed to accelerate. One area that can make the difference in using the accelerator is how friendly it is to use in a software algorithm. For full-algorithm-type accelerators, such as an H.264 encoder, there usually is not an issue because it’s substantially self-contained. However, for kernel-type accelerators like an FFT, it can be more challenging to use an accelerator within a larger algorithm. Take a look at how the hardware function performs and how it needs to be configured.
Bandwidth requirements
Bandwidth estimation is a process that’s easy to oversimplify, sometimes with unfortunate results. All individual data flows in the system must be summed (with directionality and time window taken into account) to ensure that the core is capable of completing its data processing within the allotted window, and that the various processor buses are not overloaded, leading to data corruption or system failure. For example, for a video decoder, designers need to first account for reading the data that needs to be decoded. Then, it is necessary to incorporate the many data passes required to create the decoded frame sequence. This may involve multiple buffer transfers between internal and external memories. Finally, designers must account for the streaming of the display buffer to the output device.
After all data flows are considered, the overall system budget needs to be constructed. This budget is influenced by several factors, including DRAM access patterns (and resulting performance degradations), internal bus arbitration, memory latencies, and so on.
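The budgeting exercise for the video decoder described above can be sketched as follows. Every figure (bit rate, 4:2:0 pixel format, two reference-frame reads per output frame, the bus numbers) is an illustrative assumption, not data for any real codec or device:

```python
# Back-of-envelope bandwidth budget for a hypothetical 720p30 video decoder.
WIDTH, HEIGHT, FPS = 1280, 720, 30
BYTES_PER_PIXEL = 1.5                         # 4:2:0 YCbCr
frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL

flows_mb_s = {
    "compressed bitstream in": 10e6 / 8 / 1e6,               # 10 Mbit/s in
    "reference frame reads":   2 * frame_bytes * FPS / 1e6,  # motion comp.
    "decoded frame writes":        frame_bytes * FPS / 1e6,
    "display buffer out":          frame_bytes * FPS / 1e6,
}
total = sum(flows_mb_s.values())
for name, mb in flows_mb_s.items():
    print(f"{name:25s} {mb:8.1f} MB/s")
print(f"{'total':25s} {total:8.1f} MB/s")

# Compare against a derated bus: theoretical DRAM bandwidth rarely survives
# arbitration and access-pattern penalties intact (60% is an assumption).
usable = 1600 * 0.6                           # assumed 1600 MB/s bus
print(f"headroom: {usable - total:.1f} MB/s")
```

Summing directional flows this way, then derating the theoretical bus figure, is the step that naive "peak bandwidth vs. bit rate" comparisons skip.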
Power management
The ability to throttle power consumption to a level commensurate with temporal operating requirements is crucial to preserving battery life, as well as overall energy costs in mains-powered systems. Processors can offer a wide range of options for optimizing an application’s power profile. One such feature is dynamic power management – the ability to adjust core frequency and operating voltage to meet a certain performance level. Another is the availability of multiple power modes that turn off various unneeded resources, including memories and peripherals, during certain time intervals. System wakeup (through general-purpose I/O, a real-time clock, or another stimulus) is an integral part of this power mode control. Yet another degree of flexibility in power management is the presence of multiple voltage domains for core, I/O, and memories, allowing different system components to operate at lower voltages when practical.
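The leverage of dynamic voltage and frequency scaling comes from switching power scaling roughly as C·V²·f: lowering voltage together with frequency saves far more than frequency scaling alone. The operating points below are hypothetical:

```python
# Dynamic (switching) power: P = C * V^2 * f.
# With effective capacitance in nF, voltage in V, and frequency in MHz,
# the result conveniently comes out in mW (1e-9 F * 1e6 Hz = 1e-3).
def dynamic_power_mw(c_eff_nf, v_volts, f_mhz):
    return c_eff_nf * v_volts**2 * f_mhz

full   = dynamic_power_mw(1.0, 1.2, 500)   # hypothetical full-speed point
scaled = dynamic_power_mw(1.0, 0.9, 250)   # half frequency, reduced voltage
print(f"full: {full:.0f} mW, scaled: {scaled:.1f} mW ({scaled/full:.0%})")
```

Halving the frequency alone would halve the power, but dropping the voltage along with it cuts consumption to roughly 28% in this sketch, which is why processors that expose voltage scaling (not just clock gating) give a better power profile.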
Security needs
Over the past several years, processor security has become increasingly important. Whether or not such a scheme is a baseline requirement of a system, it is essential to view the security question from multiple vantage points before deciding on the final direction. Security needs usually take the form of platform protection, IP security, or data security – or some combination of all three.
Platform protection is needed to ensure that only authenticated code is run in the application. In other words, must “rogue code” be actively prevented from running? By “rogue code,” we refer to a program that tries to access protected information on the processor, or “hijack” the processor and gain control of the larger system. Platform protection can be implemented with a variety of techniques, and there are always trade-offs to consider in the selection. As with any trade-off, there is a cost implication as the protection levels increase. Another important consideration is the ease-of-use of the overall security scheme, both in development and in production.
The ability to authenticate code is also critical to securing IP and data. IP security requires a way to either encrypt the code image brought into the processor for execution, or to store this IP internal to the processor through embedded flash or an internal ROM inaccessible through external mechanisms. Some form of data security is required to ensure that data enters and exits the system without being compromised. In some cases, especially in lower-end microcontrollers, security may be handled completely with embedded flash, but on higher-end processors, where the application is loaded in through a boot loader, the scheme may be more complex.
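The verify-before-execute idea behind platform protection can be sketched in a few lines. This is a minimal illustration assuming a symmetric device key; real secure-boot schemes typically use asymmetric signatures anchored in a hardware root of trust, and the key name here is hypothetical:

```python
import hashlib
import hmac

DEVICE_KEY = b"example-device-key"   # hypothetical provisioned secret

def sign_image(image: bytes) -> bytes:
    """Authentication tag over a boot image (HMAC-SHA256 sketch)."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def boot_allowed(image: bytes, tag: bytes) -> bool:
    """Only run code whose tag verifies against the device key."""
    return hmac.compare_digest(sign_image(image), tag)

firmware = b"\x7fELF...application code..."
tag = sign_image(firmware)
print(boot_allowed(firmware, tag))            # authentic image
print(boot_allowed(firmware + b"\x90", tag))  # tampered image is rejected
```

Note the constant-time comparison (`compare_digest`): a naive byte-by-byte check can leak timing information, which matters once an attacker can retry boot attempts.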

Safety and fault tolerance
There are many applications where safety is clearly a main concern, for example, an automotive driver assistance system or a closed-loop power control system. However, designers of other, less obvious applications are also beginning to demand higher levels of operational robustness. This is especially true as processors move to smaller silicon geometries, such as 40 nm or 28 nm, where soft errors in memory can disrupt operation due to naturally occurring events, including alpha and gamma particles. During the processor selection process, it’s important to examine how a processor handles these types of errors, as well as how it responds to unexpected events in general. What steps can it take when an error occurs? How does it signal to other system components that something has gone wrong?
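The detection principle behind memory protection can be shown with a toy parity scheme. Real processors use ECC (e.g. SECDED Hamming codes) that can also correct single-bit upsets; even parity, sketched here, only detects them:

```python
# Toy soft-error detection: even parity over a memory word.
# Real ECC memories use SECDED codes; this only shows the principle.
def parity_bit(word):
    """Even parity: 0 if the word has an even number of set bits."""
    return bin(word).count("1") % 2

def store(word):
    """Store a word together with its parity bit."""
    return word, parity_bit(word)

def check(word, p):
    """False means an odd number of bits flipped while stored."""
    return parity_bit(word) == p

word, p = store(0b1011_0010)
print(check(word, p))           # True: no upset
print(check(word ^ 0b100, p))   # False: single-bit flip detected
```

Parity illustrates the selection question raised above: a processor that merely detects the error must still have a documented way to signal the fault and recover, which is exactly what to look for in the datasheet.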
Debugging capabilities
As applications become more complex, so does the development process. Shortcuts that worked in the past might not work when the number of processor and application subcomponents has grown exponentially. Consider the system-level debug of a large software-based system that uses an operating system or real-time kernel. Do the processor and its tool chain have a way to examine the processor state without impacting the application? Is it possible to profile and trace where the processor has been, or to trap on all events of interest? All these questions, and many more, should be answered before becoming comfortable with the level of debugging available.
System cost
At times, system designers focus on the processor price tag instead of the overall system design cost. It is imperative to take into account not only the cost of the device itself, but also the cost of the supporting circuitry required – level translators, interface chips, glue logic, and so on. Package options also play a vital role: one processor’s package might allow a four-layer board design, while another’s may necessitate an expensive six- or eight-layer board because of routing challenges. Finally, don’t overlook the value of extra processing headroom that can allow for future expandability without forcing an expensive processor change or board spin.
Signal chain
One final note: Processor selection should occur in tandem with a study of a system’s signal chain requirements. Does the processor vendor also sell peripherals that connect to the processor? It is often advantageous to buy multiple system components from the same vendor – for interoperability, customer support, and overall pricing benefits.
Ready to choose a processor?
As mentioned, there are many other facets to consider during the processor selection phase, but the considerations described here should provide a good basis for embarking on this crucial process. Vendors such as Analog Devices offer a wide range of processors and other components that meet the described selection criteria.
Reference: http://embedded-computing.com/articles/choosing-processor-a-multifaceted-process/#at_pco=cfd-1.0