vorbisx · 10 months
Replacing physical buttons and controls with touchscreens also means removing accessibility features. Physical buttons can be textured or carry Braille, can be located by touch, and don't need to be pressed with a bare finger. Touchscreens usually require precise taps and hand-eye coordination for the same tasks.
Many point-of-sale machines are now essentially a smartphone with a card reader attached. The on-screen control layout can change at a moment's notice, and there are no physical boundaries between buttons. With a keypad-style machine, the buttons are always in the same place and can be located by touch, especially since the middle button has a raised ridge on it.
Buttons can also be located by touch without activating them, which enables a "locate then press" style of interaction which is not possible on touchscreens, where even light touches will register as presses and the buttons must be located visually rather than by touch.
When elevator or door controls are replaced by touch screens, will existing accessibility features be preserved, or will some people no longer be able to use those controls?
Who is allowed to control the physical world, and who is making that decision?
albertacash · 2 years
How Does a Retail Cash Register Help Your Restaurant?
A retail cash register keeps cash in a drawer at its base. After you enter the amounts of the items purchased, the device automatically prints a receipt and records each cash transaction made at the point of sale.
It is a powerful tool that supports the sales process of any organization. The register also gives the customer a receipt with details of the purchase, and its drawer keeps cash secure. Cash registers come in many varieties, but they all serve essentially the same function.
What is a POS system?
POS cash register software can perform all the functions of a restaurant cash register and much more. Its upgraded features include installation on a range of devices and form factors, advanced data management, and integrations. The technical requirements of the modern restaurant revolve around a POS system; a plain restaurant register is no longer adequate.
What chiefly distinguishes a POS system from a restaurant cash register is its capacity to gather sizable data sets and integrate the technologies used across restaurant management operations. Restaurant POS systems fall into one of two hardware categories: stationary or mobile. Most restaurants use stationary POS systems, which typically combine the following hardware:
A touchscreen order-entry terminal
A device that runs the POS software
A physical server that houses all the data
Payment equipment such as credit and debit card readers
Benefits of POS Systems
Creating detailed consumer profiles
You should be able to gather, manage, and track client information with the aid of a POS register system. With access to this information, store employees can better understand the clients they serve, which can encourage repeat business and improve your loyalty and retention marketing campaigns.
Your POS system figures out the whole cost
Your POS system computes the final price once each item is added to the customer's cart, including any applicable sales tax, and then updates your inventory count to reflect the products sold. Store employees can also apply discounts or promotional codes.
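The arithmetic behind that step is simple enough to sketch. Here is a minimal, illustrative Python example of a checkout total with a discount, sales tax, and an inventory decrement; the tax rate, discount, and item structure are assumptions for illustration, not any particular vendor's implementation.

```python
def checkout(cart, inventory, tax_rate=0.08, discount=0.0):
    """Total a cart, apply a percentage discount, add sales tax,
    and decrement inventory counts for the items sold."""
    subtotal = sum(item["price"] * item["qty"] for item in cart)
    subtotal *= (1 - discount)                    # promotional discount
    total = round(subtotal * (1 + tax_rate), 2)   # add sales tax
    for item in cart:
        inventory[item["sku"]] -= item["qty"]     # reflect items sold
    return total

inventory = {"ESP-01": 24, "MUG-02": 60}
cart = [{"sku": "ESP-01", "price": 3.50, "qty": 2},
        {"sku": "MUG-02", "price": 12.00, "qty": 1}]
print(checkout(cart, inventory, discount=0.10))   # 18.47
```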
Gathering and real-time visualization of sales data
Your reporting and analytics tools should receive data from each transaction that passes through your POS system. Your POS system should make it simple to access metrics. You should be able to receive a complete perspective of your brand's sales and be able to filter by sales channel rather than viewing ecommerce and store sales data in separate platforms.
Conclusion
Today's customers demand a variety of services from restaurants. They expect to order from your restaurant online, reserve a table online, receive customized offers and invitations, and get faster service than ever. A POS system lets you keep up with rising customer demands in ways that a retail cash register never can, especially a contemporary subscription-based POS system whose software receives constant back-end updates.
notquiteapex · 2 years
3DS capture cards are a thing but they're still very underground. Why?
I love my Nintendo 3DS. I am not a corporate shill. I promise.
Seriously though, my 3DS has some really cool games and it's a shame that streaming/recording footage is such a hassle and limited by so many things. I'd prefer to stream off real hardware rather than an emulator for a variety of reasons, including but not limited to the (current) number of games that Citra can't run without major issues. So with that off the table, what are my options?
Well, hardware capture is expensive right now. Not accounting for *gestures vaguely at everything involving the supply chain from 2020 to 2022*, most places local to the US and UK make you pay out of pocket for a new system with a capture device installed, or have you send yours in to have it built in. This is great for those who want to build their own for budgetary reasons, or for the sake of starting their own business in this landscape. It makes pretty good money! I'm currently eyeing Delfino Customs for some systems, and they make bank!
Turns out, the reason they do this is because someone in Japan provides the boards and ships them overseas for up to a whopping $130! The seller goes by Optimize, their site (in English) can be found here. I'm going to be looking specifically at the New 3DS XL Capture kit, called New-SPA3.
[Image: the New-SPA3 capture board]
For the sake of documentation, the big chip on the bottom is a Xilinx Spartan-3A FPGA (XC3S50A); the package type is VQG100 (very thin quad flat pack, 100 pins), speed grade 4, temperature range 0-85 degrees C. On top is a Cypress EZ-USB FX2LP (CY7C68013A) microcontroller; the package suffix appears to be -56LTXC.
The rest of the stuff on the green board is just a standard Micro-USB-B connector and passives like diodes, capacitors, resistors, and maybe a power regulator or two. The orange paper-like thing is also a circuit board: the intent is that you cut it in a specific way to connect the board to specific electrical pads on the 3DS's circuit board, and the orange material is flexible, allowing unique positioning and routing.
[Image: the flexible ribbon circuit board]
It's definitely a complex piece of tech! The FPGA takes in the video signals from the 3DS, processes them a bit, spits them out to the USB chip which sends them to a PC to be displayed. That's oversimplifying it just a teensy bit.
Unfortunately, because an FPGA is used instead of some kind of dedicated circuitry, we can't actually see how the data is taken in and processed. FPGAs are field-programmable gate arrays: chips that are given a bit of code that reshapes how their logic works at a physical level, rather than just electronically processing signals. This all happens inside the chip, instead of in a dedicated chip with a specific function or a portion of a circuit board with known parts and gates. Unless we have a preprogrammed FPGA chip on hand and can test every single input against its output (or, even better, get the source code for the FPGA from Optimize), we cannot possibly know how the FPGA turns input into output.
Luckily, we can make some educated guesses without ever actually owning a board, but rather looking at the software needed to use it!
[Image: the n3DSview software]
Enter non-standard, a Japanese maker of stuff. They created the program that interfaces with the 3DS capture cards that are installed by almost all services. The program in question is called `nonstd 3line Differential Signal viewer`, or just n3DSview, referencing the creator's name, the n3DS, and the method of transporting color data from a circuit board to a screen. Basically, differential signaling uses two wires for one bit of data, flipping one on and the other off to represent a one, then the reverse for a zero. This is done for signal integrity, and the 3 pairs mentioned represent red, green, and blue. (Note: this may not be how the video transmit system works for 3DS, this is just what the name is implying)
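To make that naming concrete, here is a toy Python model of differential signaling. It's a sketch of the general technique, not the 3DS's actual video interface (which, per the note above, may differ):

```python
def encode(bits):
    # one -> (high, low); zero -> (low, high): two complementary wires per bit
    return [(1, 0) if b else (0, 1) for b in bits]

def decode(pairs):
    # the receiver reads only the *difference* between the two wires
    return [1 if p - n > 0 else 0 for (p, n) in pairs]

bits = [1, 0, 1, 1, 0]
# noise that hits both wires equally (common-mode noise) cancels out
noisy = [(p + 0.3, n + 0.3) for (p, n) in encode(bits)]
assert decode(noisy) == bits
```

That cancellation of common-mode noise is the signal-integrity benefit the scheme exists for.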
Now, USB is a complicated thing. When something is to be transmitted over it in a unique way that isn't defined by some standard (so it's non-standard, hah!) (some standard interfaces include keyboards, mice, MIDI devices, and storage devices), one must create and define a device driver. Luckily the software comes with the necessary driver, so let's take a look.
[Image: the driver files]
Yep, just as expected, this software relies on the onboard USB chip from Cypress to handle communications. The driver simply provides Windows a name to a face-- er, USB connection. The other files with the driver are the actual communication binaries that Windows uses to understand the incoming and outgoing signals. Unfortunately, since those files are bytes and bytes of unintelligible compiled code, we're a bit stuck. We don't actually know how the device is communicating. Lucky for us, Raspberry Pi might have the answer.
[Image: the Raspberry Pi driver's shared object file]
Linux, like Windows, may need drivers. A Raspberry Pi runs Linux (usually) and needs to be able to communicate with devices just like Windows does. Thus, this file called a shared object file (which is like a Windows DLL) is provided along with the Raspberry Pi version of n3DSview. It contains code that can be used by any program compiled against it to interface with the USB device. It gives us great insight into how the USB chip communicates: through its dedicated FIFO processing!
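As a rough picture of what such a driver layer does, here is a hedged pyusb sketch of pulling data from the FX2LP's bulk FIFO endpoint. The vendor ID 0x04B4 is Cypress's, but the product ID and endpoint address are assumptions for illustration; the actual board and Optimize's driver may well differ.

```python
import usb.core

# Find the capture board; 0x8613 is the FX2LP's default development
# product ID, an assumption here rather than the board's real ID.
dev = usb.core.find(idVendor=0x04B4, idProduct=0x8613)
if dev is None:
    raise RuntimeError("capture board not found")
dev.set_configuration()

# Read a chunk of whatever the FPGA pushed into the FIFO. Endpoint
# 0x86 (EP6 IN) is a common FX2LP bulk-in choice, again an assumption.
chunk = dev.read(0x86, 512, timeout=1000)
print(len(chunk), "bytes received")
```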
[Image: a page from the EZ-USB FX2LP reference manual]
The image above is from the reference manual for the USB chip (this document also describes the FIFO specs), which describes all of the chip's functions and features. FIFO is short for "first in, first out": the data put into the FIFO first is the first to come out. Think of it like a line at an amusement park. First arrivals get on the ride first, and a line builds up behind them. The data works just like that, ensuring it gets sent and processed in the correct order.
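The queue behaviour is easy to show in a few lines of Python:

```python
from collections import deque

fifo = deque()
for byte in (0x3D, 0x5C, 0xA1):    # producer side: write in order
    fifo.append(byte)

assert fifo.popleft() == 0x3D      # consumer side: first in, first out
assert fifo.popleft() == 0x5C
```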
So we know that the chip is sending processed data in a FIFO manner, but we don't actually know how the signals from the 3DS are processed by the FPGA before they get sent to the USB chip. So we're a bit stuck for now, but we have a lot of great info from just looking at pictures and text. The next step would be to potentially decompile the RPi binary and see how it processes the input from USB, but that'll be for another time.
greatreviewreview · 3 years
What is a combiner box?
The role of the combiner box is to bring the output of several solar strings together. Daniel Sherwood, director of product management at SolarBOS, explained that each string conductor lands on a fuse terminal and the output of the fused inputs are combined onto a single conductor that connects the box to the inverter. “This is a combiner box at its most basic, but once you have one in your solar project, there are additional features typically integrated into the box,” he said. Disconnect switches, monitoring equipment and remote rapid shutdown devices are examples of additional equipment.
Solar combiner boxes also consolidate incoming power into one main feed that distributes to a solar inverter, added Patrick Kane, product manager at Eaton. This saves labor and material costs through wire reductions. “Solar combiner boxes are engineered to provide overcurrent and overvoltage protection to enhance inverter protection and reliability,” he said.
“If a project only has two or three strings, like a typical home, a solar combiner box isn’t required. Rather, you’ll attach the string directly to an inverter,” Sherwood said. “It is only for larger projects, anywhere from four to 4,000 strings that combiner boxes become necessary.” However, combiner boxes can have advantages in projects of all sizes. In residential applications, combiner boxes can bring a small number of strings to a central location for easy installation, disconnect and maintenance. In commercial applications, differently sized combiner boxes are often used to capture power from unorthodox layouts of varying building types. For utility-scale projects, combiner boxes allow site designers to maximize power and reduce material and labor costs by distributing the combined connections.
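As a rough illustration of the combining arithmetic the box performs, here is a minimal Python sketch. The 1.25 factors reflect common NEC-style sizing practice, used here as an assumption rather than design guidance; a real design must follow the applicable code and the module datasheet.

```python
def combiner_output(n_strings, isc_per_string):
    combined = n_strings * isc_per_string        # paralleled strings add
    design_current = combined * 1.25             # continuous-duty factor
    string_fuse = isc_per_string * 1.25 * 1.25   # per-string fuse rating
    return combined, design_current, string_fuse

# 12 strings at 9.6 A short-circuit current each:
print(combiner_output(12, 9.6))  # (115.2, 144.0, 15.0)
```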
The combiner box should reside between the solar modules and inverter. When optimally positioned in the array, it can limit power loss. Position can also be important to price. “Location is highly important because a combiner in a non-optimal location may potentially increase DC BOS costs from losses in voltage and power,” Kane explained. “It only constitutes a few cents per watt, but it’s important to get right,” Sherwood agreed.
Little maintenance is required for combiner boxes. “The environment and frequency of use should determine the levels of maintenance,” Kane explained. “It is a good idea to inspect them periodically for leaks or loose connections, but if a combiner box is installed properly it should continue to function for the lifetime of the solar project,” Sherwood added.
The quality of the combiner box is the most important consideration when selecting one, especially since it’s the first piece of equipment connected to the output of the solar modules. “Combiner boxes are not expensive compared to other equipment in a solar project, but a faulty AC combiner box can fail in a dramatic way, involving shooting flames and smoke,” Sherwood warned. “All should be third-party certified to conform to UL1741, the relevant standard for this type of equipment,” Sherwood said. Also be sure to pick a combiner box that meets the technical requirements for your project.
A new trend is the incorporation of a whip: a length of wire with a solar connector on the end. “Rather than a contractor drilling holes in the combiner box and installing fittings in the field, we install whips at the factory that allow the installer to simply connect the output conductors to the box using a mating solar connector,” Sherwood explained. “It’s as easy as plugging in a toaster.”
This year arc-fault protection and remote rapid shutdown devices are more popular than ever, due to recent changes in the National Electrical Code that require them in many solar applications. “New technologies and components are driven by the NEC changes, as well as the desire for enhanced energy efficiency and reduction of labor costs,” Kane said. Some of these new components include: higher voltage components, integral mounting hardware and custom grounding options.
A photovoltaic power plant is a generation system that uses sunlight and electronic components such as crystalline silicon panels and inverters to produce electricity, which is then fed into the power grid.
The PV power generation system consists of solar arrays, battery packs, charge/discharge controllers, inverters, AC power distribution cabinets, solar tracking control systems, and other equipment. A typical system comprises PV modules, inverters, PV distribution boxes, meters, and the grid connection. Although the distribution box and its surge protective device account for only a small share of total system cost, they play an important role in the PV power generation system.
Photovoltaic Applications
Solar energy is an inexhaustible, clean, safe, and widely available source of renewable energy. It is long-lived, largely maintenance-free, and increasingly economical, and it plays an important role in long-term energy strategy. Photovoltaic power plants are currently among the most strongly encouraged green energy projects, using limited resources to deliver the maximum amount of usable energy.
Power Distribution Boxes in Photovoltaic Power Plants
Photovoltaic power generation is developing rapidly in China, and photovoltaics now appear in every aspect of life: household PV stations, large ground-mounted plants, PV buildings, street lights, traffic lights, caravans, electric vehicles, and carports. Within all of these systems, the PV distribution box plays a vital role. Its main functions are as follows:
Combiner box
1. Power isolation function
Switchboards require a physical isolation device that gives the circuit a clear break point, for the safety of personnel during service and maintenance. This device is called an isolation switch; it ensures the circuit can be safely and visibly de-energized before anyone works on it.
2. Short-circuit protection function
Once a short circuit occurs, current rises suddenly, burning out appliances and generating enough heat to start fires, with serious consequences. A device is therefore needed to cut off power when a short circuit occurs. That device is the air switch (miniature circuit breaker), which plays an important role in the PV box. When choosing an air switch, use reliable products from reputable manufacturers to ensure safety.
3. Energy measurement
Generally, PV energy meters are installed together with the DC surge protective device, though in some places the meter is separate from the distribution box. Follow the requirements of your local power supply department.
4. Over-voltage and under-voltage protection
Most distributed PV generation is connected to the rural power grid. Rural grids tend to be unstable: outages are common and voltage fluctuates widely. The PV distribution box therefore needs an indispensable device, the over/under-voltage protector. When voltage strays too high or too low, it acts as a circuit breaker to protect other components. Of everything in the PV box, the under-voltage protector is the accessory most prone to failure, so expect occasional maintenance and replacement.
5. Lightning protection
Lightning is a common natural phenomenon, especially during summer thunderstorms, and it causes many accidents. The distribution box therefore includes an important lightning protection device: the surge protector. A surge protector, also called a lightning arrester, is an electronic device that protects electronic equipment, instruments, and communication lines. When external interference produces a sudden spike in current or voltage on a circuit, the surge protector conducts within a very short time and shunts the surge, limiting the damage to other equipment. It is an indispensable component.
6. About the cabinet
Generally speaking, photovoltaic plants are designed for a 25-year service life, so the distribution box must also last 25 years and be waterproof and dustproof. The cheap tin boxes originally used no longer meet requirements; power supply bureaus now generally require enclosures made of powder-coated galvanized steel, powder-coated iron plate, stainless steel, plastic-coated steel, or similar materials. Otherwise you may not be permitted to connect to the grid. For your own safety, choosing an appropriate enclosure is very important.
mikegranich87 · 3 years
Loupedeck Live is a compelling alternative to Elgato's Stream Deck
Life’s too short to drag a mouse more than three inches or remember elaborate keyboard combinations to get things done. This is 2021 and you can have a pretty, dedicated button for almost any task if you want. And if you partake in anything creative, or like to stream, there’s a very good chance that you do. Loupedeck makes control surfaces with many such buttons with a particular focus on creatives. Its latest model is the “Live” ($245) and it’s pitched almost squarely against Elgato’s popular Stream Deck ($150). Both have their own strengths, and I’ve been using them side by side for some time now. But which one have I been reaching for the most? And does the Loupedeck Live do enough to command almost a hundred more dollars?
First, we should go into what the Loupedeck Live actually is and why it might be useful. In short, it’s a PC or Mac control surface covered in configurable buttons and dials. The buttons have mini LCD displays on them so you can easily see what each does with either text, an icon or even a photo. Behind the scenes is a companion app, which is where you’ll customize what each button or dial does. Many popular applications are natively supported (Windows, MacOS, Photoshop, OBS and many more). But if the software you use supports keyboard shortcuts, you can control it with the Live.
So far, so Stream Deck? Well, kinda. The two are undeniably very similar, but there are some important differences. For one, the Stream Deck’s only input type is a button; Live has rotary dials too. This makes Loupedeck’s offering much more appealing for tasks like controlling volume, scrolling through a list or scrubbing a video and so on. But there are also some UI differences that give them both a very different workflow, too.
Hardware
James Trew / Engadget
Like Elgato, Loupedeck currently offers three different models. With the Stream Deck, the difference between versions is all about how many buttons there are (6, 15 and 32). The different Loupedecks are physically distinct and lend themselves to certain tasks. The Loupedeck CT, for example, has a girthy dial in the middle for those that work with video. The Loupedeck+ offers faders and transport controls and the Live is the smallest of the family with a focus on streaming and general creativity.
At a more superficial level, both the Stream Deck and the Live look pretty cool on your desk, which clearly is vitally important. Elgato decided to make its hardware with a fixed cable, whereas Loupedecks have a removable USB-C connection. I wouldn’t normally bother to mention this, but it’s worth noting as that means you can use your own (longer/shorter) lead to avoid cable spaghetti. You can also unplug it and use it to charge something else if needed. Minor, but helpful functionality if your workspace is littered with things that need topping off on the reg like mine is.
Clearly, one of the main advantages with the Live will be those rotary dials. If you work with audio or image editing at all, they are going to be much more useful than a plain ol’ button for many tasks. For example, I wanted to set up some controls for stereo panning in Ableton Live. On the Stream Deck I need to employ two buttons to get the setup I wanted: pan left one step / pan right one step and it takes a lot of presses to move from one extreme to the other. With the Live, I can simply assign it to one of the rotaries (clicking it will reset to center). From there, I can dial in the exact amount of panning I want in one deft movement.
That’s a very simple example, but if you imagine using the Live with something like Photoshop for adjusting Levels, you can see how having several rotaries might suddenly become incredibly useful.
Another practical difference between these two devices is the action on the buttons. On the Stream Deck, each one is like a clear Jolly Rancher with a bright display behind it. The buttons have a satisfying “click” to them and are easy to find without really looking. The Live, on the other hand, feels more like someone placed a divider over a touchscreen. That’s to say, the buttons don’t have any action/movement at all, instead delivering somewhat less satisfying vibrations to let you know you’ve pressed them.
Software
James Trew / Engadget
The real difference between these two, though, is the workflow. I had been using the Stream Deck for a couple of months before the Loupedeck Live. The Stream Deck is, at its core, a “launcher.” Assign a button to a task and it’ll do that task on demand. You can nest multiple tasks under folders to expand your options nearly endlessly, but the general interface remains fixed. So, if you wanted to control Ableton and Photoshop, for example, you might have a top-level button for each. That button would then link through to a subfolder of actions and/or more subfolders (one for editing, one for exporting actions and so on). These buttons remain fixed no matter what application you are using at a given moment.
With Loupedeck, it’s all about dynamic profiles. That’s to say, if I am working in Ableton, the Loupedeck will automatically switch to that profile and all the buttons and rotaries will change to whatever I have assigned them to for Ableton. If I then jump into Photoshop, all the controls will change to match that software, too. Or put another way, the Stream Deck is very “trigger” based (launch this, do this key command). The Loupedeck is more task-related, with pages, profiles and workspaces for whatever app is active. The net result is, once you have things customized to just how you want them, the Loupedeck Live is much more adaptive to your workflow as it “follows” you around and has more breadth of actions available at any one time. But at first, I was trying to make it simply launch things and found that harder than it was on a Stream Deck until I figured out how to work with it.
This “dynamic” mode can also be turned off if you prefer to keep the same controls available to you at any one time, but for that you can also assign set custom “workspaces” to any of the seven circular buttons along the bottom — so if you want your Photoshop profile to open with the app, but also have some basic system/trigger controls available, they can just be one button push away.
This approach definitely makes the Loupedeck feel more tightly integrated to whatever you’re doing “right now” rather than a nifty launcher, but it also takes a bit to get your head around how it wants to do things. At least in my experience. With the Stream Deck I was able to get under its skin in a day; after some weeks I am still reading up on what the Live can do, and need to keep reminding myself how to make certain changes. As a reverse example, launching an app is something Stream Deck was born to do. With a Loupedeck, you have to create a custom action and then assign that to a profile you can access at any time (i.e. a custom workspace) or add that action to various different profiles where you want it to be available.
Both do offer the option for macros/multi-actions and work in very similar ways in that regard. If, say, you want to create a shortcut to resize and then save an image, you can do so with either by creating a list of actions to be carried out in order. You can add a delay between each step and include text entry, keyboard shortcuts and running apps — all of which allows you to cook up some pretty clever “recipes.” Sometimes it takes a bit of trial and error to get things right, but once you do it can simplify otherwise fairly lengthy/mundane tasks.
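Conceptually, a multi-action is just an ordered list of steps with optional delays between them. Here is a minimal Python sketch of the idea; the actions are made-up placeholders, not either vendor's API:

```python
import time

def run_macro(steps):
    for action, delay in steps:
        action()              # fire this step
        time.sleep(delay)     # settle time before the next one

resize_and_save = [
    (lambda: print("resize image to 1920x1080"), 0.5),
    (lambda: print("save as PNG"), 0.0),
]
run_macro(resize_and_save)
```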
James Trew / Engadget
Where the Stream Deck takes things a little further is with third-party plug-ins. These are usually more complex than tasks you create yourself (and require some programming to create). But thanks to Elgato’s active community, there are already quite a few on offer and the number is growing every day. Some of them are simple: I can have a dynamic weather widget displayed on one of the keys, others are more practical — I use one that switches my audio output between my headphones and my PC’s built-in speakers. Some of my colleagues speak highly of a Spotify controller and the Hue lights integration — both of which came from the Stream Deck community.
Loupedeck offers a way to export (and thus share) profiles, but as far as I can tell right now, there’s no way to do anything more complex than what you can do with custom controls — if that were to change in the future that could really enhance the functionality considerably.
Beyond the hardware controls and the user interface, it’s worth mentioning that both the Live and the Stream Deck have native support for specific apps. “Native” means that the companion software already has a list of drag-and-drop controls for select apps. Elgato’s controller, unsurprisingly, has a strong focus on things like OBS/Streamlabs, Twitch and, of course, the company’s own game capture software and lights along with some social tools and audio/soundboard features (for intro music or effects).
The Loupedeck Live also offers native controls for OBS/Streamlabs (but not Twitch) but tends to skew toward things like After Effects, Audition, Premiere Pro and so on. The list of native apps supported is actually quite extensive and many more (like DaVinci Resolve or iZotope RX) are available to download. If streaming is your main thing, Elgato’s solution is affordable and definitely more streamlined for that. The Loupedeck, however, is going to be more useful for a lot of other things — it’ll help with streaming, but also help you design the logo for your channel.
So which?
At this point, you can probably guess what the wrap-up is. Elgato’s Stream Deck offers less functionality overall but that can be greatly expanded as the number of plugins continues to grow. But likewise, it’ll always be somewhat limited by its singular input method (buttons). The Loupedeck Live is much more ambitious, but with that, trades off some of the simplicity. If you were looking for something that can take care of simple tasks and skews toward gaming or podcasting, save yourself the $100 and go with a Stream Deck, but if you want something that can pick up the slack for multiple desktop apps and tools, you probably want to pat your pockets a little more for the Loupedeck Live.
evbexconsulting · 3 years
Facility Management Technology a key Enabler for the Fourth Industrial Revolution
We are in the middle of the fourth industrial revolution, which is changing the way we work and live. This productivity revolution rests on the convergence of tools and technologies that enable transformation, bringing with it automation, cyber-physical systems and big data.
Recent technological leaps and the spread of smart technologies have come of age alongside a growing generation of millennials. With an evolving workforce thriving in a ‘real-time, anytime, anywhere’ economy, business culture is transforming under their influence.
Digital maturity is honing existing business processes, technology is penetrating traditional businesses, and millennials make up a growing share of the workforce. Together these forces create a competitive environment in which businesses must incorporate technology into their processes to enhance productivity and improve user experience.
Keeping up with the demands of advancements across multiple sites in a portfolio requires additional tools, control, intelligence and the strategic deployment of the resources at hand.
Thus, to thrive in a fiercely competitive global market, organisations will continue to outsource facility management services on a large scale, transforming the way they use their resources and professionals and creating a win for employees as well as businesses in the space. Both in-house and outsourced facility management are growing rapidly, with the total facilities management market predicted to grow from 1.24 trillion USD in 2019 to 1.62 trillion USD by 2027.
The facilities management industry has seen itself rising from an under-optimised resource to a fundamental support that enriches the interaction with the facility, delights the stakeholders and intelligently empowers the businesses.
What is digital transformation, and why is it necessary for facility management?
Digital Transformation is the application of digital capabilities to processes, products, and assets to improve efficiency, enhance customer value, manage risk, and uncover new monetization opportunities. — Bill Schmarzo, CTO of Dell EMC Services
In the business context, data-driven digital transformation refers to the exploitation of real-time data collection, analysis and prediction of issues to constantly add value to the companies’ offering and to stay competitive. This will ultimately optimise performance within the organisation, maximise resources, flexibility and help companies make timely and informed decisions.
With technological disruption in almost every business field, digital transformation in the FM industry has become equally essential: first, because of a conscious change in perspective that involves every stakeholder in adding value and innovation to the ecosystem; and second, to handle the broader needs of the business.
The current legacy models of FM operations have considerable budget constraints, which limits innovation and efficiency. However, by utilising business automation techniques we can explore untapped data to obtain more predictable and contextual insights, making more of the assets already available. Smarter technology like IoT, ML and AI makes existing systems smarter, rather than replacing them with a new generation of equipment, ultimately contributing to effective cost-cutting.
“People actually remember your service a lot longer than the price.” Technology-driven efficiency results in holistic and strategic FM operation along with elevating asset performance, commercial gains, improving customer service and satisfaction.
Against this background of how the FM industry is witnessing paradigm shifts in its operation and transformational value, consider the positives that line up to upgrade FM from a passive cost centre to a value-driven, essential investment.
Procure operational leads
Traditional reliance on hardware and manual work is proving inflexible and rigid because of decreased optimisation and difficult management. The idea is to set up smart workflows that track asset performance, predict and prevent peculiarities and proactively manage the facilities.
With Big Data Analytics, Artificial Intelligence and Machine Learning, a new layer of empowered functionality will be added enabling predictive intervention and intelligent insights in the business operations. These will help to make informed purchase decisions along with continuous speculation on investments.
Continuous optimisation
FM operations can achieve solid productivity gains and acceleration of ongoing tasks by eliminating inefficient paperwork and manual work. Machine-learning-driven performance analytics and custom KPIs enable data-led decision making, drive predictive models of operations and pinpoint asset anomalies. This facilitates a quicker, more responsive approach to problem-solving.
Data-driven decision making and predictive analysis
Being confronted with data and operating through technology is pushing facility management into new waters. Actionable insights derived from IoT devices and other software, when combined and processed, can aid in optimising assets, workforce and sustainability. Time-series ML models can derive new streams of information about potential cost optimisation, inventory management and consumption patterns. Outcomes include reduced overall operational cost and minimised asset downtime.
Stay on top of your portfolio
IoT-enabled smart buildings and offices, optimised workplaces and computer vision technologies allow operations to stay coherent across distributed sites and ease the tracking of assets, equipment, workflow systems and buildings, which was conventionally complicated and fault-prone. A fully centralised system that can respond to energy, security and operational data on a single platform requires a less dispersed workforce. Such a unified central source provides a bird's-eye view of the entire organisation and makes monitoring easy.
With proper exploratory analysis of asset and budget data, repair or upgrade decisions for equipment can be taken quickly, in accordance with budget and maintenance data. This helps monitor machinery uptime and downtime and extends the life cycle of the equipment.
Blockchain Improvements
“The economy will undergo a radical shift as new, blockchain-based, sources of influence and control emerge. Blockchain has the potential to change the way facilities are managed, ranging from work order tracking to preventive maintenance to life cycle assessments”. ~ IFMA Facility Management Journal
Blockchain offers a streamlined way to store and access secure data. It functions as a cloud-based, permanent, digitally secure ledger among the parties, and can reduce the complexities of contract management, work order processing and payment processing. All contractual processes are recorded in real time, and it enables automation through self-executing workflows.
Co-exist with your problems and ensure predictive maintenance
When everything is dynamic and real-time operation is a necessity, how can the solutions employed remain static?
Early fault detection is critical for smooth customer experience and business functioning. Previously, a fault was reported only after the harm was done, and the engineer had to react immediately, leading to haphazard and sometimes inconvenient situations. Now buildings come pre-installed with sensors, and IoT-based services come to the rescue: the digital information collected can determine whether a device is working as expected, and in case of an anomaly, early detection of a fault or a replacement reminder eases the whole process.
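As a minimal sketch of that early-detection idea, here is an illustrative Python check that flags a sensor reading drifting well outside its recent history; the window size and threshold are assumptions, not a production rule.

```python
from statistics import mean, stdev

def is_anomalous(history, reading, z_threshold=3.0):
    """Flag a reading more than z_threshold standard deviations
    away from the mean of the recent history."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(reading - mu) > z_threshold * sigma

vibration = [0.41, 0.39, 0.42, 0.40, 0.38, 0.41, 0.40, 0.39]
print(is_anomalous(vibration, 0.97))  # True -> raise a work order early
```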
Owing to the COVID-19 pandemic, ensuring safety standards and employing mitigation techniques in the workplace has become quite necessary. Using IoT and AI for social distancing among employees, regulating planned access to different assets within a building, organising meetings with health standards in mind, attendance tracking using computer vision, automatically generated digital passes and so on are some of the ways organisations have adopted measures to minimise human contact and intervention.
Also, thermal cameras for temperature screening, built-in room sensors to track human activity, face-mask detection and optimised parking spaces are widely used to enhance employee safety and experience in the office.
The fourth industrial revolution is advancing faster and is more revolutionary than any to date. Access to previously unimaginable amounts of data, utilisation of technology in every chore and value-driven approaches adopted by the FM industry is increasing the control over any other operational task, providing sustainable solutions and saving money.
The technologically disruptive paradigm of facility management has transformed the conventional way of the utilisation of professionals, assets and money. It has now taken a centre stage and produced unprecedented outcomes. This has always been the industry that identified bottlenecks in the business processes and created solutions in response.
The emerging consensus on technology predicts bigger shifts, including increased support for research and development, regulation of emerging technologies and devices, value-driven approaches to FM, and accessibility for businesses of all sizes.
The IoT-AI driven FM industry can’t blaze forward without the incorporation of a skilled workforce that has a strong background in STEM areas like programming, data analysis, cybersecurity, AI and ML. Big changes are required to keep the innovations alive and like many other fields, the FM industry is also calling for the adoption of new technologies and an IT skilled workforce.
Integrated Facility Management is expected to increase its share of the total addressable FM market from 10.3% in 2018 to 13.9% by 2025, while the total market is forecast to grow from $819.53 billion to $945.11 billion during the same period at a compound annual growth rate (CAGR) of 2.1%. — Frost and Sullivan
FM is the main driver of the commercial value of properties by providing a faster response, smarter solutions and better management strategies. Now, every stakeholder in the associated industry stands to benefit, customers will get amazing services and experiences, owners will achieve greater profits and above all, Facility Managers are the key enablers.
For more information, check out www.evbex.com
workrockin · 5 years
Understanding MIMO
I don't know about you but sometimes technology makes me feel stupid. Usually you'll find me in a cheerful mood, happily trotting along somewhere in a grass field, but every once in a while a heavy feeling takes hold and makes me sit and reflect upon the choices that I have made, leading me up to the present point in my life. Anyone who knows me can testify that I'm not the one to waste a bright sunny day in useless contemplation. You can imagine, then, the force of this indescribable feeling that took a free-spirited creature like myself, who would rather spend its time chasing the wind, and chained it down to think. Imagine that. Sit and think on a bright sunny day! But one must make the best of one's circumstances.
The other day I was reading about MIMO. Just scrolling through a Wikipedia article. I can't say how I got on to it. I don't remember what prompted me to open the page. All I remember is that it was there. It was an interesting read. Multi this. Many that. Etc, etc.
I'm not all that much into reading, as you probably would have guessed. It does not fit my character, you see. I'm someone who prefers activity, due to my naturally outgoing, adventurous nature. I've learnt through my own experience and through the experience of my ancestors that it's of no use fighting your own nature. It is a battle that you can't win.
As you can understand, therefore, I rarely read, unless it is to pass time at work. Even then I prefer to study the habits and inclinations of cats on various online encyclopedias, especially preferring vivid portraits on Imgur and short documentaries on YouTube, which aid understanding better than big walls of text.
But something drew me to that particular subject that day. I don't know what it was. I'm not a painter who can describe his feelings through his art. Nor am I a poet who can write a song about it. I'm unable to find the reason behind my decision to read about that particular subject of which I had no particular knowledge. Maybe it was fate. Maybe something else. All that matters now is that I read about MIMO. And today a similar feeling compels me to talk about it.
MheeeeeeMhawwwww
MIMO stands for Multiple Input, Multiple Output. MIMO is applied to make a radio link more robust by increasing the number of transmitting and receiving antennas. The goal of MIMO is to increase network robustness and capacity. MIMO takes several forms.
In one kind of MIMO, multiple antennas transmit along different paths, and multiple receivers accept those signals, each on a different path. This is the basis of beamforming. Not all of the receivers will receive the best signal, so we must design a way for the receiver to determine the best signal. This is done with the help of precoding. For the purpose of this discussion we don't need to understand precoding.
In another type of MIMO, instead of sending multiple signals you send one signal but split it into multiple streams. Each of those streams is transmitted over a different antenna. To the receiver it looks as if each stream has arrived on a different channel, and thus network capacity is increased... at least in theory.
Finally there is diversity coding, best understood as a spray-and-pray technique. In this type of MIMO a single stream is transmitted many times by multiple antennas, in the hope that at least one copy will arrive at its destination unharmed. Desperation is palpable in this one.
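The spray-and-pray intuition is easy to quantify with a small simulation. Assuming, purely for illustration, that each copy is lost independently with some probability, sending more copies shrinks the chance that every copy is lost:

```python
import random

def diversity_success(p_loss, n_antennas, trials=100_000):
    """Fraction of trials where at least one of n copies gets through."""
    wins = sum(
        any(random.random() > p_loss for _ in range(n_antennas))
        for _ in range(trials)
    )
    return wins / trials

for n in (1, 2, 4):
    print(n, "antennas:", round(diversity_success(0.3, n), 3))
# roughly 0.70, 0.91, 0.99: each extra copy shrinks the failure case
```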
MU-MIMO is MIMO with multi user capabilities. In other words you can MIMO with several users at once.
Reading this post left me with several questions. After collecting my thoughts I shared them with one of my smarter friends to understand what he had to say about the subject. My mom used to tell me that it is always good to take advice from someone who is brainier and more knowledgeable than you are. Although you must trust in your own ability its never a bad idea to seek some guidance. Those who know me can tell you that I have always been an obedient child. It should come as no surprise then that I'd listen to her advice.
I took my queries to my friend and this is what he had to say. I paraphrase because I forgot to carry my notepad with me to jot down his wise words.
THUS HE BEGAN
Before we can understand MIMO we must understand wireless networks. A wireless network is a physical network with no wires. When a wireless communication channel is established between two devices it is equivalent to connecting the two devices with a physical wire.
Of course the appeal of wireless networks is that you don't have to invest in huge infrastructural projects to increase the connectivity in a region.
The downside is that since wireless signals can't be directed as well as wired signals there is a lot of signal loss. That leads to a degraded quality of service. Which leads to unsatisfied customers.
Therefore effort has to be spent to get the wireless signal as close to the quality of a wired signal as possible. If one has the inclination to study, it can take an entire lifetime to understand the clever techniques invented to make this possible. Life is short. What we need now is the gist of the matter. Here it is:
We want to efficiently utilize the communication channel. We do this by multiplexing. 
We want to make the networks more robust. We do this by adding redundant network stations.
MIMO is one such technique. However, MIMO has several shortcomings.
It requires dedicated hardware in the base station and the client machine. Existing machines will not work.
It requires complex receiver hardware to assemble the signals.
It does not guarantee a better QoS or signal-to-noise ratio. MIMO in essence is simply a way of increasing the chance of getting a good-quality signal by adopting a brute-force approach.
The goal of MIMO is to increase total network throughput (individual speeds will not be increased, but more users will have access to a uniform speed). But then why do we need MIMO at all? There are alternatives by which these goals can be achieved cheaply.
Most of the MIMO technology can be easily replicated using cheap inter-operable components available in the market right now. Especially in the home networking context.
In fact the principle behind MIMO is best realized when you set up multiple cheap access points rather than highly optimized individual devices. Quantity is greater than quality in network coverage. All the time.
Consider beamforming. By definition, beamforming is the technique of emitting the same signal from different antennas in such a way that the copies add up to form the best signal at the receiver.
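A toy NumPy sketch of that definition: two copies of the same tone reinforce when their phases are matched at the receiver, and cancel when they arrive half a cycle apart. The frequencies here are arbitrary illustration values, not real radio parameters.

```python
import numpy as np

t = np.linspace(0, 1e-3, 1000)
tone = lambda phase: np.sin(2 * np.pi * 5e3 * t + phase)

aligned = tone(0) + tone(0)            # phases matched at the receiver
misaligned = tone(0) + tone(np.pi)     # half a cycle out of phase

print(np.max(np.abs(aligned)))     # ~2.0: the copies reinforce
print(np.max(np.abs(misaligned)))  # ~0.0: the copies cancel
```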
But this can easily be replicated by having multiple base stations, each of them emitting a signal. All the client device has to do is choose the best one.
"But its costly to set up multiple base stations you say?"
I ask different questions.
What is the cost of having multiple antennas on a single station and a receiver? Exactly how is a single station with multiple antennas cheaper than multiple stations with a single antenna each? What happens when the receiver is in motion?
In the last case we discover that beamforming actually reduces to the same old radio signals that we're used to. And we have spent all this money on equipment that only works when the client and base station positions are well defined? Static, in other words. Really? Why not go with a point-to-point connection then? Nothing can beat that in terms of quality.
There is one very big advantage of multiple stations. It is redundancy. No single point of failure. That is something that is well worth the investment.
Next consider spatial multiplexing. Here you divide a single data channel into multiple streams to trick the receiver into believing that it's getting data from multiple channels at a higher speed. It does increase the network capacity. But how many client devices can actually use this technology?
Let us assume that we actually get the client devices to work with this technology. What then? How should the application layer adapt to this change? How should we design our YouTube app, for example, when the last 30 minutes of a 90-minute movie have arrived successfully but the stream carrying the first 60 minutes is lost?
Whatever you gain in base station efficiency you lose at client reception. As a user I feel no better, or even worse, compared to the old network.
Data integrity is just as valuable as data speed. I can understand a congested network giving me slower speeds. But I can't tolerate a fast network giving me data out of place. It may work slowly but it should always be right.
A well designed redundant network will always outperform an "intelligent" auto adjusting technology. Interference exists and we have means to work around it.
Like any breakthrough in communication technology to fully gain the advantages of MIMO we need compatible client as well as server station hardware. This means that we need to invest in new infrastructure. Do the gains provided by MIMO justify the investment? I'll leave that to you to decide.
With these parting notes my friend signed off. Having nothing further to add to the discussion, I must take my leave as well. I hope that my ramblings have been of use to a few people who, like me, often find themselves at a loss in the ever-changing, ever-expanding, progressive world of communication technology. I'm lucky to have a friend who has a certain interest in these things. Though I can't for the life of me understand how anyone can be bothered to read about electronics and radios when you can study cats instead. But no one can fight their nature, I suppose.
If you'd like answers to your wifi problems, don't hesitate to reach out:
write to us on our tumblr page
[https://workrockin.tumblr.com/ask]
tweet
[https://twitter.com/workrockin]
connect with us on linkedin
[https://www.linkedin.com/in/workrock-careers-21b3a2186/]
rosponseai · 5 years
If you’re running a call center, you are familiar with how crucial call center technology is in improving customer experience. Without the right tools and a technology foundation to support the call center infrastructure, your call center cannot function effectively. Let’s discuss call center technology in contemporary call centers a little further.
What is Call Center Technology?
Every call center requires the right technology to power principal operations, boost customer experience and cut expenses. Call center technology encompasses everything that makes up and complements that underlying infrastructure.
Difference Between Call Center Technology and Call Center Infrastructure
Most people can’t differentiate between call center technology and call center infrastructure. The two are very alike in nature and often used interchangeably, but they are distinct. Call center infrastructure refers to the set of software, hardware, and network components enabling a call center to operate effectively, like the LAN network and VoIP telephony. In contrast, call center technology typically refers to the numerous technologies employed to enhance customer experience and operations in a call center, like the automatic call distributor, intelligent contact center routing and so on.
Technology Essentials in a Call Center
The following are some important technologies you must have in your call center:
Customer Relationship Management (CRM)
With a call center, your staff will come into contact with a wide variety of callers. Managing them effectively with a CRM tool is one of the best ways to engage with customers on a regular basis. Why? Because your marketing and operations teams need to access the contact data on a daily basis. If you’re searching for the most effective CRM to handle your customer experience, one with a fully encompassing 360-degree view of all customer engagements is preferable.
Automatic Call Distributor (ACD)
An automatic call distributor, or ACD, handles all the incoming calls that a call center receives and applies pre-set rules to forward each one to the most suitable agent. The most advanced form of modern contact center routing is intelligent skills-based routing. Call center routing can significantly improve first call resolution, or the rate at which questions are answered the first time. According to SQM Group, a 1% improvement in first call resolution converts to $276,000 in yearly operational savings for the typical call center.
Predictive Dialer
A predictive dialer automates outbound calls: the system dials a given set of numbers while calculating which call center agent will be available to receive the call when it successfully connects. This is one aspect of call center dialer technology.
Computer Telephony Integration (CTI)
Computer telephony integration, or CTI, enables call center staff to manage their call dialing without handling physical telephones. There are many benefits to using computer telephony integration in a call center.
Self Service
Nowadays, customers aren’t looking for assistance with their issues; they want to find solutions to their problems themselves. With the support of intelligent self-service options like artificial intelligence IVR or chatbots, customers can solve their problems easily without much effort. According to The Harris Poll, almost 50% of customers with texting capabilities would favor pressing a button to start a text conversation instantly, rather than waiting on hold to talk with an agent.
Case Management
A case management system manages customer inquiries professionally via a ticketing system that logs the problem at each stage of its progress in real time. Having a customer experience platform with an integrated case management system can hugely enhance customer satisfaction in a call center.
Artificial Intelligence
Artificial intelligence, machine learning and big data are all hot topics in the call center industry, and AI can also be used to enhance customer experience. The capacity to extract useful business intelligence from big data sets in real time can offer crucial insights for perfecting customer experience. Social media sentiment analysis can be integrated to understand a visitor’s emotion at each stage of the customer journey. According to Gartner, consumers will soon be able to maintain 85% of their relationship with a business without engaging a real, live agent.
Social Media
Rather than call a representative with a question, consumers would rather go to social media channels, like Facebook or Twitter, to obtain real-time information about their issues. It is much simpler to confirm whether a website is down by checking social media than by calling a call center representative. In tougher customer service circumstances, customers can use social media to notify the company about their query by leaving their order ID. There is no doubt that social media plays a growing role in business. If you want to attain ideal customer-agent engagement in your call center, an effective customer relationship management (CRM) system can go a long way toward making this a reality.
Real-Time Analytics & Reporting
Without real-time input from a reliable reporting tool, you’re left to believe your hard work is hopefully paying off. With data coming in from every touch point, agent, and team, your staff is in a much better position to serve consumers effectively by examining and interpreting incoming data. Most customer experience software includes real-time analytics to understand and enhance customer experience.
Call Center Technology – Where Does Your Business Stand?
Since call center technology is still evolving and improving every day, it is difficult to pinpoint precisely where a business stands. Do you require more advanced call center technology to power your business to greater heights, or just a small system upgrade to address a departmental problem? In any case, start with a customer experience assessment before doing anything, to ensure that the call center software you choose for your business is effective and falls within your budget.
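To make the routing idea concrete, here is a minimal, illustrative Python sketch of skills-based routing: among agents who have the skill a call needs, pick the one who has been idle longest. The agent fields and skill names are assumptions for illustration, not any vendor's data model.

```python
def route_call(required_skill, agents):
    """Pick the longest-idle agent who has the required skill."""
    eligible = [a for a in agents if required_skill in a["skills"]]
    if not eligible:
        return None  # overflow to a general queue instead
    return max(eligible, key=lambda a: a["idle_seconds"])

agents = [
    {"name": "Ana", "skills": {"billing", "spanish"}, "idle_seconds": 340},
    {"name": "Ben", "skills": {"billing"}, "idle_seconds": 125},
    {"name": "Kai", "skills": {"tech"}, "idle_seconds": 610},
]
print(route_call("billing", agents)["name"])  # Ana
```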
0 notes
cdrforea · 4 years
Text
SimpliSafe Home Security System Review: Beautiful -- and Brains, Too!
New Post has been published on https://bestedevices.com/simplisafe-home-security-system-review-beautiful-and-brains-too.html
SimpliSafe Home Security System Review: Beautiful -- and Brains, Too!
"Stylish hardware, a range of sensors and home security monitoring make SimpliSafe a top choice."
Beautifully designed base station fits seamlessly into your home
A set-and-forget installation means that you will only be interrupted if there is a problem
The affordable home surveillance service offers comprehensive coverage
No contracts
Limited smartphone integration out of the box
Home Monitoring Service subscription required
If you're concerned about home security but don't have the time (or inclination) to explore the myriad of products available, a one-box solution like SimpliSafe Protect can calm your fears. This starter kit is part of a wider selection of smart home security lines that include alarms, sensors, cameras and more, and offer professional surveillance support 24/7.
Smart home security is booming, with only 20 percent of Americans currently protected. Amazon is pushing for 360-degree smart home security with its recent takeover of smart-doorbell maker Ring, and is competing with Google’s Nest Secure and newcomers like Abode. SimpliSafe is not (yet) a well-known name, but the company has been active in the home security field for more than ten years, and it is valued at over $1 billion after a recent capital injection.
With the company's third generation system on the shelves of your local big box store, there's no better time to try SimpliSafe.
Home security reinvented with beautiful hardware
In 2018, “safe,” “smart,” and “simple” are no longer the differentiators for smart home devices that they used to be. Now consumers are also demanding style. Inexpensive Kickstarter kits may have been good enough in 2014, but polished systems like Nest Secure have raised the game thanks to beautifully designed sensors and curved keypads.
Terry Walsh / Digital Trends
In response, SimpliSafe worked with the design gurus at IDEO to reinvent smart home security hardware, with elegant results. The SimpliSafe base station is the star of the show and resembles the fruit of a one-night stand between Amazon Echo and Google Home. Equipped with a high-quality loudspeaker, a 95 dB siren, and both cellular and Wi-Fi connectivity, the powerful hub blends wonderfully into the background. When called into action, however, both you and your intruders will know it’s there. An integrated backup battery offers continuous protection in the event of a power failure.
While other components don't have the same visual impact, exploring the beautifully presented starter kit shows a generous selection of smart home sensors and controllers. You can customize the kit contents when ordering your system so prices vary depending on your requirements. Otherwise, choose a pre-configured bento box of your choice.
The fruits of a one-night stand between Amazon Echo and Google Home
For comparison purposes, a custom SimpliSafe kit that exactly matches the $199 Ring Protect system (base station, keypad, one entry sensor, and a motion detector) costs $229 – a little more expensive but, as we’ll discuss in a moment, SimpliSafe’s hardware is more advanced.
The base station is accompanied by a large, wall-mounted keypad that can be used to arm and disarm the system. In contrast to Nest and Ring, the SimpliSafe keypad has a monochrome display and is used both for system configuration and for arming. This new device is more compact than the previous generation and has a larger, brighter screen, illuminated buttons, and a longer signal range. Soft-touch plastics and sleek lines ensure a good match with the base station, but clicking controls (where you press the sides of the screen to navigate) and the annoying beeps that accompany each command cheapen the experience. An oversized key fob can also be included for arming and disarming.
Terry Walsh / Digital Trends
Four entry sensors for windows and doors are neat enough, but they lack the finesse and first-class finish of Nest Secure. A motion sensor fits perfectly in a corner between the wall and ceiling (or it can sit on a desktop). An independent panic alarm, a water leak sensor, and a freeze sensor are also included in the box. For the sake of simplicity, most sensors have a self-adhesive back, but screws are also included for permanent installation.
To ward off potential intruders, a large yard sign and window stickers advertising SimpliSafe’s 24-hour monitoring protection are also included. When you add an expanded selection that includes smoke and carbon monoxide alarms, surveillance cameras, and an upcoming smart doorbell and lock, SimpliSafe is one of the most comprehensive security systems on the market today.
Voice-guided configuration is easy once you get used to a physical keypad
Given the number of components included, you might expect installation to take some time, though the starter kit arrives partially pre-configured. Of course, you need to invest some time to figure out the best placement for the various sensors and other triggers in your kit. After the physical installation, simply press a button on each component to register it. You will then be asked to enter a personalized name for that component on the keypad. An included test mode lets you walk through the house again, pressing each button to check that all devices are working.
A lack of ready-to-use smartphone integration is a real disappointment.
SimpliSafe connects to the cellular network automatically, but in the past, poor signal strength has left some users in the dark. Wi-Fi support has been added in this latest generation, and connecting the system to your home network is easy enough.
The process is easy once you get used to using a physical keypad instead of a smartphone app. Voice announcements through the base station’s high-quality loudspeaker ensure that you are never lost.
It blends beautifully into the background, but you have to subscribe for all functions
Once you are up and running, you will find that SimpliSafe is largely invisible unless there is a problem. This is a big plus for us. Large “Home” and “Away” buttons on the keypad and key fob make it easy to arm and disarm the system. We found the sensors to be robust and very responsive. Triggering the base station alarm and keypad notifications took less than a second.
At 1,000 feet, SimpliSafe offers better sensor range than Nest and Ring, and at the same time outperforms the competition with a battery life of five to seven years. However, the lack of out-of-the-box smartphone integration is a big disappointment. SimpliSafe offers both a web app and a mobile app, but without the company’s $15-per-month subscription service, only basic features are included, such as arming the system, camera viewing, and account management.
Once the service is active, remote notifications, app control, and around-the-clock professional monitoring are unlocked. When your alarm goes off, your base station notifies the monitoring center, which will contact you (and other designated family members or friends). False alarms can easily be called off with a safe word; otherwise, the police or fire department will be dispatched to your home.
As another sweetener, subscribing enables a growing number of third-party device integrations, including August Smart Locks, Amazon Alexa, Google Assistant, the Nest Learning Thermostat, and more.
We love SimpliSafe's low price and comprehensive home security, but would rather have remote notifications and system access available without having to pay for full home surveillance.
Warranty information
SimpliSafe is covered by a three-year limited warranty, well ahead of Nest Secure (two years) and Ring Alarm (one year).
Our opinion
If you want to protect your home with a remote monitoring service, choosing SimpliSafe makes a lot of sense. An affordable service subscription with 24-hour monitoring, remote access, powerful sensors and elegant security hardware is a convincing combination.
SimpliSafe certainly offers real value and performance, even without home surveillance, but the lack of remote app access and notifications weakens the offering. However, such a comprehensive selection of sensors and supporting hardware makes SimpliSafe a fantastic choice for 24-hour house protection.
Is there a better alternative?
At $499, Nest Secure offers great hardware, but it can’t keep up on price and doesn’t have the multitude of security sensors offered by SimpliSafe. Ring Alarm is a more compelling competitor at $199. It too lacks SimpliSafe’s breadth, but Ring is rapidly expanding its security hardware ecosystem, making it one to watch.
How long will it last?
SimpliSafe is one of the original pioneers of smart home security, and a substantial capital injection means the company will be around for some time. Buy with confidence.
Should you buy it?
Good-looking hardware, a comprehensive selection of sensors and supportive monitoring services make SimpliSafe a fantastic choice for 24-hour house protection. Put simply, SimpliSafe is one of the best home security systems on the market.
Updated September 19, 2018 to find that SimpliSafe now works with Google Assistant.
Editor's recommendations
0 notes
terabitweb · 5 years
Text
Original Post from Rapid7 Author: Aaron Sawitsky
In a recent webcast, our panel of cybersecurity experts discussed all things cloud security, including cloud security best practices, how to avoid common security pitfalls in cloud environments, and how to work with DevOps to get the most out of your organization’s cloud investment.
In this blog post, we’ll share some of our experts’ insights into protecting your cloud environment:
Cloud security requires a new mindset
Our security panelists, Rapid7’s Aaron Sawitsky, Bulut Ersavas, Josh Frantz, and Tyler Schmidtke, along with Scott Ward of AWS, said that moving to the cloud requires security teams to develop some new ways of thinking. For security professionals accustomed to seeing and touching physical hardware in a data center, working with cloud environments can be a big adjustment. In order to take full advantage of the benefits of cloud, you’ll have to adapt your organization and your team’s skill sets to fit into your new reality.
There are some special considerations when it comes to the cloud. One difference is that for a cloud environment, the responsibility for security is shared between the cloud customer and the cloud provider. Although the details change depending on the provider, they are generally responsible for securing the underlying infrastructure of the cloud, while the customer is responsible for securing anything they put in that cloud environment.
This arrangement can be highly beneficial, as it gives your organization the opportunity to let security team members who would normally be tasked with infrastructure security focus on new projects. However, it’s also important that everyone at your organization is familiar with exactly what the cloud provider is responsible for keeping secure and what responsibilities still rest on your shoulders. More than a few incidents have occurred because someone incorrectly assumed that the cloud provider was taking care of all security considerations.
Another unique aspect of the cloud is the ease with which new assets can be deployed. In a cloud environment, a developer can deploy new infrastructure with the click of a mouse. As a result, the security team has far less oversight of cloud assets and less input into how they are configured. This can lead to misconfigurations, which are a leading cause of security incidents in cloud environments. At the same time, ease of deployment is a key benefit of the cloud, so security teams need to find a way to minimize the risk of misconfigurations, while still supporting easy deployments.
Related: See how our vulnerability management solutions can help you understand the vulnerabilities and misconfigurations present in your cloud environments
When moving to the cloud, you also have to think about the lifespan of assets. The cloud lets you spin up short-lived virtual instances, which can present challenges if your security team isn’t used to monitoring those assets in real-time. Keep in mind that if you only scan for vulnerabilities every week or every month, you might completely miss an instance that your DevOps team spins up for just a few days. Therefore, if you want to maintain an up-to-date picture of your cloud environment, you will need to use new tools and techniques.
Cloud security strategies and pitfalls
So, how do security teams evolve to better rise to cloud challenges? First, our experts discussed threats to cloud environments and the areas where security teams often go wrong. One of the largest factors in many data breaches is configuration vulnerabilities. Your cloud provider probably offers a variety of controls for your environment. Make sure you take the time to assess these controls and identify the ones that will provide the biggest security benefits. Guidelines such as the CIS Benchmarks for AWS, Azure, and GCP can be a great help when it comes to learning about best practices for configuring the controls in your platform(s).
All the experts on our panel agreed that defining baselines is crucial. Identify what measures should always be in place to effectively minimize risk. Once you’ve defined a baseline, our experts recommended implementing guardrails that ensure all new cloud assets conform to your baseline. This can be done using a tool from your cloud provider, such as AWS Config. You can also give developers templates for properly configured infrastructure using tools like Terraform or AWS CloudFormation. You can even go one step further and automate deployment of new cloud assets with all appropriate configurations applied using tools like Chef or Puppet. This will allow you to easily scale your cloud environment in a secure manner. Another benefit of automating the process is that you minimize the chance of human error.
Visibility is essential to protecting your cloud environment. People in your organization may spin up new instances in different regions, create new networks, launch new services, or even create brand-new AWS accounts. Whatever tools you’re using for visibility and vulnerability assessment need to have a broad-enough scope to take in this entire landscape. They should also have the flexibility to assess asset types beyond traditional VMs. Perhaps most importantly, the tools you’re using for visibility must also have the ability to detect assets that are misconfigured. Even if you define and enforce baseline configurations, misconfigurations can be introduced after deployment. Your security team needs the ability to know when this happens so that they can fix the issue and educate the appropriate employees on what risks they unintentionally introduced with their configuration settings.
DevOps and security culture
In cloud environments, security teams run the risk of stifling innovation if they try to replicate the processes used for on-premises networks and directly control the deployment of new infrastructure or software. By delaying deployments to conduct manual security assessments, your security team can defeat some of the core purposes of using cloud resources: speed, efficiency, and agility. The panelists suggested that moving to a cloud environment provides a great opportunity for security professionals to instead integrate themselves into the DevOps process, transforming it into DevSecOps. This means that security becomes a part of the testing process that occurs before any deployment. Rather than security being a standalone assessment that occurs outside the regular workflow that developers use, security issues are caught during pre-deployment testing and addressed like any other bug.
As our experts pointed out, everyone in the organization wants to do what’s best for the business. It’s important for each team to empathize with each other’s viewpoint and learn together. Security shouldn’t be trying to punish development for unsafe practices. Instead, try sitting down with developers to go through an audit log together. Paint them a picture of what could happen to the entire enterprise if best practices aren’t followed.
Cloud migration and hybrid environments
Most organizations don’t move all of their assets from on-premises to the cloud at once, and in fact, our experts recommended a crawl, walk, run approach when it comes to cloud migrations. That means you’ll end up running both types of environments simultaneously (maybe temporarily or maybe permanently).
Some businesses have completely separate security teams for on-premises and cloud—a solution that our experts don’t recommend. There are many best practices that are similar for both environments, and the teams will need to communicate often regarding emerging threats that need to be addressed across both environments.
When migrating, it’s important to make sure you have a holistic view and don’t lose sight of securing legacy systems as you move to new platforms. And for monitoring and threat assessment, consider solutions that are capable of bridging the divide. Learn more about how Rapid7 InsightVM and InsightIDR allow you to manage risk for both on-premises and cloud environments, all in one place.
Go to Source: “Cloud Security Fundamentals: Strategies to Secure Cloud Environments” by Aaron Sawitsky, Rapid7
0 notes
blackcatblog-blog · 5 years
Text
Tute 8
PERSISTENT DATA
The opposite of dynamic—it doesn’t change and is not accessed very frequently.
Core information, also known as dimensional information in data warehousing. Demographics of entities—customers, suppliers, orders.
Master data that’s stable.
Data that exists from one instance to another. Data that exists across time independent of the systems that created it. Now there’s always a secondary use for data, so there’s more persistent data. A persistent copy may be made or it may be aggregated. The idea of persistence is becoming more fluid.
Stored in actual format and stays there versus in-memory where you have it once, close the file and it’s gone. You can retrieve persistent data again and again. Data that’s written to the disc; however, the speed of the discs is a bottleneck for the database. Trying to move to memory because it’s 16X faster.
Every client has their own threshold for criticality (e.g., financial services don’t want to lose any debits or credits). Now, with much more data from machines and sensors, there is greater transactionality. The meta-data is as important as the data itself. Meta-data must be transactional.
Non-volatile. Persists in the face of a power outage.
Any data stored in a way that it stays stored for an extended period versus in-memory data. Stored in the system modeled and structured to endure power outages. Data doesn’t change at all.
Data considered durable at rest with the coming and going of hardware and devices. There’s a persistence layer at which you hold your data at risk.Data that is set and recoverable whether in flash or memory backed.
With persistent data, there is reasonable confidence that changes will not be lost and the data will be available later. Depending on the requirements, in-cloud or in-memory systems can qualify. We care most about the “data” part. If it’s data, we want to enable customers to read, query, transform, write, add value, etc.
A way to persist data to disk or storage. Multiple options to do so with one replica across data centers in any combination with and without persistence. Snapshot data to disk or snapshot changes. Write to disk every one second or every write. Users can choose between all options. Persistence is part of a high availability suite which provides replication and instant failover. Registered over multiple clouds. Host thousands of instances over multiple data centers with only two node failures per day. Users can choose between multiple data centers and multiple geographies. We are the company behind Redis. Others treat as a cache and not a database. Multiple nodes – data written to disks. You can’t do that with regular open source. If you don’t do high availability, like recommended, you can lose your data.
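To make those options concrete, here is a minimal, illustrative redis.conf sketch of the two persistence styles described above: periodic snapshots versus logging every write. The exact thresholds are assumptions for the example, not recommendations.

# RDB snapshotting: write a point-in-time snapshot to disk if at
# least 1000 keys changed within 60 seconds (illustrative thresholds)
save 60 1000

# AOF (append-only file): log every change instead of only snapshotting
appendonly yes

# fsync policy: "everysec" flushes to disk once per second (the
# "every one second" option above); "always" would fsync on every write
appendfsync everysec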
Anything that goes to a relational or NoSQL database in between.
DATA
All information systems require the input of data in order to perform organizational activities. Data, as described by Stair and Reynolds (2006), is made up of raw facts such as employee information, wages, and hours worked, barcode numbers, tracking numbers or sale numbers.
The scope of data collected depends on what information needs to be extrapolated for maximum efficiency. Kumar and Palvia (2001) state that: “Data plays a vital role in organizations, and in recent years companies have recognized the significance of corporate data as an organizational asset” (¶ 4). Raw data on its own, however, has no representational value (Stair and Reynolds, 2006). Data is collected in order to create information and knowledge about particular subjects that interest any given organization in order for that organization to make better management decisions.
DATABASE
A database (DB), in the most general sense, is an organized collection of data. More specifically, a database is an electronic system that allows data to be easily accessed, manipulated and updated.
In other words, a database is used by an organization as a method of storing, managing and retrieving information.
Modern databases are managed using a database management system (DBMS).
DATABASE SERVER
The term database server may refer to both hardware and software used to run a database, according to the context. As software, a database server is the back-end portion of a database application, following the traditional client-server model. This back-end portion is sometimes called the instance. It may also refer to the physical computer used to host the database. When mentioned in this context, the database server is typically a dedicated higher-end computer that hosts the database.
Note that the database server is independent of the database architecture. Relational databases, flat files, non-relational databases: all these architectures can be accommodated on database servers.
DATABASE MANAGEMENT SYSTEM
Database Management System (also known as DBMS) is a software for storing and retrieving users’ data by considering appropriate security measures. It allows users to create their own databases as per their requirement.
It consists of a group of programs which manipulate the database and provide an interface between the database and its users, including other application programs.
The DBMS accepts the request for data from an application and instructs the operating system to provide the specific data.
In large systems, a DBMS helps users and other third-party software to store and retrieve data.
DBMS vs FILES
Multi-user access: a DBMS supports multi-user access; a plain file system does not.
Scale: a DBMS is designed to fulfill the needs of both small and large businesses; a file-based approach is limited to smaller systems.
Redundancy and integrity: a DBMS removes redundancy and enforces integrity; file systems suffer from redundancy and integrity issues.
Cost: a DBMS is expensive, but its long-term total cost of ownership is low; a file system is cheaper up front.
Transactions: a DBMS makes complicated transactions easy to implement; file systems offer no support for complicated transactions.
Pros of the File System
Performance can be better than when you do it in a database. To justify this: if you store large files in the DB, it may slow down performance, because a simple query to retrieve the list of files or filenames will also load the file data if you used SELECT * in your query. In a file system, accessing a file is quite simple and lightweight.
Saving the files and downloading them in the file system is much simpler than it is in a database since a simple “Save As” function will help you out. Downloading can be done by addressing a URL with the location of the saved file.
Migrating the data is an easy process. You can just copy and paste the folder to your desired destination while ensuring that write permissions are provided to your destination.
It’s cost effective in most cases to expand your web server rather than pay for certain databases.
It’s easy to migrate it to cloud storage i.e. Amazon S3, CDNs, etc. in the future.
Cons of the File System
Loosely packed. There are no ACID (Atomicity, Consistency, Isolation, Durability) operations in relational mapping, which means there are no guarantees. Consider a scenario in which your files are deleted from their location manually or by an attacker. You might not know whether the file exists or not. Painful, right?
Low security. Since your files can be saved in a folder where you should have provided write permissions, it is prone to safety issues and invites trouble, like hacking. It’s best to avoid saving in the file system if you cannot afford to compromise in terms of security.
Pros of Database
ACID consistency, which includes a rollback of an update that is complicated when files are stored outside the database.
Files will be in sync with the database and cannot be orphaned, which gives you the upper hand in tracking transactions.
Backups automatically include file binaries.
It’s more secure than saving in a file system.
Cons of Database
You may have to convert the files to BLOBs in order to store them in the database (see the JDBC sketch after this list).
Database backups will be more hefty and heavy.
Memory is used inefficiently. Often, RDBMSs are RAM-driven, so all data has to go to RAM first. Yeah, that’s right. Have you ever thought about what happens when an RDBMS has to find and sort data? An RDBMS tracks each data page, even the smallest amount of data read and written, and it has to track whether the page is in memory or on disk, whether it’s indexed, whether it’s physically sorted, and so on.
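As a minimal sketch of the BLOB conversion mentioned in the cons above, the standard JDBC API can stream a file into a BLOB column. The connection string and the files(name, data) table here are hypothetical.

import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlobStore {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string and table "files(name, data)"
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/demo", "user", "password");
             InputStream in = new FileInputStream("report.pdf")) {

            PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO files (name, data) VALUES (?, ?)");
            ps.setString(1, "report.pdf");
            ps.setBinaryStream(2, in); // streams the file bytes into a BLOB column
            ps.executeUpdate();
        }
    }
}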
TYPES OF DATABASE
Depending upon the usage requirements, there are following types of databases available in the market:
Centralised database.
Distributed database.
Personal database.
End-user database.
Commercial database.
NoSQL database.
Operational database.
Relational database.
Cloud database.
Object-oriented database.
Graph database.
But we can consider the following as the main types of databases.
1.Relational Database
The relational database is the most common and widely used database of all. A relational database stores data in the form of data tables.
2.Operational Database
Operational database, which has garnered huge popularity from different organizations, generally includes customer database, inventory database, and personal database.
3.Data Warehouse
There are many organizations that need to keep all their important data for a long span of time. This is where the importance of the data warehouse comes into play.
4.Distributed Database
As its name suggests, the distributed databases are meant for those organizations that have different workplace venues and need to have different databases for each location.
5.End-user Database
To meet the needs of the end-users of an organization, the end-user database is used.
key terms of different types of database users
application programmer: user who implements specific application programs to access the stored data
application user: accesses an existing application program to perform daily tasks.
database administrator (DBA): responsible for authorizing access to the database, monitoring its use and managing all the resources to support the use of the entire database system
end user: people whose jobs require access to a database for querying, updating and generating reports
sophisticated user: those who use other methods, other than the application program, to access the database
STATEMENT VS PREPARED STATEMENT VS CALLABLE STATEMENT IN JAVA
The JDBC API provides 3 different interfaces to execute different SQL queries. They are:
Statement: Statement interface is used to execute normal SQL Queries.
PreparedStatement: It is used to execute dynamic or parametrized SQL Queries.
CallableStatement: It is used to execute the Stored Procedure.
STATEMENT
In JDBC, Statement is an interface. Using a Statement object, we can send SQL queries to the database. At the time of creating a Statement object, we are not required to provide any query. A Statement object works only for static queries.
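A minimal sketch of a static query executed through a Statement; the connection string and the employees table are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StatementDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string and "employees" table
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/demo", "user", "password");
             Statement stmt = conn.createStatement();
             // The SQL string is fixed at execution time: a static query
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM employees")) {

            while (rs.next()) {
                System.out.println(rs.getInt("id") + ": " + rs.getString("name"));
            }
        }
    }
}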
PREPARED STATEMENT
PreparedStatement is an interface in the java.sql package. It extends the Statement interface.
Benefits of Prepared Statement:
It can be used to execute dynamic and parametrized SQL Query.
A PreparedStatement is faster than the Statement interface: with Statement, the query is compiled and executed every time, while with PreparedStatement the query is compiled once and simply executed thereafter.
It can be used for both static and dynamic query.
With PreparedStatement there is no chance of a SQL injection attack, a common problem in database programming (see the sketch below).
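A minimal sketch of a parameterized query. Because the user input is bound to a placeholder rather than concatenated into the SQL string, it cannot alter the query’s structure, which is the SQL injection protection mentioned above. The table and column names are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PreparedStatementDemo {
    public static void main(String[] args) throws Exception {
        String userInput = "Smith"; // could come from an untrusted source
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/demo", "user", "password")) {

            // The query is precompiled once; only the parameter changes per execution
            PreparedStatement ps = conn.prepareStatement(
                "SELECT id, name FROM employees WHERE name = ?");
            ps.setString(1, userInput); // bound as data, never parsed as SQL
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + ": " + rs.getString("name"));
                }
            }
        }
    }
}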
CALLABLE STATEMENT
CallableStatement in JDBC is an interface in the java.sql package, and it is the child interface of PreparedStatement. A CallableStatement is used to execute stored procedures and functions. Like a method, a stored procedure has its own parameters. A stored procedure has 3 types of parameters.
IN PARAMETER : IN parameter is used to provide input values.
OUT PARAMETER : OUT parameter is used to collect output values.
IN OUT PARAMETER : It is used to provide input and to collect output values.
The driver software vendor is responsible for providing the implementations of the CallableStatement interface. If a stored procedure has an OUT parameter, then to hold that output value we must register every OUT parameter using the registerOutParameter() method of CallableStatement. The CallableStatement interface can outperform Statement and PreparedStatement for this job because it calls a stored procedure that is already compiled and stored in the database.
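A minimal sketch of calling a stored procedure with one IN and one OUT parameter; the procedure get_employee_count is hypothetical.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class CallableStatementDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/demo", "user", "password")) {

            // Hypothetical procedure: get_employee_count(IN dept, OUT total)
            CallableStatement cs = conn.prepareCall("{call get_employee_count(?, ?)}");
            cs.setString(1, "Engineering");            // IN parameter: input value
            cs.registerOutParameter(2, Types.INTEGER); // OUT parameter: holds the result
            cs.execute();
            System.out.println("Employees: " + cs.getInt(2));
        }
    }
}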
ORM
ORMs have some nice features. They can handle much of the dog-work of copying database columns to object fields. They usually handle converting the language’s date and time types to the appropriate database type. They generally handle one-to-many relationships pretty elegantly as well by instantiating nested objects. I’ve found if you design your database with the strengths and weaknesses of the ORM in mind, it saves a lot of work in getting data in and out of the database. (You’ll want to know how it handles polymorphism and many-to-many relationships if you need to map those. It’s these two domains that provide most of the ‘impedance mismatch’ that makes some call ORM the ‘vietnam of computer science’.)
For applications that are transactional, i.e. you make a request, get some objects, traverse them to get some data and render it on a Web page, the performance tax is small, and in many cases ORM can be faster because it will cache objects it’s seen before, that otherwise would have queried the database multiple times.
For applications that are reporting-heavy, or deal with a large number of database rows per request, the ORM tax is much heavier, and the caching that they do turns into a big, useless memory-hogging burden. In that case, simple SQL mapping (LinQ or iBatis) or hand-coded SQL queries in a thin DAL is the way to go.
Pros of ORM:
Portable: with an ORM you write your structure once, and the ORM layer generates the final statement suitable for the configured DBMS. This is an excellent advantage: a simple operation like limiting results is added as ‘limit 0,100’ at the end of a select statement in MySQL, while it is ‘select top 100 from table’ in MS SQL.
Nesting of data: in case of relationships, the ORM layer will pull the data automatically for you.
Single language: you don’t need to know the SQL language to deal with the database, only your development language.
Adding is like modifying: most ORM layers treat adding new data (SQL INSERT) and updating data (SQL UPDATE) in the same way; this makes writing and maintaining code a piece of cake.
Cons of ORM
Slow: if you compare the performance of writing raw SQL with using an ORM, you will find raw SQL much faster, as there is no translation layer.
Tuning: if you know SQL language and your default DBMS well, then you can use your knowledge to make queries faster but this is not the same when using ORM.
Complex Queries: some ORM layers have limitations especially when executing queries so sometimes you will find yourself writing raw SQL.
Studying: in case you are working in a big data project and you are not happy with the performance, you will find yourself studying the ORM layer so that you can minimize the DBMS hits.
JAVA ORM TOOLS
Hibernate
Hibernate is an object-relational mapping (ORM) library for the Java language, providing a framework for mapping an object-oriented domain model to a traditional relational database. Hibernate solves object-relational impedance mismatch problems by replacing direct persistence-related database accesses with high-level object handling functions.
Hibernate’s primary feature is mapping from Java classes to database tables (and from Java data types to SQL data types). Hibernate also provides data query and retrieval facilities. Hibernate generates the SQL calls and attempts to relieve the developer from manual result set handling and object conversion and keep the application portable to all supported SQL databases with little performance overhead.
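As a minimal sketch of that class-to-table mapping, here is a JPA-annotated entity of the kind Hibernate consumes; the Employee class and the employees table are hypothetical.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

// Hibernate maps this class to the "employees" table and its fields
// to columns, generating the SQL behind the scenes.
@Entity
@Table(name = "employees")
public class Employee {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "name")
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

Persisting an instance is then a matter of calling something like session.save(employee) rather than writing the INSERT statement by hand.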
IBatis / MyBatis
iBATIS is a persistence framework which automates the mapping between SQL databases and objects in Java, .NET, and Ruby on Rails. In Java, the objects are POJOs (Plain Old Java Objects). The mappings are decoupled from the application logic by packaging the SQL statements in XML configuration files. The result is a significant reduction in the amount of code that a developer needs to access a relational database using lower level APIs like JDBC and ODBC.
Other persistence frameworks such as Hibernate allow the creation of an object model (in Java, say) by the user, and create and maintain the relational database automatically. iBATIS takes the reverse approach: the developer starts with an SQL database and iBATIS automates the creation of the Java objects. Both approaches have advantages, and iBATIS is a good choice when the developer does not have full control over the SQL database schema.
For example, an application may need to access an existing SQL database used by other software, or access a new database whose schema is not fully under the application developer’s control, such as when a specialized database design team has created the schema and carefully optimized it for high performance.
Toplink
In computing, TopLink is an object-relational mapping (ORM) package for Java developers. It provides a framework for storing Java objects in a relational database or for converting Java objects to XML documents.
TopLink Essentials is the reference implementation of the EJB 3.0 Java Persistence API (JPA) and the open-source community edition of Oracle’s TopLink product. TopLink Essentials is a limited version of the proprietary product. For example, TopLink Essentials doesn’t provide cache synchronization between clustered applications, some cache invalidation policy, and query Cache.
.NET ORM TOOLS
LinqConnect
LinqConnect is a fast, lightweight, and easy to use LINQ to SQL compatible ORM solution, supporting SQL Server, Oracle, MySQL, PostgreSQL, and SQLite. It allows you to use efficient and powerful data access for your .NET Framework, Metro, Silverlight, or Windows Phone applications supporting Code-First, Model-First, Database-First or mixed approaches.
NHibernate
Entity Developer for NHibernate, the best NHibernate designer to date, allows you to create NHibernate models quickly in a convenient GUI environment.
Devart has almost ten years’ experience developing visual ORM model designers for LINQ to SQL and Entity Framework, and there are a number of satisfied LINQ to SQL and Entity Framework developers who use Entity Developer. Our extensive experience and skills have become the soundest cornerstone of our NHibernate designer.
Entity Framework 6
Entity Framework 6 (EF6) is a tried and tested object-relational mapper (O/RM) for .NET with many years of feature development and stabilization.
As an O/RM, EF6 reduces the impedance mismatch between the relational and object-oriented worlds, enabling developers to write applications that interact with data stored in relational databases using strongly-typed .NET objects that represent the application’s domain, and eliminating the need for a large portion of the data access “plumbing” code that they usually need to write.
NOSQL
NoSQL trades atomicity and consistency for scalability and availability. According to the CAP theorem (Consistency, Availability, and Tolerance to network partitions) for shared-data systems, only two of the three can be achieved at any time. The NoSQL approach to storing and querying data is quite different, and better in several ways:
Schemaless data representation: Most offer schemaless data representation and allow storing semi-structured data that can continue to evolve over time, including adding new fields or even nesting the data, for example in a JSON representation.
Development time: No complex SQL queries. No JOIN statements.
Speed: Very high-speed delivery, with entity-level caching mostly built in.
Plan ahead for scalability: Avoid rework later.
Why do we need NoSQL
NoSQL offers a simpler data model; in other words, it encourages embedding and indexing your data rather than joining it.
If a developer wants to do rapid development of the application then NoSQL will be useful.
NoSQL provides high scaling out capability.
NoSQL allows you to add any kind of data in your database because it is flexible.
It also provides distributed storage and high availability of the data.
Streaming is also accepted by NoSQL because it can handle a high volume of data which is stored in your database.
It offers real-time analysis and redundancy, replicating your data on more than one server.
As mentioned earlier, it is highly scalable therefore it can be implemented with a very low budget.
Types of NoSQL databases
There are 4 basic types of NoSQL databases:
Key-Value Store – It has a big hash table of keys & values {Example- Riak, Amazon S3 (Dynamo)} (see the sketch after this list)
Document-based Store- It stores documents made up of tagged elements. {Example- CouchDB}
Column-based Store- Each storage block contains data from only one column, {Example- HBase, Cassandra}
Graph-based-A network database that uses edges and nodes to represent and store data. {Example- Neo4J}
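To make the first category concrete, here is a minimal sketch using the Jedis client for Redis, a popular key-value store. Redis is not in the example list above; it stands in here for any key-value database.

import redis.clients.jedis.Jedis;

public class KeyValueDemo {
    public static void main(String[] args) {
        // Assumes a Redis server running on localhost:6379
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // The whole data model is keys mapped to values: no schema, no joins
            jedis.set("user:1001:name", "Alice");
            jedis.set("user:1001:email", "alice@example.com");

            System.out.println(jedis.get("user:1001:name")); // prints: Alice
        }
    }
}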
HADOOP
What is Hadoop:
Hadoop is an open-source tool from the Apache Software Foundation.
It provides an efficient framework for running jobs on multiple nodes of clusters.
Hadoop consists of three key parts :
HADOOP Distributed file system (HDFS) – It is the storage layer of Hadoop.
Map Reduce – It is the data processing layer of Hadoop.
YARN – It is the resource management layer of Hadoop.
Why Hadoop
Hadoop is still the backbone of all the Big Data Applications, following characteristics of Hadoop make it a unique platform:
Open Source
Distributed Processing
Fault Tolerance
Reliability
High Availability
Scalability
Economic
Easy to use
Data Locality
HDFS Key Features
HDFS is a fault-tolerant and self-healing distributed filesystem designed to turn a cluster of industry-standard servers into a massively scalable pool of storage. Developed specifically for large-scale data processing workloads where scalability, flexibility, and throughput are critical, HDFS accepts data in any format regardless of schema, optimizes for high-bandwidth streaming, and scales to proven deployments of 100PB and beyond.
Hadoop Scalable:
HDFS is designed for massive scalability, so you can store unlimited amounts of data in a single platform. As your data needs grow, you can simply add more servers to linearly scale with your business.
Flexibility:
Store data of any type — structured, semi-structured, unstructured — without any upfront modeling. Flexible storage means you always have access to full-fidelity data for a wide range of analytics and use cases.
Reliability:
Automatic, tunable replication means multiple copies of your data are always available for access and protection from data loss. Built-in fault tolerance means servers can fail but your system will remain available for all workloads.
MapReduce Key Features
Accessibility:
Supports a wide range of languages for developers, including C++, Java, or Python, as well as high-level language through Apache Hive and Apache Pig.
Flexibility:
Process any and all data, regardless of type or format — whether structured, semi-structured, or unstructured. Original data remains available even after batch processing for further analytics, all in the same platform.
Reliability:
Built-in job and task trackers allows processes to fail and restart without affecting other processes or workloads. Additional scheduling allows you to prioritize processes based on needs such as SLAs.
Hadoop Scalable:
MapReduce is designed to match the massive scale of HDFS and Hadoop, so you can process unlimited amounts of data, fast, all within the same platform where it’s stored.
While MapReduce continues to be a popular batch-processing tool, Apache Spark’s flexibility and in-memory performance make it a much more powerful batch execution engine. Cloudera has been working with the community to bring the frameworks currently running on MapReduce onto Spark for faster, more robust processing.
MapReduce is designed to process unlimited amounts of data of any type that’s stored in HDFS by dividing workloads into multiple tasks across servers that are run in parallel.
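The canonical illustration of that divide-and-run-in-parallel model is word count: mappers emit (word, 1) pairs from their slice of the input, and reducers sum the counts for each word. A condensed sketch against the standard Hadoop MapReduce API follows; the input and output HDFS paths come from the command line.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map phase: each mapper tokenizes its input split and emits (word, 1)
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sums the counts emitted for each distinct word
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}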
YARN Key Features
YARN provides open source resource management for Hadoop, so you can move beyond batch processing and open up your data to a diverse set of workloads, including interactive SQL, advanced modeling, and real-time streaming.
Hadoop Scalable:
YARN is designed to handle scheduling for the massive scale of Hadoop so you can continue to add new and larger workloads, all within the same platform.
Dynamic Multi-tenancy:
Dynamic resource management provided by YARN supports multiple engines and workloads all sharing the same cluster resources. Open up your data to users across the entire business environment through batch, interactive, advanced, or real-time processing, all within the same platform so you can get the most value from your Hadoop platform.
0 notes
AR and VR in Healthcare Market 2019 Industry, Analysis, Share, Growth, Forecast to 2023
Virtual reality (VR), also known as immersive multimedia or computer-simulated reality, is a computer technology that replicates an environment, real or imagined, and simulates a user's physical presence in that environment to allow for user interaction. Virtual reality artificially creates sensory experiences, which can include sight, touch, hearing, and smell.
 Get Sample copy with Latest Innovations and Future Advancements @ https://www.orbisresearch.com/contacts/request-sample/2484561
 Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality.
 Hardware components for augmented reality are: processor, display, sensors and input devices. Modern mobile computing devices like smartphones and tablet computers contain these elements which often include a camera and MEMS sensors such as accelerometer, GPS, and solid state compass, making them suitable AR platforms.
 North America dominated the market in 2016 with a revenue share of 43%, which can be attributed to constant technological advancement of related products, prevalence of neurological & psychological disorders, increased adoption of such advanced technologies, and presence of a sophisticated healthcare infrastructure. Furthermore, growing technological advancements in information technology and government support for integration of these technologies in medical field contribute to the large share of the market.
 Augmented Reality (AR) and Virtual Reality (VR) in Healthcare Market Top Players:
·         SAMSUNG
·         MICROSOFT
·         GOOGLE
·         FaceBook
·         Carl Zeiss
·         Baofeng
·         Sony
·         Razer
·         HTC
·         Daqri
·         AMD
·         Atheer
·         Meta
·         CastAR
·         Skully
·         HP
·         Antvr
·         Lumus
·         Fove
·         Sulon
·         JINWEIDU
·         Virglass
·         Emaxv
·         Epson
 Enquiry Before Buying @ https://www.orbisresearch.com/contacts/enquiry-before-buying/2484561
Industrial Augmented Reality (AR) and Virtual Reality (VR) in Healthcare Market Segmentation:
Market Segment by Type, covers
·         Mobile
·         PC/Home Console
·         Headset AR
 Market Segment by Applications, can be divided into
·         Surgical Training
·         Surgical Navigation
·         Others
 Table of Contents:
There are 15 Chapters to deeply display the global Industrial Augmented Reality (AR) and Virtual Reality (VR) in Healthcare Market.
Chapter 1, to describe Industrial Augmented Reality (AR) and Virtual Reality (VR) in Healthcare  Introduction, product scope, market overview, market opportunities, market risk, market driving force;
Chapter 2, to analyze the top manufacturers of Industrial Augmented Reality (AR) and Virtual Reality (VR) in Healthcare , with sales, revenue, and price of Industrial Augmented Reality (AR) and Virtual Reality (VR) in Healthcare , in 2016 and 2017;
Chapter 3, to display the competitive situation among the top manufacturers, with sales, revenue and market share in 2016 and 2017;
Chapter 4, to show the global market by regions, with sales, revenue and market share of Industrial Augmented Reality (AR) and Virtual Reality (VR) in Healthcare , for each region, from 2013 to 2018;
 Browse Full Report @ https://www.orbisresearch.com/reports/index/global-north-america-europe-asia-pacific-south-america-middle-east-and-africa-augmented-reality-ar-and-virtual-reality-vr-in-healthcare-market-2018-forecast-to-2023
 Chapter 5, 6, 7, 8 and 9, to analyze the market by countries, by type, by application and by manufacturers, with sales, revenue and market share by key countries in these regions;
Chapter 10 and 11, to show the market by type and application, with sales market share and growth rate by type, application, from 2013 to 2018;
Chapter 12, Industrial Augmented Reality (AR) and Virtual Reality (VR) in Healthcare Market forecast, by regions, type and application, with sales and revenue, from 2018 to 2023;
Chapter 13, 14 and 15, to describe Industrial Augmented Reality (AR) and Virtual Reality (VR) in Healthcare  sales channel, distributors, traders, dealers, Research Findings and Conclusion, appendix and data source
 About Us:
Orbis Research (orbisresearch.com) is a single point aid for all your market research requirements. We have vast database of reports from the leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients to map their needs and we produce the perfect required market research study for our clients.
Contact Us:
Hector Costello
Senior Manager – Client Engagements
4144N Central Expressway,
Suite 600, Dallas,
Texas - 75204, U.S.A.
Phone No.: +1 (214) 884-6817; +9120641 01019
0 notes
mikemortgage · 5 years
Text
Top 10 IoT vulnerabilities of 2018
Attention developers of Internet of Things devices: The Open Web Application Security Project (OWASP) has updated its 2018 Top 10 IoT vulnerabilities list.
If you’re a creator or manufacturer of IoT devices, these are the potential traps that have to be fixed before a product is released. The list comes as a colourful chart that can be posted.
And if you’re a buyer of IoT products these are the vulnerabilities you should be looking for before putting down cash.
The primary theme for the list is simplicity, says OWASP. “Rather than having separate lists for risks vs. threats vs. vulnerabilities—or for developers vs. enterprises vs. consumers—the project team elected to have a single, unified list that captures the top things to avoid when dealing with IoT Security.
“The team recognized that there are now dozens of organizations releasing elaborate guidance on IoT Security—all of which are designed for slightly different audiences and industry verticals. We thought the most useful resource we could create is a single list that addresses the highest priority issues for manufacturers, enterprises, and consumers at the same time.”
1-Weak, Guessable, or Hardcoded Passwords
Use of easily brute-forced, publicly available, or unchangeable credentials, including backdoors in firmware or client software that grants unauthorized access to deployed systems.
2-Insecure Network Services 
Unneeded or insecure network services running on the device itself, especially those exposed to the internet, that compromise the confidentiality, integrity/authenticity, or availability of information or allow unauthorized remote control…
3-Insecure Ecosystem Interfaces
Insecure web, backend API, cloud, or mobile interfaces in the ecosystem outside of the device that allows compromise of the device or its related components. Common issues include a lack of authentication/authorization, lacking or weak encryption, and a lack of input and output filtering.
4-Lack of Secure Update Mechanism
Lack of ability to securely update the device. This includes lack of firmware validation on device, lack of secure delivery (un-encrypted in transit), lack of anti-rollback mechanisms, and lack of notifications of security changes due to updates.
5-Use of Insecure or Outdated Components
Use of deprecated or insecure software components/libraries that could allow the device to be compromised. This includes insecure customization of operating system platforms, and the use of third-party software or hardware components from a compromised supply chain.
6-Insufficient Privacy Protection
User’s personal information stored on the device or in the ecosystem that is used insecurely, improperly, or without permission.
7-Insecure Data Transfer and Storage
Lack of encryption or access control of sensitive data anywhere within the ecosystem, including at rest, in transit, or during processing.
8-Lack of Device Management
Lack of security support on devices deployed in production, including asset management, update management, secure decommissioning, systems monitoring, and response capabilities.
9-Insecure Default Settings
Devices or systems shipped with insecure default settings or lack the ability to make the system more secure by restricting operators from modifying configurations.
10-Lack of Physical Hardening
Lack of physical hardening measures, allowing potential attackers to gain sensitive information that can help in a future remote attack or take local control of the device.
0 notes
louisonurmark · 5 years
Text
Samsung’s Galaxy Watch Active is the follow-up to its Gear Sport, a stripped-down smartwatch focused on fitness tracking and glanceable notifications. At just $200, it’s much cheaper than the flagship $350 Galaxy Watch and the Apple Watch Series 4. Saving money often means sacrificing features, but the Active doesn’t lose out on much; it’s fast, swim-proof, has built-in GPS, includes a slew of fitness modes, and lasts a long time on a charge. However, it does give up a beloved piece of hardware that’s become a signature of Samsung’s other smartwatches: the rotating bezel. But the Galaxy Watch Active still strikes a very appealing balance for its price. I’d already call it one of the better options for Android users who understand its limitations, which I’ll get into below.
The Active’s design is fairly generic. It comes in nicer colors than the black variant I reviewed, but all of them share the same understated style that almost looks like something Pebble would have made. It’s not flashy, but it works. The 20mm bands are easy to replace, as well.
The Galaxy Watch Active’s most shining attribute is comfort. Its aluminum case is far lighter than the stainless steel Galaxy Watch or the Apple Watch Series 4, and it’s thin enough to not get caught up on shirt cuffs or outerwear. The watch rests flat on my wrist, and I quickly forget that I’m wearing it, whereas I never really “forget” that my stainless steel Series 4 is on. It’s such a pleasant fit that I’ve been able to sleep wearing the Active without any discomfort. The watch automatically switches into sleep-tracking mode when it detects that you’ve gone to bed. It breaks down the quality of your sleep based on how long you spent in each phase. None of this is unique among fitness wearables that offer sleep tracking (like a Fitbit), but it’s something the pricier Apple Watch still can’t do natively.
The trade-off for that comfort is a smaller screen, especially compared to the monstrous 46mm Galaxy Watch. The Active has a 40mm case with a significant bezel running around its 1.1-inch OLED display. I wouldn’t call it cramped, per se — content on-screen is perfectly legible, and you can tap where you need without many mistakes — but whereas the latest Apple Watch can sometimes feel like a computer on your wrist, the Galaxy Watch Active doesn’t leave that same impression. Some people might actually prefer that, and Samsung clearly cared more about fit than screen real estate this time around. Speaking of fit, the company includes two sizes of its silicone sport strap in the box; I immediately had to swap for the larger one.
But Samsung took something very important away from the Galaxy Watch Active: it lacks the rotating bezel that has proven to be an intuitive, natural, and fun control mechanism on the company’s other smartwatches. The Tizen Wearable OS 4.0 software is designed to put the display’s circular shape to good use, but navigating the Galaxy Watch Active can feel more finicky without the rotating bezel and its satisfying clicks.
For one, the watch’s software doesn’t really take into account this significant change in how you interact with it. The user experience is largely identical to that of the Galaxy Watch, and it’s clearly meant to work best with a rotating bezel that can quickly scroll through menus.
Relying only on taps and swipes (plus the physical back and home buttons) isn’t the end of the world, but it undeniably feels like more work. If the app you tap on in the apps drawer isn’t already highlighted, for example, you’ve got to tap a second time to open it. The Active is at least a pleasure to swipe across with its smooth glass top and curved edges. Your finger won’t be knocking into a raised bezel as with the Galaxy Watch. (Of course, that also means you run a higher risk of a shattered screen if you drop it.)
Tizen OS 4.0 runs very fluidly on the Active, which is powered by the same processor as the Galaxy Watch. From the main watchface, you can swipe down from the top for quick toggles / settings, swipe right to view your notifications, or swipe left to move between any widgets (alarms, calendar, music controls, sleep tracking, etc.) you want fast access to. Hold down on the main screen to switch to another watchface or download more; thousands of watchfaces are available, a catalog that, frankly, is hopeless to navigate. But at least you’ve got options.
The built-in watchfaces were fine for my tastes. You can change the color of some, and others offer customizable complications to display the information most important to you. Samsung’s watchfaces don’t feel quite as data-rich as some of those on the Apple Watch Series 4, but, again, that probably has to do with the modest display. Pit this against a Fitbit Versa, and the comparison swings in Samsung’s favor.
Samsung does a good job of optimizing its wearable apps for a round display, but I still think some of the icons and user interface elements are ugly compared to the Apple Watch or Google’s revamped Wear OS. The company says there have been some visual tweaks inspired by its One UI design guidelines, but they’re barely noticeable. Plus, since Tizen is its own separate operating system, sometimes you’re required to install a “companion” app on your Android phone if you download watch apps such as Uber. That feels like unnecessary clutter. Many apps send basic notifications without any useful response actions, but you can customize which ones you want to reach your wrist.
Third-party app selection on the Active is, in general, dismal compared to the Apple Watch and Wear OS, though Samsung has a few popular fitness apps like Strava on board. And the included Spotify app lets you download music for offline listening, which the Apple Watch version can’t yet do. That’s good news for runners, and it might be enough to sell some people on the Active. But there’s no Google Maps or Google Messages for Tizen.
Just as with Samsung’s phones, Bixby is a weak point on the Active. It sometimes fumbles the accuracy of dictated messages, and it will frequently steer you back to your phone for many questions if you go deeper than asking for the weather or telling it to fire off a text. At least Bixby is easier to just ignore when there’s not a dedicated button for it. I don’t foresee people using apps beyond those for music and fitness tracking on the Galaxy Watch Active very often. When you get notifications from messaging or email apps, you can choose from a list of canned responses or add your own custom reply. Failing that, you can scribble out a message letter by letter a la the Apple Watch if you really have no other option.
Samsung Pay is included for wireless payments, but only at terminals that support NFC; Samsung doesn’t include the MST technology found in flagship Galaxy phones that can mimic the magnetic stripe on credit / debit cards, allowing Samsung Pay to be used in many more situations.
It’s worth underlining that owners of Samsung phones get the best experience from the Galaxy Watch Active. It has versions of the company’s stock email and messaging apps on board, but no such luck for Gmail or Google’s Messages app. That’s disappointing for those using another Android device (or who dislike Samsung’s software). In either case, you’ll need the Galaxy Wearable app installed to get set up and adjust the watch’s settings.
Unfortunately, the watch’s main new health-related feature — blood pressure detection — wasn’t yet ready to test at press time. When it does launch, it’ll be in beta. Allow me to reiterate that you shouldn’t trust a consumer gadget to serve as your doctor or a miracle device that can sense all ailments. The Active’s sleep tracking seems fairly on point, automatic workout detection was surprisingly quick to recognize activity, and my daily steps lined up closely with an Apple Watch on my other wrist. So it hits the fundamentals and also has a water resistance rating of 5ATM, meaning even a deep swim won’t damage it.
Samsung’s Health app offers a ton of functionality; aside from collecting your workout totals and showcasing your progress, it can log your nutrition (food, water, and caffeine intake) if you’re willing to consistently input that data. Rounding out fitness, Samsung includes breathing / relaxation apps and a brand-new widget for tracking your weight. And if you’re still for too long, the Active will nudge you to do a set of torso twists rather than stand up.
I’ve been happy with the Galaxy Watch Active’s endurance so far despite the small 230mAh battery inside. Samsung claims it can go for 45 hours on a single charge, but that’s only a realistic number if you’re using GPS rarely (if at all) and keep the always-on display option disabled. Turning that on is a major hit to battery life. With default settings, I’ve been able to get through two work days before needing to charge. The Active can be juiced up when placed on the back of a Galaxy S10 through Samsung’s new PowerShare feature. Plopping it onto my Samsung wireless charging stand didn’t charge it, though, so you’ll need the Duo Dock if you want to go that route.
Very few iPhone owners are going to give much thought to buying a Galaxy Watch Active. For them, the Apple Watch is the objectively better choice. It’s got more apps, the software is nicer and more coherent, and its integration with iOS allows for richer notifications and easy one-tap actions when those notifications warrant a response. You’ll feel a lot more constrained using the Active with iOS than with Android.
Samsung’s real competition is Fitbit with its Versa and products like the Fossil Sport, which runs Wear OS and is on par with most of the Galaxy Watch Active’s features, albeit at a slightly higher price. At $200, the Galaxy Watch Active is a strong value that I’d probably consider before both of those, assuming you can go without the rotating bezel and won’t envy other smartwatches and their bigger screens. Using the Galaxy Watch Active might be less fun than Samsung’s other watches, but it’s still a good time overall.
Photography by Chris Welch / The Verge
0 notes
nicholerestrada · 6 years
Text
Apple Watch Series 4 is the most accessible watch yet
Steven Aquino is a freelance tech writer and iOS accessibility expert.
Every time I ponder the impact Apple Watch has had on my life, my mind always goes to Matthew Panzarino’s piece published prior to the device’s launch in 2015. In it, Panzarino writes about how using Apple Watch saves time; as a “satellite” to your iPhone, the Watch can discreetly deliver messages without you having to disengage from moments to attend to your phone.
In the three years I’ve worn an Apple Watch, I’ve found this to be true. Like anyone nowadays, my iPhone is the foremost computing device in my life, but the addition of the Watch has somewhat deadened the reflex to check my phone so often. What’s more, the advent of Apple Watch turned me into a regular watch-wearer again, period, be it analog or digital. I went without one for several years, instead relying on my cell phone to tell me the time.
To piggyback on Panzarino’s thesis that Apple Watch saves you time, from my perspective as a disabled person, Apple’s smartwatch makes receiving notifications and the like a more accessible experience. As someone with multiple disabilities, I find that Apple Watch not only promotes pro-social behavior; its glanceable nature also alleviates the friction of pulling my phone out of my pocket a thousand times an hour. For people with certain physical motor delays, the seemingly unremarkable act of even getting your phone can be quite an adventure. Apple Watch on my wrist eliminates that work, because all my iMessages and VIP emails are right there.
The fourth-generation Apple Watch, “Series 4” in Apple’s parlance, is the best, most accessible Apple Watch to date. The original value proposition for accessibility, to save on physical wear and tear, remains. Yet Series 4’s headlining features — the larger display, haptic-enabled Digital Crown and fall detection — all have enormous ramifications for accessibility. In my testing of a Series 4 model, a review unit provided to me by Apple, I have found it to be delightful to wear and use. This new version has made staying connected more efficient and accessible than ever before.
Big screen, small space
If there were but one banner feature of this year’s Apple Watch, it would indisputably be the bigger screen. I’ve been testing Series 4 for a few weeks and what I tweeted early on holds true: for accessibility, the Series 4’s larger display is today what the Retina display was to the iPhone 4 eight years ago. Which is to say, it is a highly significant development for the product; a milestone. If you are visually impaired, this should be as exciting as having a 6.5-inch iPhone. Again, the adage that bigger is better is entirely apropos — especially on such a small device as Apple Watch.
What makes Series 4’s larger screen so compelling in practice is just how expansive it is. As with the iPhone XS Max, the watch’s large display makes seeing content easier. As I wrote last month, once I saw the bigger model in the hands-on area following Apple’s presentation, my heart knew it was the size I wanted. The difference between my 42mm Series 3 and my 44mm Series 4 is stark. I’ve never complained about my previous watches being small, screen-wise, but after using the 44mm version for an extended time, the former feels downright minuscule by comparison. It’s funny how quickly and drastically one’s perception can change.
Series 4’s bigger display affects more than just text. Its bigger canvas allows for bigger icons and touch targets for user interface controls. The keypad for entering your passcode and the buttons for replying to iMessages are two standout examples. watchOS 5 has been updated in such a way that buttons have even more definition. They’re more pill-shaped to accommodate the curves of the new display; the Cancel/Pause buttons in the Timer app show this off well. The shape aids in tapping, but it also gives the buttons a visual boost that makes them easy to identify as actionable.
This is one area where watchOS excels over iOS, since Apple Watch’s relatively small display necessitates a more explicit design language. In other words, where iOS leans heavily on buttons that resemble ordinary text, watchOS sits at the polar end of the spectrum. A good rule of thumb for accessible design is that it’s generally better for designers to aim for concreteness with iconography and the like, rather than being cutesy and abstract because it’s en vogue and “looks cool” (the idea being that a visually impaired person can more easily distinguish something that looks like a button, as opposed to something that is technically a button but looks like plain text).
Apple has course-corrected a lot in the five years since the iOS 7 overhaul; I hope further refinement is addressed in the iOS 13 refresh that, as Axios’s Ina Fried first reported earlier this year, was pushed back until 2019.
Of Series 4’s improvements, the bigger screen is by far my favorite. Apple Watch still isn’t a device you want to interact with for more than a minute at a time, but the bigger display allows for a few extra moments of comfort. As someone with low vision, that little bit of extra time is nice because I can take in more important information; the bigger screen mitigates my concerns over excessive eye strain and fatigue.
The Infograph and Infograph Modular faces
As I wrote in the previous section, the Series 4’s larger display allowed Apple to redesign watchOS such that it would look right given the bigger space. Another way Apple has taken advantage of the bigger screen is by creating two all-new watch faces that are exclusive to the new hardware: Infograph and Infograph Modular. (There are other cool ones — Breathe, Fire & Water, Liquid Metal and Vapor — that are all available on older Apple Watches that run watchOS 5.)
It’s not hard to understand why Apple chose to showcase Infograph in its marketing images for Series 4; both it and Infograph Modular look fantastic with all the bright colors and the bold San Francisco font. From an accessibility standpoint, however, my experience has been that Infograph Modular is far more visually accessible than Infograph. While I appreciate the latter’s beauty (and bevy of complications), the functional downsides boil down to two things: contrast and telling time.
Contrast-wise, it’s disappointing that you can’t change the dial to any color other than white or black. White is better here, but it is difficult to read the minute and second markers because they’re in a fainter grayish-black hue. If you choose the black dial, contrast is worse because it blends into the black background of the watch’s OLED display. You can change the color of the minute and second markers, but unless they’re neon yellow or green, readability is compromised.
Which brings us to the major problem with Infograph: it’s really difficult to tell time. This ties into the contrast issue — there are no numerals, and the hands are low contrast, so you have to have the clock positions memorized in order to see what time it is. Marco Arment articulates the problem well, and I can attest the issue is only made worse if you are visually impaired as I am. It’s a shame, because Infograph is pretty and useful overall, but you have to be able to tell time; it makes absolutely no sense to have to add a digital time complication to what’s effectively an analog watch face just to do so. Perhaps Apple will add more customization options for Infograph in the future.
Infograph Modular, which I personally prefer, is not nearly as aesthetically pleasing as Infograph, but it’s far better functionally. Because it’s a digital face, the time is right there for you, and the colorful complications set against the black background are a triumph of high contrast. It is much easier on my eyes, and it’s the face I recommend to anyone interested in trying out Series 4’s new watch faces.
Lastly, a note about the information density of these new faces. Especially on Infograph, it’s plausible that all the complications, in all their color, present an issue for some visually impaired people. This is because there’s a lot of “clutter” on screen and it may be difficult for some to pinpoint, say, the current temperature. Similarly, all the color may look like one washed-out rainbow to some who may have trouble distinguishing colors. It’d be nice if Apple added an option for monochromatic complications with the new faces.
In my usage, neither has been an issue. I quite like how the colors boost contrast, particularly on Infograph Modular.
Haptics come to the crown
Given Apple’s push in recent years to integrate its so-called Taptic Engine technology — first introduced with the original Watch — across its product lines, it makes perfect sense that the Digital Crown gets it now. Haptics makes it better.
Before Apple Watch launched three years ago, I wrote a story in which I explained why haptic feedback (or “Force Touch,” as Apple coined it then) matters for accessibility. What I wrote then is just as relevant now: the addition of haptic feedback enhances the user experience, particularly for people with disabilities. The key factor is sensory input — as a user, you’re no longer simply watching a list go by. In my usage, the fact that I feel a “tick” as I’m scrolling through a list on the Watch in addition to seeing it move makes it more accessible.
The bi-modal sensory experience is helpful insofar as the secondary cue (the ticks) is another marker that I’m manipulating the device and something is happening. If I only rely on my poor eyesight, there’s a chance I could miss certain movements or animations, so the haptic feedback acts as a “backup,” so to speak. Likewise, I prefer my iPhone to ring and vibrate whenever a call comes in because I suffer from congenital hearing loss (due to my parents being deaf) and could conceivably miss important calls from loved ones or whomever. Thus, that my phone also vibrates while it’s ringing is another signal that someone is trying to reach me and I probably should answer.
Tim Cook made a point during the original Watch’s unveiling of positioning the Digital Crown as equally innovative and revolutionary as what the mouse was to the Mac in 1984 and what multi-touch was to the iPhone in 2007. I won’t argue his assertion here, but I will say the Series 4’s crown is the best version of the “dial,” as Cook described it, to date. That’s because of the haptic feedback: it gives the crown even more precision and tactility, making it a more compelling navigational tool.
Considering fall detection
Watching from the audience as Apple COO Jeff Williams announced Series 4’s new fall detection feature, I immediately knew it was going to be a big deal. It’s something you hope to never use, as Williams said on stage, but the fact that it exists at all is telling for a few reasons — the most important to me being accessibility.
I’ve long maintained accessibility, conceptually, isn’t limited to people with medically recognized disabilities. Accessibility can mean lots of different things, from mundane things like where you put the paper towel dispenser on the kitchen counter to more critical ones like building disabled parking spaces and wheelchair ramps for the general public. Accessibility also is applicable to the elderly who, in the case of fall detection, could benefit immensely from such a feature.
Instead of relying on a dedicated lifeline device, someone who’s even remotely interested in Apple Watch, and who’s also a fall risk, could look at Series 4 and decide the fall detection feature alone is worth the money. That’s exactly what happened to my girlfriend’s mother. She is an epileptic and is a high-risk individual for catastrophic falls. After seeing Ellen DeGeneres talk up the device on a recent episode of her show, she was gung-ho about Series 4 solely for fall detection. She’d considered a lifeline button prior, but after hearing how fall detection works, decided Apple Watch would be the better choice. As of this writing, she’s had her Apple Watch for a week, and can confirm the new software works as advertised.
Personally, my cerebral palsy makes it such that I can be unsteady on my feet at times and could potentially fall. Fortunately, I haven’t needed to test fall detection myself, but I trust the reports from my girlfriend’s mom and The Wall Street Journal’s Joanna Stern, who got a professional stunt woman’s approval.
Problematic packaging
Apple Watch Series 4 is pretty great all around, but there is a problem. One that has nothing to do with the product itself. How Apple has chosen to package Apple Watch Series 4 is bad.
Series 4’s unboxing experience is a regression from all previous models, in my opinion. The issue is Apple’s decision to pack everything “piecemeal” — the Watch case itself comes in an (admittedly cute) pouch that’s reminiscent of iPod Socks, while the band is in its own box. Not to mention the AC adapter and charging puck are located in their own compartment. I understand the operational logistics of changing the packaging this way, but for accessibility, it’s hardly efficient. In many ways, it’s chaotic. There are two reasons for this.
First, the discrete approach adds a lot in terms of cognitive load. While certainly not a dealbreaker for me, unboxing my review unit was jarring at first. Everything felt disjointed until I considered the logic behind doing it this way. But while I can manage to put everything together as if it were a jigsaw puzzle, many people with certain cognitive delays could have real trouble. They would first need to determine where everything is in the box before determining how to put it all together; this can be frustrating for many. Conversely, the advantage of the “all-in-one” approach of Series past (where the case and band were one entity) was that far less mental processing was needed to unbox the product. Aside from figuring out how the band works, the old setup was essentially a “grab and go” solution.
Second, the Series 4 packaging is more fiddly than before, quite literally. Instead of the Watch already being put together, now you have to fasten the band to the Watch in order to wear it. I acknowledge the built-in lesson for fastening and removing bands, but it can be inaccessible too. If you have visual and/or fine-motor impairments, you could spend several minutes trying to get your watch together so you can pair it with your iPhone. That time can be taxing, physically and emotionally, which in turn worsens the overall experience. Again, Apple’s previous packaging design alleviated much of this potential stress — whereas Series 4 exacerbates it.
I’ve long admired Apple’s product packaging for its elegance and simplicity, which is why alarm bells went off as I unboxed a few Series 4 models. As I said, this year’s design definitely feels regressive, and I hope Apple returns to its old ways come Series 5. In fact, it could stand to take notes from Microsoft, which has gone to great lengths to ensure its packaging is as accessible as possible.
The bottom line
Three years in, I can confidently say I could live without my Apple Watch. But I can also confidently say I wouldn’t want to. Apple Watch has made my life better, and that’s not even taking into account how it has raised my awareness of my overall health.
My gripes about the packaging and the Infograph face aside, Series 4 is an exceptional update. The larger display is worth the price of admission, even coming from my year-old Series 3. The haptic Digital Crown and fall detection are the proverbial icing on the cake. I believe the arrival of Series 4 is a seminal moment for the product, and it’s the best, most accessible Apple Watch Apple has made yet.
Read more: https://techcrunch.com/2018/10/21/apple-watch-series-4-is-the-most-accessible-watch-yet/
0 notes
un-enfant-immature · 6 years
Text
DARPA dedicates $75 million (to start) to reinventing chip tech
The Defense Department’s research arm, DARPA, is throwing an event around its “Electronics Resurgence Initiative,” an effort to leapfrog existing chip tech by funding powerful but unproven new ideas percolating in the industry. It plans to spend up to $1.5 billion on this over the coming years, of which about $75 million was earmarked today for a handful of new partners.
The ERI was announced last year in relatively broad terms, and since then it has solicited proposals from universities and research labs all over the country, arriving at a handful that it has elected to fund.
The list of partners and participants is quite long: think along the lines of MIT, Stanford, Princeton, Yale, the UCs, IBM, Intel, Qualcomm, National Labs, and so on. Big hitters. Each institution is generally associated with one of six sub-programs, each (naturally) equipped with their own acronym:
Software-defined Hardware (SDH) — Computing is often done on general-purpose processors, but specialized ones can get the job done faster. Problem is these “application specific integrated circuits,” or ASICs, are expensive and time-consuming to create. SDH is about making “hardware and software that can be reconfigured in real-time based on the data being processed.”
Domain-specific System on Chip (DSSoC) — This is related to SDH, but is about finding the right balance between custom chips, for instance for image recognition or message decryption, and general-purpose ones. DSSoC aims to create a “single programmable framework” that would let developers easily mix and match parts like ASICs, CPUs, and GPUs (a toy sketch of this dispatch idea follows the list).
Intelligent Design of Electronic Assets (IDEA) — On a related note, creating such a chip’s actual physical wiring layout is an incredibly complex and specialized process. IDEA is looking to shorten the time it takes to design a chip from a year to a day, “to usher in an era of the 24-hour design cycle for DoD hardware systems.” Ideally no human would be necessary, though doubtless specialists would vet the resulting designs.
Posh Open Source Hardware (POSH) — This self-referential acronym refers to a program under which specialized SoCs like the ones these programs are pursuing would be developed under open source licenses. Licensing can be a serious obstacle to creating the best system possible — one chip may use a proprietary system that can’t exist in concert with another chip’s proprietary system — so to enable reuse and easy distribution, the program will look into creating and testing a base set of designs that carry no such restrictions.
3-Dimensional Monolithic System-on-a-chip (3DSoC) — The standard model of having processors and chips connected to a central memory and execution system can lead to serious bottlenecks. So 3DSoC aims to combine everything into stacks (hence the 3D part) and “integrate logic, memory and input-output (I/O) elements in ways that dramatically shorten — more than 50-fold — computation times while using less power.” The 50-fold number is, I’m guessing, largely aspirational.
Foundations Required for Novel Compute (FRANC) — That “standard model” of a processor plus short-term and long-term memory is known as a von Neumann architecture, after one of the founders of computing technology and theory, and it is how nearly all computing is done today. But DARPA feels it’s time to move past this and create “novel compute topologies” with “new materials and integration schemes to process data in ways that eliminate or minimize data movement.” It’s rather sci-fi right now, as you can tell, but if we don’t try to escape von Neumann, he will dominate us forever.
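To make the DSSoC idea a bit more concrete, here is a minimal, purely illustrative sketch of what a “single programmable framework” over heterogeneous silicon could look like at the software level. Everything in it is invented for this example (the Backend and Dispatcher names, the task domains, and the cost numbers); it is not DARPA code or any vendor’s API, just a toy model of routing each task to the cheapest compute block that supports its domain.

```python
# Toy sketch of a DSSoC-style "single programmable framework".
# All names and numbers here are hypothetical, invented for illustration.

from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class Backend:
    name: str
    domains: Set[str]    # task domains this compute block can execute
    cost: float          # hypothetical cost per operation; lower is better

class Dispatcher:
    """Routes each task to the cheapest backend that supports its domain."""

    def __init__(self, backends: List[Backend]):
        self.backends = backends

    def run(self, domain: str, task: Callable, *args):
        candidates = [b for b in self.backends if domain in b.domains]
        if not candidates:
            raise ValueError(f"no backend supports domain {domain!r}")
        best = min(candidates, key=lambda b: b.cost)
        print(f"dispatching {domain!r} task to {best.name}")
        return task(*args)

# A general-purpose CPU runs anything, slowly; the "ASIC" runs only its
# specialty, cheaply -- the trade-off the DSSoC description is about.
cpu  = Backend("general-purpose CPU", {"image", "crypto", "general"}, 10.0)
gpu  = Backend("GPU",                 {"image", "general"},            3.0)
asic = Backend("crypto ASIC",         {"crypto"},                      0.5)

soc = Dispatcher([cpu, gpu, asic])
soc.run("crypto", lambda x: x ^ 0xFF, 42)   # lands on the crypto ASIC
soc.run("image",  lambda x: x * 2, 21)      # lands on the GPU
```

A real framework would make this choice in a compiler or runtime scheduler using measured hardware performance models rather than fixed numbers, but the shape of the problem is the same: one programming abstraction spread across several dissimilar execution engines.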
These are all extremely ambitious ideas, as you can see, but don’t think of this as DARPA contracting these researchers to create something useful right away. The Defense Department is a huge supporter of basic science; I can’t tell you how many papers I read where the Air Force, DARPA, or some other quasi-military entity has provided the funding. So think of it as trying to spur American innovation in important areas that also may happen to have military significance down the line.
A DARPA representative explained that $75 million is set aside for funding various projects under these headings, though the specifics are known only to the participants at this point. That’s the money just for FY18, and presumably more will be added according to the merits and requirements of the various projects. That all comes out of the greater $1.5 billion budget for the ERI overall.
The ERI summit is underway right now, with participants and DARPA reps sharing information, comparing notes, and setting expectations. The summit will no doubt repeat next year when a bit more work has been done.
0 notes