Dataset schema (column · dtype · min .. max):

  Unnamed: 0          int64            0 .. 350k
  level_0             int64            0 .. 351k
  ApplicationNumber   int64        9.75M .. 96.1M
  ArtUnit             int64         1.6k .. 3.99k
  Abstract            string length    1 .. 8.37k
  Claims              string length    3 .. 292k
  abstract-claims     string length   68 .. 293k
  TechCenter          int64         1.6k .. 3.9k
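The schema above can be mirrored in pandas to sanity-check dtypes and the string-length statistics shown for the text columns. This is a minimal sketch on illustrative sample rows, not the real ~351k-row dataset, and the abbreviated text values are placeholders:

```python
import pandas as pd

# Illustrative sample rows mirroring the schema above (not the real data).
df = pd.DataFrame({
    "ApplicationNumber": [16030891, 15521575, 15549201],
    "ArtUnit": [2616, 2632, 2642],
    "Abstract": ["A method and apparatus ...", "The present disclosure ...", "A method and system ..."],
    "Claims": ["1. An apparatus comprising ...", "1. A method comprising ...", "1. A method of creating ..."],
    "TechCenter": [2600, 2600, 2600],
})

# The "abstract-claims" column is simply Abstract and Claims concatenated,
# which is why it repeats both text columns verbatim in the dump.
df["abstract-claims"] = df["Abstract"] + df["Claims"]

# String-length stats, analogous to the "string length min .. max" rows above.
lengths = df["abstract-claims"].str.len()
print(df.dtypes["ApplicationNumber"])  # int64
print(lengths.min(), lengths.max())
```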
Row 10,900 (level_0: 10,900)
ApplicationNumber: 16,030,891
ArtUnit: 2,616
Abstract:
A method and apparatus for displaying data at an incident scene is provided herein. During operation an officer will be assigned to a particular incident having an identified “incident type”. Individuals at the incident scene are identified via facial recognition and criminal histories for all recognized individuals are obtained. The incident type will be used to determine criminal histories relevant to the incident type. Only the criminal histories of those individuals at the incident scene that are relevant to the incident type will be displayed (ideally displayed proximate to the identified individuals).
Claims:
1. An apparatus comprising: a network interface configured to receive an image of an individual from a device; a database comprising facial information and criminal information; identity analysis circuitry configured to use the facial information for performing facial recognition on the image of the individual to determine an identity of the individual and their past criminal activity; logic circuitry configured to receive a current incident type assigned to a public-safety officer and determine those incident types relevant to the current incident type, and also configured to determine if their past criminal activity is relevant to the current incident type, and determine augmented-reality data based on their past criminal activity being relevant to the current incident type; and wherein the network interface is also configured to transmit the augmented-reality data to the device.
2. The apparatus of claim 1 wherein the network interface comprises a wired network interface.
3. The apparatus of claim 1 wherein the network interface comprises a wireless network interface.
4. The apparatus of claim 1 wherein the augmented-reality data comprises text, a geometric shape, a color, and/or a shading.
5. The apparatus of claim 1 wherein the augmented-reality data instructs the device to place a virtual object near the individual if they have a criminal history that contains an incident type relevant to the current incident type.
6. The apparatus of claim 1 wherein the logic circuitry determines those incident types relevant to the current incident type by mapping the current incident to relevant incident types.
7. A method comprising the steps of: receiving an image of an individual from a device; performing facial recognition on the image of the individual to identify the individual; determining past criminal activity for the individual; determining a current incident type assigned to a public-safety officer; determining relevant criminal histories related to the current incident type; determining if any past criminal activity is relevant to the current incident type; determining augmented-reality data based on any past criminal activity being relevant to the current incident type; and transmitting the augmented-reality data to the device, wherein the augmented-reality data places a virtual object/shading near the individual only if they have past criminal activity relevant to the current incident type.
8. The method of claim 7 wherein the augmented-reality data comprises a geometric shape, text, a color, and/or a shading.
9. The method of claim 7 wherein the augmented-reality data instructs the device to place a virtual object over or near the individual if the individual has past criminal activity that matches relevant criminal histories.
10. A method comprising the steps of: receiving an image of an individual from a device; performing facial recognition on the image of the individual to identify the individual; determining past criminal activity for the individual; determining a current incident type assigned to a public-safety officer; determining relevant criminal histories related to the current incident type; determining if any past criminal activity is relevant to the current incident type; determining augmented-reality data comprising text, shapes, colors, or shading based on any past criminal activity being relevant to the current incident type; and transmitting the augmented-reality data to the device, wherein the augmented-reality data places a virtual object/shading near the individual if they have past criminal activity that is relevant to the current incident type; otherwise no virtual object/shading is placed next to the individual.
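The claims above describe a filtering pipeline: recognize individuals at the scene, pull their criminal histories, and surface AR annotations only for histories relevant to the officer's current incident type (claims 5-7 and 10). A minimal sketch of that relevance step follows; the mapping table, names, and annotation fields are hypothetical, not taken from the patent:

```python
# Hypothetical mapping of a current incident type to relevant history types
# (claim 6: "mapping the current incident to relevant incident types").
RELEVANT_TYPES = {
    "burglary": {"burglary", "theft", "trespassing"},
    "assault": {"assault", "battery", "domestic disturbance"},
}

def augmented_reality_data(current_incident, recognized):
    """Return AR annotations only for individuals whose past criminal
    activity is relevant to the current incident type (claims 5 and 7)."""
    relevant = RELEVANT_TYPES.get(current_incident, set())
    annotations = []
    for person, history in recognized.items():
        matches = set(history) & relevant
        if matches:  # otherwise no virtual object/shading is placed (claim 10)
            annotations.append({"individual": person,
                                "virtual_object": "highlight",
                                "relevant_history": sorted(matches)})
    return annotations

result = augmented_reality_data(
    "burglary",
    {"person_a": ["theft", "speeding"], "person_b": ["assault"]},
)
# Only person_a's history intersects the burglary-relevant types.
```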
TechCenter: 2,600
Row 10,901 (level_0: 10,901)
ApplicationNumber: 15,521,575
ArtUnit: 2,632
Abstract:
The present disclosure describes devices, systems, and methods for allocating bandwidth among communication links in a telecommunications system. Some aspects can involve identifying multiple transmission modes used to transmit downlink signals via remote units of a telecommunications system to groups of terminal devices. Each group of terminal devices may receive downlink signals using a respective transmission mode. Respective weights can be assigned to the groups of terminal devices based on the transmission modes. The downlink signals, which are provided to each remote unit associated with each group of terminal devices, can be configured using a respective signal power that is associated with a respective weight for the group of terminal devices associated with the respective remote unit.
Claims:
1. A method comprising: identifying multiple transmission modes used to transmit downlink signals via remote units of a telecommunications system to groups of terminal devices, wherein each group of terminal devices receives downlink signals using a respective transmission mode; assigning respective weights to the groups of terminal devices based on the transmission modes; and configuring the downlink signals provided to each remote unit associated with each group of terminal devices using a respective signal power that is associated with a respective weight for the group of terminal devices associated with the respective remote unit.
2. The method of claim 1, wherein configuring the downlink signals includes configuring the downlink signals by a processing engine in a head-end unit in a distributed antenna system.
3. The method of claim 1, wherein configuring the downlink signals includes configuring the downlink signals by a processing engine in a base station communicatively coupled to a distributed antenna system.
4. The method of claim 1, wherein configuring the downlink signals comprises balancing the signal power allocated for downlink traffic targeted to terminal devices in a first group and the signal power allocated for downlink traffic targeted to terminal devices in one or more other groups.
5. The method of claim 1, further comprising limiting assignment of frequency resources for transmitting downlink signals to individual groups of terminal devices in a given time slot.
6. The method of claim 1, further comprising adjusting the signal power used to transmit different downlink signals such that a low-loaded remote unit transmits downlink signals using an output power that is below a threshold power.
7. The method of claim 1, wherein the downlink signals comprise control information transmitted over one or more control channels.
8. The method of claim 1, wherein the transmission modes include transmission modes 1-6 of the LTE standard.
9. A telecommunications system comprising: a plurality of remote units configured to transmit downlink signals using multiple transmission modes to terminal devices; and a processing engine configured to: identify groups of terminal devices, wherein each group of terminal devices receives downlink signals using a respective transmission mode; assign respective weights to the groups of terminal devices based on the transmission modes; and configure the downlink signals provided to each remote unit associated with each group of terminal devices using a respective signal power that is associated with a respective weight for the group of terminal devices associated with the respective remote unit.
10. The system of claim 9, wherein the processing engine is further configured to: balance the signal power allocated for downlink traffic targeted to terminal devices in a first group; and balance the signal power allocated for downlink traffic targeted to terminal devices in one or more other groups.
11. The system of claim 9, wherein the processing engine is further configured to limit assignment of frequency resources for transmitting downlink signals to individual groups of terminal devices in a given time slot.
12. The system of claim 9, wherein the processing engine is further configured to adjust the signal power used to transmit different downlink signals such that a low-loaded remote unit transmits downlink signals using an output power that is below a threshold power.
13. The system of claim 9, wherein the telecommunications system comprises a unit configured to transmit downlink signals comprising control information over one or more control channels.
14. The system of claim 9, wherein the processing engine is further configured to: identify sets of terminal devices within groups of terminal devices, wherein each set of terminal devices receives downlink signals based on a respective pre-coding matrix indicator; assign respective weights to the sets of terminal devices based on the pre-coding matrix indicator; and configure the downlink signals provided to each remote unit associated with each set of terminal devices using a respective signal power that is associated with a respective weight for the set of terminal devices associated with the respective remote unit.
15. A telecommunications system comprising: a plurality of remote units for transmitting downlink signals using a transmission mode to terminal devices; and a head-end unit configured to be communicatively coupled to a base station, receive downlink signals from the base station intended for terminal devices, and distribute the downlink signals to the remote units associated with the terminal devices, wherein the base station comprises a processing device configured to: identify groups of terminal devices, wherein each group of terminal devices receives downlink signals transmitted by the base station using a respective transmission mode, assign respective weights to the groups of terminal devices based on the transmission modes, and configure the downlink signals transmitted by the base station to the groups of terminal devices using a respective signal power that is associated with a respective weight for the groups of terminal devices; and a splitter unit in a signal path between the head-end unit and the remote units, the splitter unit configured to receive downlink signals intended for the terminal devices and to transmit modified downlink signals based on the weight of the associated group of terminal devices on one or more output ports communicatively coupled to the remote units associated with the terminal devices.
16. The system of claim 15, wherein the splitter unit is configured to combine the downlink signals intended for groups of terminal devices associated with a respective weight.
17. The system of claim 15, wherein the splitter unit is configured to divide the downlink signals intended for groups of terminal devices associated with a respective weight across multiple output ports of the splitter unit.
18. The system of claim 15, wherein the downlink signal is transmitted over a control channel and the splitter unit is configured to divide the signal power for a control signal over the control channel across multiple outputs of the splitter unit.
19. The system of claim 15, wherein the processing device is further configured to compensate for the effect of the splitter unit on signals transmitted over a control channel by adjusting a gain of the control channel.
20. The system of claim 15, wherein the processing device is further configured to provide dynamic control of gain assignment on a slot-by-slot basis for time slots used by the distributed antenna system.
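Claim 1 of this application assigns each group of terminal devices a weight based on its transmission mode and scales the group's downlink signal power by that weight. A minimal sketch of that allocation step follows; the mode-to-weight values are hypothetical, since the claims name the modes (1-6 of the LTE standard, per claim 8) but not any specific weighting policy:

```python
# Hypothetical weights per LTE transmission mode; the actual weighting
# policy is not specified in the claims.
MODE_WEIGHTS = {1: 1.0, 2: 1.2, 3: 1.5, 4: 2.0}

def allocate_powers(group_modes, total_power):
    """Split total downlink power across groups of terminal devices in
    proportion to the weight of each group's transmission mode."""
    weights = {g: MODE_WEIGHTS[mode] for g, mode in group_modes.items()}
    total_weight = sum(weights.values())
    return {g: total_power * w / total_weight for g, w in weights.items()}

powers = allocate_powers({"group_a": 1, "group_b": 4}, total_power=30.0)
# group_b (mode 4, weight 2.0) receives twice the power of group_a (weight 1.0).
```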
TechCenter: 2,600
Row 10,902 (level_0: 10,902)
ApplicationNumber: 15,549,201
ArtUnit: 2,642
Abstract:
A method and system for creating a constellation of electronic devices for providing optical or radio-frequency operations relating to earth surface data collection applications on a predetermined geographical area. On each of a plurality of (commercial) airplanes at least one electronic device from the constellation is provided, and during its flight each airplane has a flight path over at least a portion of the geographical area. Each electronic device is configured for the operations during the flight with an earth coverage range for the operations determined by an individual airplane coverage range of a portion of the earth surface as provided by the associated airplane. One or more electronic devices are activated for the operations when the individual airplane coverage of the one or more airplanes associated with the one or more electronic devices is within the geographical area.
Claims:
1. A method of creating a constellation of electronic devices for providing optical or radio-frequency operations relating to earth surface data collection applications on a predetermined geographical area, comprising: providing on each of a plurality of airplanes at least one electronic device from the constellation, the at least one electronic device being a unit attached to the associated airplane and comprising a data collection sensor, during its flight each airplane having a flight path over at least a portion of said predetermined geographical area, each electronic device being configured for said operations during the flight with an earth coverage range for said operations determined by an individual airplane coverage range of a portion of the earth surface as provided by the associated airplane; and activating one or more electronic devices for said operations when the individual airplane coverage of the one or more airplanes associated with the one or more electronic devices is within the predetermined geographical area, wherein the data collection sensor comprises a passive or an active data collection sensor arranged to detect signals from the earth surface during operation.
2. The method according to claim 1, wherein the airplane is a commercial passenger airplane or a commercial cargo airplane operating in accordance with its flight path and schedule.
3. The method according to claim 1, wherein the optical or radio-frequency operations relating to earth surface data collection applications comprise earth observation applications and/or automated identification system (AIS) applications.
4. The method according to claim 1, wherein the activation of each electronic device of the constellation is controlled by deriving for each respective electronic device its position in said predetermined geographical area in such a way that the earth coverage ranges for said operations of the activated electronic devices substantially cover the predetermined geographical area for a predetermined duration of time.
5. The method according to claim 1, wherein the method further comprises establishing a communications link between the electronic device and a ground station connected to a network backbone within the earth coverage range of the electronic device.
6. The method according to claim 1, further comprising establishing an inter-constellation communications link between one electronic device onboard one airplane from the plurality of airplanes and another electronic device onboard another airplane of said plurality of airplanes.
7. The method according to claim 6, wherein said inter-constellation communications link is an inter-plane communications link in which the one and other airplanes are within the line-of-sight of each other, such as an RF or optical communication link, and/or a satellite communications link in case there is no line of sight.
8. The method according to claim 1, wherein the method further comprises collection of data during said operations of the electronic device when in flight, and uploading said collected data to a predetermined network location via an earth-based communication link provided after landing of the airplane.
9. A system of a constellation of electronic devices for providing optical or radio-frequency operations relating to earth surface data collection applications on a predetermined geographical area of at least a portion of the earth surface, each electronic device being arranged for providing optical or radio-frequency operations relating to earth surface data collection applications on a predetermined geographical area of at least a portion of the earth surface, the electronic device being a unit attachable to an associated airplane and comprising a data collection sensor, each electronic device further comprising a radome assembly with a radome mounted to a radome base, the radome base being attachable to the associated airplane, and the data collection sensor comprising a passive or an active data collection sensor arranged to detect signals from the earth surface during operation.
10. The system according to claim 9, wherein the data collection sensor comprises one or more of: a wide field of view camera, a narrow field of view camera, an infrared camera, a radar imaging unit, a synthetic aperture radar unit, a lidar unit, an automated identification system (AIS) unit.
11. The system according to claim 9, the system being further arranged to execute the following method: providing on each of a plurality of airplanes at least one electronic device from the constellation, the at least one electronic device being a unit attached to the associated airplane and comprising a data collection sensor, during its flight each airplane having a flight path over at least a portion of said predetermined geographical area, each electronic device being configured for said operations during the flight with an earth coverage range for said operations determined by an individual airplane coverage range of a portion of the earth surface as provided by the associated airplane; and activating one or more electronic devices for said operations when the individual airplane coverage of the one or more airplanes associated with the one or more electronic devices is within the predetermined geographical area, wherein the data collection sensor comprises a passive or an active data collection sensor arranged to detect signals from the earth surface during operation.
12. The system according to claim 11, further comprising a control station, and wherein the electronic device comprises a communications unit arranged to provide exchange of control data with the control station.
13. The system according to claim 11, further comprising a ground station, and wherein the electronic device comprises a communications unit arranged to provide exchange of data with the ground station.
14. The system according to claim 12, wherein the communications unit of the electronic device is arranged to provide data communications with a further electronic device.
15. The system according to claim 12, wherein the control station is arranged to control activation of one or more electronic devices for said operations when the individual airplane coverage of the one or more airplanes associated with the one or more electronic devices is within the predetermined geographical area.
16. The system according to claim 15, wherein the control station is further arranged for executing a handover of the operations performed by one electronic device to a further electronic device when the earth coverage range of said one electronic device is moving out of the predetermined geographical area and the earth coverage range of said further electronic device is within the predetermined geographical area.
17. The system according to claim 15, wherein the control station is further arranged to control the activation of one or more of the electronic devices based on optimizing or maximizing coverage of the predetermined geographical area for a maximized time, based on the earth coverage range of individual electronic devices.
18. The system according to claim 17, wherein the control station is further arranged to estimate the coverage of the predetermined geographical area from a predetermined timing schedule of flight and flight path for each of the associated airplanes.
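These claims activate an airborne device only while its airplane's coverage footprint overlaps the target area, and hand operations over to another device as footprints move (claims 1, 15, and 16). A minimal sketch of the activation test follows, modeling the area and footprints as simple intervals along a 1-D flight corridor; the positions and ranges are purely illustrative, not from the patent:

```python
# Hypothetical 1-D model: the target area and each airplane's coverage
# footprint are (start, end) intervals along the flight corridor.
TARGET_AREA = (100.0, 200.0)

def is_active(footprint, area=TARGET_AREA):
    """Activate the device when its airplane's coverage footprint overlaps
    the predetermined geographical area (claim 1's activation condition)."""
    return footprint[0] < area[1] and footprint[1] > area[0]

def active_devices(footprints):
    """Select the devices whose footprints currently cover the area; a
    handover (claim 16) occurs as this set changes while airplanes move."""
    return [name for name, fp in footprints.items() if is_active(fp)]

now = active_devices({"plane_1": (80.0, 130.0),    # overlaps -> active
                      "plane_2": (210.0, 260.0),   # outside  -> inactive
                      "plane_3": (150.0, 220.0)})  # overlaps -> active
```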
A method and system for creating a constellation of electronic devices for providing optical or radio-frequency operations relating to earth surface data collection applications on a predetermined geographical area. On each of a plurality of (commercial) airplanes at least one electronic device from the constellation is provided, and during its flight each airplane has a flight path over at least a portion of the geographical area. Each electronic device is configured for the operations during the flight with an earth coverage range for the operations determined by an individual airplane coverage range of a portion of the earth surface as provided by the associated airplane. One or more electronic devices are activated for the operations when the individual airplane coverage of the one or more airplanes associated with the one or more electronic devices is within the geographical area.1. A method of creating a constellation of electronic devices for providing optical or radio-frequency operations relating to earth surface data collection applications on a predetermined geographical area, comprising: providing on each of a plurality of airplanes at least one electronic device from the constellation, the at least one electronic device being a unit attached to the associated airplane and comprising a data collection sensor, during its flight each airplane having a flight path over at least a portion of said predetermined geographical area, each electronic device being configured for said operations during the flight with an earth coverage range for said operations determined by an individual airplane coverage range of a portion of the earth surface as provided by the associated airplane; activating one or more electronic devices for said operations when the individual airplane coverage of the one or more airplanes associated with the one or more electronic devices is within the predetermined geographical area, wherein the data collection sensor comprises a passive or 
an active data collection sensor arranged to detect signals from the earth surface during operation. 2. The method according to claim 1, wherein the airplane is a commercial passenger airplane or a commercial cargo airplane operating in accordance with its flight path and schedule. 3. The method according to claim 1, wherein the optical or radio-frequency operations relating to earth surface data collection applications comprise earth observation applications and/or automated identification system (AIS) applications. 4. The method according to claim 1, wherein the activation of each electronic device of the constellation is controlled by deriving for each respective electronic device its position in said predetermined geographical area in such a way that the earth coverage ranges for said operations of the activated electronic devices substantially cover the predetermined geographical area for a predetermined duration of time. 5. The method according to claim 1, wherein the method further comprises establishing a communications link between the electronic device and a ground station connected to a network backbone within the earth coverage range of the electronic device. 6. The method according to claim 1, further comprising establishing an inter-constellation communications link between one electronic device onboard one airplane from the plurality of airplanes and another electronic device onboard another airplane of said plurality of airplanes. 7. The method according to claim 6, wherein said inter-constellation communications link is an inter-plane communications link in which the one and other airplanes are within the line-of-sight of each other, such as an RF or optical communication link, and/or a satellite communications link in case there is no line of sight. 8. 
The method according to claim 1, wherein the method further comprises collection of data during said operations of the electronic device when in flight, and uploading said collected data to a predetermined network location via an earth based communication link provided after landing of the airplane. 9. A system of a constellation of electronic devices for providing optical or radio-frequency operations relating to earth surface data collection applications on a predetermined geographical area of at least a portion of the earth surface, each electronic device being arranged for providing optical or radio-frequency operations relating to earth surface data collection applications on a predetermined geographical area of at least a portion of the earth surface, the electronic device being a unit attachable to an associated airplane and comprising a data collection sensor, each electronic device further comprising a radome assembly with a radome mounted to a radome base, the radome base being attachable to the associated airplane, and the data collection sensor comprising a passive or an active data collection sensor arranged to detect signals from the earth surface during operation. 10. The system according to claim 9, wherein the data collection sensor comprises one or more of: a wide field of view camera, a narrow field of view camera, an infrared camera, a radar imaging unit, a synthetic aperture radar unit, a lidar unit, an automated identification system (AIS) unit. 11. 
The system according to claim 9, the system being further arranged to execute the following method: providing on each of a plurality of airplanes at least one electronic device from the constellation, the at least one electronic device being a unit attached to the associated airplane and comprising a data collection sensor, during its flight each airplane having a flight path over at least a portion of said predetermined geographical area, each electronic device being configured for said operations during the flight with an earth coverage range for said operations determined by an individual airplane coverage range of a portion of the earth surface as provided by the associated airplane; and activating one or more electronic devices for said operations when the individual airplane coverage of the one or more airplanes associated with the one or more electronic devices is within the predetermined geographical area, wherein the data collection sensor comprises a passive or an active data collection sensor arranged to detect signals from the earth surface during operation. 12. The system according to claim 11, further comprising a control station, and wherein the electronic device comprises a communications unit arranged to provide exchange of control data with the control station. 13. The system according to claim 11, further comprising a ground station, and wherein the electronic device comprises a communications unit arranged to provide exchange of data with the ground station. 14. The system according to claim 12, wherein the communications unit of the electronic device is arranged to provide data communications with a further electronic device. 15. The system according to claim 12, wherein the control station is arranged to control activation of one or more electronic devices for said operations when the individual airplane coverage of the one or more airplanes associated with the one or more electronic devices is within the predetermined geographical area. 16. 
The system according to claim 15, wherein the control station is further arranged for executing a handover of the operations performed by one electronic device to a further electronic device when the earth coverage range of said one electronic device is moving out of the predetermined geographical area and the earth coverage range of said further electronic device is within the predetermined geographical area. 17. The system according to claim 15, wherein the control station is further arranged to control the activation of one or more of the electronic devices based on optimizing or maximizing coverage of the predetermined geographical area for a maximized time, based on the earth coverage range of individual electronic devices. 18. The system according to claim 17, wherein the control station is further arranged to estimate the coverage of the predetermined geographical area from a predetermined timing schedule of flight and flight path for each of the associated airplanes.
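The activation rule that runs through claims 1, 4, and 15 (switch an onboard device on only while its airplane's coverage footprint overlaps the predetermined geographical area) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patented method: both the airplane's coverage range and the target area are modeled as circles on a sphere, and the names `haversine_km` and `should_activate` are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def should_activate(plane_pos, sensor_range_km, area_center, area_radius_km):
    # Activate the onboard sensor when the airplane's coverage circle
    # overlaps the predetermined geographical area (also a circle here):
    # the two circles overlap when the centre distance is at most the
    # sum of their radii.
    d = haversine_km(plane_pos[0], plane_pos[1], area_center[0], area_center[1])
    return d <= sensor_range_km + area_radius_km
```

A control station (claim 15) could evaluate such a predicate against each airplane's scheduled flight path to plan activations, and handovers, in advance.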
2,600
10,903
10,903
16,066,334
2,649
During a medical intervention such as an angiography, the X-ray examination equipment (such as that mounted on a C-arm) produces a very large number of imaging frames of the intervention as it progresses. This information contains frame sequences which can be effectively used to improve a medical report of the intervention. However, the sequence will also contain subsequences with similar clinical information, and these frames may be considered redundant and not useful for inclusion in the medical report. The aspects detailed herein enable a selection of non-redundant sequences and/or frames based on contextual information obtained from the sequence of images and/or other medical equipment during an intervention. In this way, the redundancy inherent in the original frame sequence can be removed, leaving a set of prepared candidate sequences for insertion into a multimedia or documentary medical report.
1. An apparatus for computer-aided medical report provision, comprising: a processing unit; wherein the processing unit is configured to receive a sequence of frames representing a region of interest of a patient, obtained from a C-arm imaging system to generate contextual information derived from an input from the C-arm imaging system; and orientation information of the C-arm imaging system, wherein the contextual information comprises orientation information obtained from (i) azimuth and elevation angles of the C-arm imaging system, or (ii) obtained from an image processing algorithm applied to image content of the input sequence of images from the C-arm; wherein the frame clustering condition comprises an orientation condition; wherein the frames of the sequence of frames are indexed to the contextual information, to generate at least one representative subset of the frames of the sequence of frames, wherein the frames of the at least one representative subset are selected from the sequence of frames by comparing the contextual information of frames of the sequence of frames to at least one frame clustering condition, to select at least one further subset of the frames of the at least one representative subset using a selection parameter defined for the at least one representative subset and to output a multimedia or documentary report comprising the further subset of frames. 2. (canceled) 3. The apparatus according to claim 1, wherein the processor is further configured to: receive an event signal comprising information about an event linked to the stage of a medical procedure, to generate a procedure status indication using the event signal, wherein the contextual information further comprises the procedure status indication, and wherein frames of the at least one representative subset are selected from the sequence of frames based on the procedure status indication, and the at least one frame clustering condition. 4. 
The apparatus according to claim 3, wherein the event signal is a balloon inflation state signal. 5. The apparatus according to claim 1, wherein the processor is further configured to: receive a measurement device activity parameter from output signals of a patient monitoring device, wherein the contextual information further comprises the measurement device activity parameter, wherein the frames of the at least one representative subset are selected based on the presence of the measurement device activity parameter, and wherein the multimedia or documentary report comprises a measurement from the output signals of a patient monitoring device displayed in proximity to the further subset of frames. 6. The apparatus according to claim 5, wherein the processor is further configured to: classify frames of the sequence of frames by identifying a specific artery system represented in the frames, thereby providing frames of the sequence of frames with an artery system classification, wherein the contextual information further comprises the artery system classification, wherein the frame clustering condition comprises an artery system classification parameter. 7. The apparatus according to claim 1, wherein the processor is further configured to: compute a contrast agent quality metric for a plurality of frames of the at least one representative subset of the frames, wherein the contrast agent quality metric is a measure of how much contrast agent is visible in a frame, and wherein the selection parameter is an injection quality parameter defining a quality of a distribution of contrast agent in a lumen, and the frames of the at least one further subset of the frames are selected based on a comparison of the contrast agent quality metric and the injection quality parameter. 8. 
The apparatus according to claim 7, wherein the processor is further configured to: compute a balloon inflation progress metric for a plurality of frames of the at least one representative subset of the frames, wherein the selection parameter is a balloon inflation extent metric, and wherein the frames of the at least one further subset of the frames are selected based on a comparison of the balloon inflation progress metric and the balloon inflation extent metric. 9. A method for computer-aided medical report provision using a C-arm imaging system, comprising the following steps: a) receiving a sequence of frames, obtained from a C-arm imaging system, representing a region of interest of a patient; b) generating contextual information comprising orientation information obtained from (i) azimuth and elevation angles of the C-arm imaging system, or (ii) obtained from an image processing algorithm applied to image content of the input sequence of images from the C-arm; wherein the frames of the sequence of frames are indexed to the contextual information; c) generating at least one representative subset of the frames of the sequence of frames, wherein the frames of the at least one representative subset are selected from the sequence of frames by comparing the contextual information of frames of the sequence of frames to at least one frame clustering condition, wherein the frame clustering condition comprises an orientation condition; d) selecting at least one further subset of the frames of the at least one representative subset using a selection parameter defined for the at least one representative subset; and e) outputting a multimedia or documentary report comprising the further subset of frames. 10. 
The method according to claim 9, further comprising the step a1): a1) receiving orientation information of the item of medical imaging equipment; wherein in step b), the contextual information comprises orientation information; and wherein in step c), the frame clustering condition comprises an orientation condition. 11. (canceled) 12. The method according to claim 9, further comprising in step c): c1) computing a contrast agent quality metric for a plurality of frames of the at least one representative subset of the frames; wherein in step d), the selection parameter is an injection quality parameter, and the frames of the at least one further subset of the frames are selected based on a comparison of the contrast agent quality metric and the injection quality parameter. 13. A system configured for medical reporting, comprising: a medical imaging system, an apparatus according to claim 1, and a display arrangement; wherein the C-arm imaging system is configured to generate a sequence of frames; wherein the apparatus is communicatively coupled to the C-arm imaging system and the display arrangement, and wherein, in operation, the apparatus receives the sequence of frames and azimuth and elevation angle signals from the C-arm imaging system, and outputs a multimedia or documentary report comprising a subset of frames of the sequence of frames. 14. A computer program element for medical reporting, comprising instructions which, when the computer program element is executed by a processing unit, is adapted to perform the method steps according to claim 9. 15. A computer readable medium having stored the computer program element of claim 14.
During a medical intervention such as an angiography, the X-ray examination equipment (such as that mounted on a C-arm) produces a very large number of imaging frames of the intervention as it progresses. This information contains frame sequences which can be effectively used to improve a medical report of the intervention. However, the sequence will also contain subsequences with similar clinical information, and these frames may be considered redundant and not useful for inclusion in the medical report. The aspects detailed herein enable a selection of non-redundant sequences and/or frames based on contextual information obtained from the sequence of images and/or other medical equipment during an intervention. In this way, the redundancy inherent in the original frame sequence can be removed, leaving a set of prepared candidate sequences for insertion into a multimedia or documentary medical report.1. An apparatus for computer-aided medical report provision, comprising: a processing unit; wherein the processing unit is configured to receive a sequence of frames representing a region of interest of a patient, obtained from a C-arm imaging system to generate contextual information derived from an input from the C-arm imaging system; and orientation information of the C-arm imaging system, wherein the contextual information comprises orientation information obtained from (i) azimuth and elevation angles of the C-arm imaging system, or (ii) obtained from an image processing algorithm applied to image content of the input sequence of images from the C-arm; wherein the frame clustering condition comprises an orientation condition; wherein the frames of the sequence of frames are indexed to the contextual information, to generate at least one representative subset of the frames of the sequence of frames, wherein the frames of the at least one representative subset are selected from the sequence of frames by comparing the contextual information of frames of 
the sequence of frames to at least one frame clustering condition, to select at least one further subset of the frames of the at least one representative subset using a selection parameter defined for the at least one representative subset and to output a multimedia or documentary report comprising the further subset of frames. 2. (canceled) 3. The apparatus according to claim 1, wherein the processor is further configured to: receive an event signal comprising information about an event linked to the stage of a medical procedure, to generate a procedure status indication using the event signal, wherein the contextual information further comprises the procedure status indication, and wherein frames of the at least one representative subset are selected from the sequence of frames based on the procedure status indication, and the at least one frame clustering condition. 4. The apparatus according to claim 3, wherein the event signal is a balloon inflation state signal. 5. The apparatus according to claim 1, wherein the processor is further configured to: receive a measurement device activity parameter from output signals of a patient monitoring device, wherein the contextual information further comprises the measurement device activity parameter, wherein the frames of the at least one representative subset are selected based on the presence of the measurement device activity parameter, and wherein the multimedia or documentary report comprises a measurement from the output signals of a patient monitoring device displayed in proximity to the further subset of frames. 6. 
The apparatus according to claim 5, wherein the processor is further configured to: classify frames of the sequence of frames by identifying a specific artery system represented in the frames, thereby providing frames of the sequence of frames with an artery system classification, wherein the contextual information further comprises the artery system classification, wherein the frame clustering condition comprises an artery system classification parameter. 7. The apparatus according to claim 1, wherein the processor is further configured to: compute a contrast agent quality metric for a plurality of frames of the at least one representative subset of the frames, wherein the contrast agent quality metric is a measure of how much contrast agent is visible in a frame, and wherein the selection parameter is an injection quality parameter defining a quality of a distribution of contrast agent in a lumen, and the frames of the at least one further subset of the frames are selected based on a comparison of the contrast agent quality metric and the injection quality parameter. 8. The apparatus according to claim 7, wherein the processor is further configured to: compute a balloon inflation progress metric for a plurality of frames of the at least one representative subset of the frames, wherein the selection parameter is a balloon inflation extent metric, and wherein the frames of the at least one further subset of the frames are selected based on a comparison of the balloon inflation progress metric and the balloon inflation extent metric. 9. 
A method for computer-aided medical report provision using a C-arm imaging system, comprising the following steps: a) receiving a sequence of frames, obtained from a C-arm imaging system, representing a region of interest of a patient; b) generating contextual information comprising orientation information obtained from (i) azimuth and elevation angles of the C-arm imaging system, or (ii) obtained from an image processing algorithm applied to image content of the input sequence of images from the C-arm; wherein the frames of the sequence of frames are indexed to the contextual information; c) generating at least one representative subset of the frames of the sequence of frames, wherein the frames of the at least one representative subset are selected from the sequence of frames by comparing the contextual information of frames of the sequence of frames to at least one frame clustering condition, wherein the frame clustering condition comprises an orientation condition; d) selecting at least one further subset of the frames of the at least one representative subset using a selection parameter defined for the at least one representative subset; and e) outputting a multimedia or documentary report comprising the further subset of frames. 10. The method according to claim 9, further comprising the step a1): a1) receiving orientation information of the item of medical imaging equipment; wherein in step b), the contextual information comprises orientation information; and wherein in step c), the frame clustering condition comprises an orientation condition. 11. (canceled) 12. 
The method according to claim 9, further comprising in step c): c1) computing a contrast agent quality metric for a plurality of frames of the at least one representative subset of the frames; wherein in step d), the selection parameter is an injection quality parameter, and the frames of the at least one further subset of the frames are selected based on a comparison of the contrast agent quality metric and the injection quality parameter. 13. A system configured for medical reporting, comprising: a medical imaging system, an apparatus according to claim 1, and a display arrangement; wherein the C-arm imaging system is configured to generate a sequence of frames; wherein the apparatus is communicatively coupled to the C-arm imaging system and the display arrangement, and wherein, in operation, the apparatus receives the sequence of frames and azimuth and elevation angle signals from the C-arm imaging system, and outputs a multimedia or documentary report comprising a subset of frames of the sequence of frames. 14. A computer program element for medical reporting, comprising instructions which, when the computer program element is executed by a processing unit, is adapted to perform the method steps according to claim 9. 15. A computer readable medium having stored the computer program element of claim 14.
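Steps c) and d) of claim 9, clustering frames on an orientation condition and then thinning each cluster with a selection parameter, can be sketched as below. This is a simplified illustration, not the claimed method: frames are plain dicts, the orientation condition is a fixed angular tolerance on the C-arm azimuth and elevation angles, and the `contrast_metric` field stands in for the contrast agent quality metric of claim 12; all of these names are assumptions, not taken from the source.

```python
def cluster_by_orientation(frames, tol_deg=5.0):
    # Step c) sketch: group frames whose (azimuth, elevation) angles lie
    # within tol_deg of the first frame of an existing cluster; each
    # cluster is one candidate "representative subset" per C-arm view.
    clusters = []
    for f in frames:
        for c in clusters:
            ref = c[0]
            if (abs(f["azimuth"] - ref["azimuth"]) <= tol_deg
                    and abs(f["elevation"] - ref["elevation"]) <= tol_deg):
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters

def select_best(cluster, quality_threshold):
    # Step d) sketch: keep only frames whose contrast metric meets the
    # injection-quality parameter, then return the best such frame,
    # or None if the whole cluster falls below the threshold.
    good = [f for f in cluster if f["contrast_metric"] >= quality_threshold]
    return max(good, key=lambda f: f["contrast_metric"]) if good else None
```

The surviving frames per cluster would then be assembled into the multimedia or documentary report of step e).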
2,600
10,904
10,904
16,119,576
2,677
Systems and processes for providing a virtual assistant service are provided. In accordance with one or more examples, a method includes receiving, from an accessory device communicatively coupled to the first electronic device, a representation of a speech input representing a user request. The method further includes detecting a second electronic device and transmitting, from the first electronic device, a representation of the user request and data associated with the detected second electronic device to a third electronic device. The method further includes receiving, from the third electronic device, a determination of whether a task is to be performed by the second electronic device in accordance with the user request; and in accordance with a determination that a task is to be performed by the second electronic device, requesting the second electronic device to perform the task in accordance with the user request.
1. A first electronic device, comprising: one or more processors; a microphone; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, from an accessory device communicatively coupled to the first electronic device, a representation of a speech input representing a user request; detecting a second electronic device; transmitting, from the first electronic device, a representation of the user request and data associated with the detected second electronic device to a third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the second electronic device in accordance with the user request; and in accordance with a determination that a task is to be performed by the second electronic device, requesting the second electronic device to perform the task in accordance with the user request. 2. The first electronic device of claim 1, wherein the accessory device is a headphone capable of performing near-field communication. 3. The first electronic device of claim 1, wherein the accessory device is wirelessly coupled to both the first electronic device and the second electronic device. 4. The first electronic device of claim 1, wherein the accessory device is wirelessly coupled to the first electronic device, but not the second electronic device. 5. The first electronic device of claim 1, wherein the first electronic device and the second electronic device are both capable of operating virtual assistants to process speech inputs. 6. The first electronic device of claim 1, wherein the first electronic device is communicatively coupled to the second electronic device via near-field communication. 7. 
The first electronic device of claim 1, wherein the one or more programs comprise further instructions for, prior to transmitting the representation of the user request and data associated with the detected second electronic device: displaying, at the first electronic device, a graphical user interface indicating the receiving of the representation of the speech input. 8. The first electronic device of claim 1, wherein the data associated with the detected second electronic device comprise one or more of: metadata of the detected second electronic device; capability data associated with the detected second electronic device; and user-specific data stored in the detected second electronic device. 9. The first electronic device of claim 8, wherein the capability data associated with the first electronic device or the second electronic device comprises one or more of: device capability; application capability; and informational capability. 10. The first electronic device of claim 8, wherein the one or more programs comprise further instructions for: transmitting, from the first electronic device, data associated with the first electronic device to a third electronic device, wherein the data associated with the first electronic device comprise one or more of: metadata of the first electronic device; capability data associated with the first electronic device; and user-specific data stored in the first electronic device. 11. The first electronic device of claim 1, wherein the determination of whether a task is to be performed by the second electronic device in accordance with the user request is performed at the third electronic device based on one or more of: intent derived from the representation of the user request; capability data associated with at least one of the first electronic device and the second electronic device; and user-specific data. 12. 
The first electronic device of claim 1, wherein receiving the determination of whether a task is to be performed by the second electronic device comprises: receiving a command that causes the second electronic device to perform the task in accordance with the user request. 13. The first electronic device of claim 12, wherein requesting the second electronic device to perform the task in accordance with the user request comprises: transmitting the command to the second electronic device. 14. The first electronic device of claim 1, wherein audio data corresponding to the performing of the task by the second electronic device are transmitted to the accessory device. 15. The first electronic device of claim 14, wherein the audio data corresponding to the performing of the task in accordance with the user request are transmitted directly from the second electronic device to the accessory device. 16. The first electronic device of claim 14, wherein the audio data corresponding to the performing of the task in accordance with the user request are transmitted from the second electronic device to the accessory device through the first electronic device. 17. The first electronic device of claim 1, wherein the one or more programs comprise further instructions for: in accordance with a determination that the task is to be performed by the second electronic device, displaying, at the first electronic device, a visual response to the user request; and transmitting audio data corresponding to the visual response to the accessory device. 18. 
The first electronic device of claim 1, wherein the one or more programs comprise further instructions for: detecting one or more additional electronic devices; transmitting, from the first electronic device, data associated with the detected additional one or more electronic devices to the third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the one or more additional electronic devices in accordance with the user request; and in accordance with a determination that a task is to be performed by the one or more additional electronic devices and not the second electronic device, requesting the one or more additional electronic devices to perform the task in accordance with the user request. 19. The first electronic device of claim 18, wherein audio data corresponding to performing of the task by the one or more additional electronic devices are transmitted to the accessory device. 20. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a first electronic device, the one or more programs including instructions for: receiving, from an accessory device communicatively coupled to the first electronic device, a representation of a speech input representing a user request; detecting a second electronic device; transmitting, from the first electronic device, a representation of the user request and data associated with the detected second electronic device to a third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the second electronic device in accordance with the user request; and in accordance with a determination that a task is to be performed by the second electronic device, requesting the second electronic device to perform the task in accordance with the user request. 21. 
The computer-readable storage medium of claim 20, wherein the accessory device is a headphone capable of performing near-field communication. 22. The computer-readable storage medium of claim 20, wherein the accessory device is wirelessly coupled to both the first electronic device and the second electronic device. 23. The computer-readable storage medium of claim 20, wherein the accessory device is wirelessly coupled to the first electronic device, but not the second electronic device. 24. The computer-readable storage medium of claim 20, wherein the data associated with the detected second electronic device comprise one or more of: metadata of the detected second electronic device; capability data associated with the detected second electronic device; and user-specific data stored in the detected second electronic device. 25. The computer-readable storage medium of claim 24, wherein the capability data associated with the first electronic device or the second electronic device comprises one or more of: device capability; application capability; and informational capability. 26. The computer-readable storage medium of claim 24, wherein the one or more programs comprise further instructions for: transmitting, from the first electronic device, data associated with the first electronic device to a third electronic device, wherein the data associated with the first electronic device comprise one or more of: metadata of the first electronic device; capability data associated with the first electronic device; and user-specific data stored in first electronic device. 27. 
The computer-readable storage medium of claim 20, wherein the determination of whether a task is to be performed by the second electronic device in accordance with the user request is performed at the third electronic device based on one or more of: intent derived from the representation of the user request; capability data associated with at least one of the first electronic device and the second electronic device; and user-specific data. 28. The computer-readable storage medium of claim 20, wherein receiving the determination of whether a task is to be performed by the second electronic device comprises: receiving a command that causes the second electronic device to perform the task in accordance with the user request. 29. The computer-readable storage medium of claim 28, wherein requesting the second electronic device to perform the task in accordance with the user request comprises: transmitting the command to the second electronic device. 30. The computer-readable storage medium of claim 20, wherein audio data corresponding to the performing of the task by the second electronic device are transmitted to the accessory device. 31. The computer-readable storage medium of claim 20, wherein the one or more programs comprise further instructions for: in accordance with a determination that the task is to be performed by the second electronic device, displaying, at the first electronic device, a visual response to the user request; and transmitting audio data corresponding to the visual response to the accessory device. 32. 
The computer-readable storage medium of claim 20, wherein the one or more programs comprise further instructions for: detecting one or more additional electronic devices; transmitting, from the first electronic device, data associated with the detected additional one or more electronic devices to the third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the one or more additional electronic devices in accordance with the user request; and in accordance with a determination that a task is to be performed by the one or more additional electronic devices and not the second electronic device, requesting the one or more additional electronic devices to perform the task in accordance with the user request. 33. The computer-readable storage medium of claim 32, wherein audio data corresponding to performing of the task by the one or more additional electronic devices are transmitted to the accessory device. 34. A method for providing a virtual assistant service, comprising: at a first electronic device with one or more processors and memory: receiving, from an accessory device communicatively coupled to the first electronic device, a representation of a speech input representing a user request; detecting a second electronic device; transmitting, from the first electronic device, a representation of the user request and data associated with the detected second electronic device to a third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the second electronic device in accordance with the user request; and in accordance with a determination that a task is to be performed by the second electronic device, requesting the second electronic device to perform the task in accordance with the user request. 35. The method of claim 34, wherein the accessory device is a headphone capable of performing near-field communication. 36. 
The method of claim 34, wherein the accessory device is wirelessly coupled to both the first electronic device and the second electronic device. 37. The method of claim 34, wherein the accessory device is wirelessly coupled to the first electronic device, but not the second electronic device. 38. The method of claim 34, wherein the data associated with the detected second electronic device comprise one or more of: metadata of the detected second electronic device; capability data associated with the detected second electronic device; and user-specific data stored in the detected second electronic device. 39. The method of claim 38, wherein the capability data associated with the first electronic device or the second electronic device comprises one or more of: device capability; application capability; and informational capability. 40. The method of claim 38, further comprising: transmitting, from the first electronic device, data associated with the first electronic device to a third electronic device, wherein the data associated with the first electronic device comprise one or more of: metadata of the first electronic device; capability data associated with the first electronic device; and user-specific data stored in first electronic device. 41. The method of claim 34, wherein the determination of whether a task is to be performed by the second electronic device in accordance with the user request is performed at the third electronic device based on one or more of: intent derived from the representation of the user request; capability data associated with at least one of the first electronic device and the second electronic device; and user-specific data. 42. The method of claim 34, wherein receiving the determination of whether a task is to be performed by the second electronic device comprises: receiving a command that causes the second electronic device to perform the task in accordance with the user request. 43. 
The method of claim 42, wherein requesting the second electronic device to perform the task in accordance with the user request comprises: transmitting the command to the second electronic device. 44. The method of claim 34, wherein audio data corresponding to the performing of the task by the second electronic device are transmitted to the accessory device. 45. The method of claim 34, further comprising: in accordance with a determination that the task is to be performed by the second electronic device, displaying, at the first electronic device, a visual response to the user request; and transmitting audio data corresponding to the visual response to the accessory device. 46. The method of claim 34, further comprising: detecting one or more additional electronic devices; transmitting, from the first electronic device, data associated with the detected additional one or more electronic devices to the third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the one or more additional electronic devices in accordance with the user request; and in accordance with a determination that a task is to be performed by the one or more additional electronic devices and not the second electronic device, requesting the one or more additional electronic devices to perform the task in accordance with the user request. 47. The method of claim 46, wherein audio data corresponding to performing of the task by the one or more additional electronic devices are transmitted to the accessory device.
Systems and processes for providing a virtual assistant service are provided. In accordance with one or more examples, a method includes receiving, from an accessory device communicatively coupled to the first electronic device, a representation of a speech input representing a user request. The method further includes detecting a second electronic device and transmitting, from the first electronic device, a representation of the user request and data associated with the detected second electronic device to a third electronic device. The method further includes receiving, from the third electronic device, a determination of whether a task is to be performed by the second electronic device in accordance with the user request; and in accordance with a determination that a task is to be performed by the second electronic device, requesting the second electronic device to perform the task in accordance with the user request. 1. A first electronic device, comprising: one or more processors; a microphone; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, from an accessory device communicatively coupled to the first electronic device, a representation of a speech input representing a user request; detecting a second electronic device; transmitting, from the first electronic device, a representation of the user request and data associated with the detected second electronic device to a third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the second electronic device in accordance with the user request; and in accordance with a determination that a task is to be performed by the second electronic device, requesting the second electronic device to perform the task in accordance with the user request. 2. 
The first electronic device of claim 1, wherein the accessory device is a headphone capable of performing near-field communication. 3. The first electronic device of claim 1, wherein the accessory device is wirelessly coupled to both the first electronic device and the second electronic device. 4. The first electronic device of claim 1, wherein the accessory device is wirelessly coupled to the first electronic device, but not the second electronic device. 5. The first electronic device of claim 1, wherein the first electronic device and the second electronic device are both capable of operating virtual assistants to process speech inputs. 6. The first electronic device of claim 1, wherein the first electronic device is communicatively coupled to the second electronic device via near-field communication. 7. The first electronic device of claim 1, wherein the one or more programs comprise further instructions for, prior to transmitting the representation of the user request and data associated with the detected second electronic device: displaying, at the first electronic device, a graphical user interface indicating the receiving of the representation of the speech input. 8. The first electronic device of claim 1, wherein the data associated with the detected second electronic device comprise one or more of: metadata of the detected second electronic device; capability data associated with the detected second electronic device; and user-specific data stored in the detected second electronic device. 9. The first electronic device of claim 8, wherein the capability data associated with the first electronic device or the second electronic device comprises one or more of: device capability; application capability; and informational capability. 10. 
The first electronic device of claim 8, wherein the one or more programs comprise further instructions for: transmitting, from the first electronic device, data associated with the first electronic device to a third electronic device, wherein the data associated with the first electronic device comprise one or more of: metadata of the first electronic device; capability data associated with the first electronic device; and user-specific data stored in first electronic device. 11. The first electronic device of claim 1, wherein the determination of whether a task is to be performed by the second electronic device in accordance with the user request is performed at the third electronic device based on one or more of: intent derived from the representation of the user request; capability data associated with at least one of the first electronic device and the second electronic device; and user-specific data. 12. The first electronic device of claim 1, wherein receiving the determination of whether a task is to be performed by the second electronic device comprises: receiving a command that causes the second electronic device to perform the task in accordance with the user request. 13. The first electronic device of claim 12, wherein requesting the second electronic device to perform the task in accordance with the user request comprises: transmitting the command to the second electronic device. 14. The first electronic device of claim 1, wherein audio data corresponding to the performing of the task by the second electronic device are transmitted to the accessory device. 15. The first electronic device of claim 14, wherein the audio data corresponding to the performing of the task in accordance with the user request are transmitted directly from the second electronic device to the accessory device. 16. 
The first electronic device of claim 14, wherein the audio data corresponding to the performing of the task in accordance with the user request are transmitted from the second electronic device to the accessory device through the first electronic device. 17. The first electronic device of claim 1, wherein the one or more programs comprise further instructions for: in accordance with a determination that the task is to be performed by the second electronic device, displaying, at the first electronic device, a visual response to the user request; and transmitting audio data corresponding to the visual response to the accessory device. 18. The first electronic device of claim 1, wherein the one or more programs comprise further instructions for: detecting one or more additional electronic devices; transmitting, from the first electronic device, data associated with the detected additional one or more electronic devices to the third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the one or more additional electronic devices in accordance with the user request; and in accordance with a determination that a task is to be performed by the one or more additional electronic devices and not the second electronic device, requesting the one or more additional electronic devices to perform the task in accordance with the user request. 19. The first electronic device of claim 18, wherein audio data corresponding to performing of the task by the one or more additional electronic devices are transmitted to the accessory device. 20. 
A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a first electronic device, the one or more programs including instructions for: receiving, from an accessory device communicatively coupled to the first electronic device, a representation of a speech input representing a user request; detecting a second electronic device; transmitting, from the first electronic device, a representation of the user request and data associated with the detected second electronic device to a third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the second electronic device in accordance with the user request; and in accordance with a determination that a task is to be performed by the second electronic device, requesting the second electronic device to perform the task in accordance with the user request. 21. The computer-readable storage medium of claim 20, wherein the accessory device is a headphone capable of performing near-field communication. 22. The computer-readable storage medium of claim 20, wherein the accessory device is wirelessly coupled to both the first electronic device and the second electronic device. 23. The computer-readable storage medium of claim 20, wherein the accessory device is wirelessly coupled to the first electronic device, but not the second electronic device. 24. The computer-readable storage medium of claim 20, wherein the data associated with the detected second electronic device comprise one or more of: metadata of the detected second electronic device; capability data associated with the detected second electronic device; and user-specific data stored in the detected second electronic device. 25. 
The computer-readable storage medium of claim 24, wherein the capability data associated with the first electronic device or the second electronic device comprises one or more of: device capability; application capability; and informational capability. 26. The computer-readable storage medium of claim 24, wherein the one or more programs comprise further instructions for: transmitting, from the first electronic device, data associated with the first electronic device to a third electronic device, wherein the data associated with the first electronic device comprise one or more of: metadata of the first electronic device; capability data associated with the first electronic device; and user-specific data stored in first electronic device. 27. The computer-readable storage medium of claim 20, wherein the determination of whether a task is to be performed by the second electronic device in accordance with the user request is performed at the third electronic device based on one or more of: intent derived from the representation of the user request; capability data associated with at least one of the first electronic device and the second electronic device; and user-specific data. 28. The computer-readable storage medium of claim 20, wherein receiving the determination of whether a task is to be performed by the second electronic device comprises: receiving a command that causes the second electronic device to perform the task in accordance with the user request. 29. The computer-readable storage medium of claim 28, wherein requesting the second electronic device to perform the task in accordance with the user request comprises: transmitting the command to the second electronic device. 30. The computer-readable storage medium of claim 20, wherein audio data corresponding to the performing of the task by the second electronic device are transmitted to the accessory device. 31. 
The computer-readable storage medium of claim 20, wherein the one or more programs comprise further instructions for: in accordance with a determination that the task is to be performed by the second electronic device, displaying, at the first electronic device, a visual response to the user request; and transmitting audio data corresponding to the visual response to the accessory device. 32. The computer-readable storage medium of claim 20, wherein the one or more programs comprise further instructions for: detecting one or more additional electronic devices; transmitting, from the first electronic device, data associated with the detected additional one or more electronic devices to the third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the one or more additional electronic devices in accordance with the user request; and in accordance with a determination that a task is to be performed by the one or more additional electronic devices and not the second electronic device, requesting the one or more additional electronic devices to perform the task in accordance with the user request. 33. The computer-readable storage medium of claim 32, wherein audio data corresponding to performing of the task by the one or more additional electronic devices are transmitted to the accessory device. 34. 
A method for providing a virtual assistant service, comprising: at a first electronic device with one or more processors and memory: receiving, from an accessory device communicatively coupled to the first electronic device, a representation of a speech input representing a user request; detecting a second electronic device; transmitting, from the first electronic device, a representation of the user request and data associated with the detected second electronic device to a third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the second electronic device in accordance with the user request; and in accordance with a determination that a task is to be performed by the second electronic device, requesting the second electronic device to perform the task in accordance with the user request. 35. The method of claim 34, wherein the accessory device is a headphone capable of performing near-field communication. 36. The method of claim 34, wherein the accessory device is wirelessly coupled to both the first electronic device and the second electronic device. 37. The method of claim 34, wherein the accessory device is wirelessly coupled to the first electronic device, but not the second electronic device. 38. The method of claim 34, wherein the data associated with the detected second electronic device comprise one or more of: metadata of the detected second electronic device; capability data associated with the detected second electronic device; and user-specific data stored in the detected second electronic device. 39. The method of claim 38, wherein the capability data associated with the first electronic device or the second electronic device comprises one or more of: device capability; application capability; and informational capability. 40. 
The method of claim 38, further comprising: transmitting, from the first electronic device, data associated with the first electronic device to a third electronic device, wherein the data associated with the first electronic device comprise one or more of: metadata of the first electronic device; capability data associated with the first electronic device; and user-specific data stored in first electronic device. 41. The method of claim 34, wherein the determination of whether a task is to be performed by the second electronic device in accordance with the user request is performed at the third electronic device based on one or more of: intent derived from the representation of the user request; capability data associated with at least one of the first electronic device and the second electronic device; and user-specific data. 42. The method of claim 34, wherein receiving the determination of whether a task is to be performed by the second electronic device comprises: receiving a command that causes the second electronic device to perform the task in accordance with the user request. 43. The method of claim 42, wherein requesting the second electronic device to perform the task in accordance with the user request comprises: transmitting the command to the second electronic device. 44. The method of claim 34, wherein audio data corresponding to the performing of the task by the second electronic device are transmitted to the accessory device. 45. The method of claim 34, further comprising: in accordance with a determination that the task is to be performed by the second electronic device, displaying, at the first electronic device, a visual response to the user request; and transmitting audio data corresponding to the visual response to the accessory device. 46. 
The method of claim 34, further comprising: detecting one or more additional electronic devices; transmitting, from the first electronic device, data associated with the detected additional one or more electronic devices to the third electronic device; receiving, from the third electronic device, a determination of whether a task is to be performed by the one or more additional electronic devices in accordance with the user request; and in accordance with a determination that a task is to be performed by the one or more additional electronic devices and not the second electronic device, requesting the one or more additional electronic devices to perform the task in accordance with the user request. 47. The method of claim 46, wherein audio data corresponding to performing of the task by the one or more additional electronic devices are transmitted to the accessory device.
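The routing logic claimed above (a third device deciding which detected device should handle a user request, based on intent and capability data) can be sketched as follows. This is an illustrative toy implementation, not the claimed system; all function names, device names, and the keyword-based intent derivation are hypothetical stand-ins for real natural-language understanding.

```python
# Hedged sketch of the claimed routing flow: a server ("third device")
# picks which detected device should perform a task, based on each
# device's capability data. All names here are illustrative.

def derive_intent(user_request):
    # Toy intent derivation: keyword lookup stands in for real
    # natural-language understanding.
    keywords = {"play": "play_media", "call": "place_call"}
    for word, intent in keywords.items():
        if word in user_request.lower():
            return intent
    return "unknown"

def route_task(user_request, devices):
    """Return the name of the first device whose capability data cover
    the request's derived intent, or None if no device qualifies.

    `devices` is a list of dicts with 'name' and 'capabilities' keys.
    """
    intent = derive_intent(user_request)
    for device in devices:
        if intent in device["capabilities"]:
            return device["name"]
    return None

devices = [
    {"name": "phone", "capabilities": {"place_call"}},
    {"name": "tv", "capabilities": {"play_media"}},
]
print(route_task("Play a movie", devices))  # tv
print(route_task("Call mom", devices))      # phone
```

In the claims, the decision is made remotely at the third electronic device and returned to the first device, possibly as a command that the first device forwards; the sketch collapses that round trip into a single local function for clarity.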
2,600
10,905
10,905
15,312,528
2,687
An agricultural harvester including a sensor for measuring a position of an object in surroundings of the harvester. The sensor is connected to a processor for defining at least two different danger zones respectively corresponding to at least two different sets of harvester driving parameter values. The processor can determine actual harvester driving parameter values, and the processor can compare the actual harvester driving parameter values with the different sets to select a matching set and to select a corresponding one of the danger zones. The processor outputs a warning signal if the position is in the corresponding danger zone.
1. An agricultural harvester comprising: a sensor adapted for measuring a position of an object in surroundings of the harvester; a memory storing at least two different sets of harvester driving parameter values in relation to at least two different danger zones in the surroundings; and a processor operationally connected to the sensor, to the memory, and to the agricultural harvester for determining actual harvester driving parameter values, wherein the processor is configured for comparing the actual harvester driving parameter values with the two different sets of harvester driving parameters stored in the memory to select a matching set and to select a corresponding one of the at least two different danger zones, wherein the processor is further configured for outputting a warning signal if the position of the object is determined to be in the selected danger zone. 2. The agricultural harvester of claim 1, wherein the harvester driving parameter values comprise a harvester steering wheel position. 3. The agricultural harvester of claim 1, wherein the harvester driving parameter values comprise an unloading tube position. 4. The agricultural harvester of claim 1, wherein harvester driving parameter values comprise gearbox settings. 5. The agricultural harvester of claim 1, wherein harvester driving parameter values comprise residue spreading system settings. 6. The agricultural harvester of claim 1, wherein the warning signal is output to the operator via at least one of visual and acoustic outputting devices. 7. The agricultural harvester of claim 1, wherein the warning signal causes the agricultural harvester to enter an operational safe mode. 8. The agricultural harvester of claim 1, wherein the sensor comprises at least one radar sensor placed at a back end of the agricultural harvester. 9. The agricultural harvester of claim 8, wherein the sensor comprises multiple radar sensors arranged alongside each other to scan the surroundings of the harvester. 
10. The agricultural harvester of claim 1, further comprising a display in the cabin of the harvester to visualize a corresponding one of the at least two different danger zones together with the position of the object. 11. A method for sending a warning signal in an agricultural harvester, the harvester comprising a sensor adapted for measuring a position of an object that is situated in surroundings of the harvester, the method comprising: defining in the surroundings at least two different danger zones respectively corresponding to at least two different sets of harvester driving parameter values; detecting an object in the surroundings via the sensor; wherein the method further comprises, upon detection of the object: measuring the position of the object in the surroundings; comparing actual harvester driving parameter values with the at least two different sets of harvester driving parameter values to select a matching one of the at least two different sets of harvester driving parameter values and selecting a corresponding one of the at least two different danger zones; determining whether the position of the object is in the corresponding danger zone; and sending the warning signal if the position of the object is determined to be in the corresponding danger zone.
An agricultural harvester including a sensor for measuring a position of an object in surroundings of the harvester. The sensor is connected to a processor for defining at least two different danger zones respectively corresponding to at least two different sets of harvester driving parameter values. The processor can determine actual harvester driving parameter values, and the processor can compare the actual harvester driving parameter values with the different sets to select a matching set and to select a corresponding one of the danger zones. The processor outputs a warning signal if the position is in the corresponding danger zone.1. An agricultural harvester comprising: a sensor adapted for measuring a position of an object in surroundings of the harvester; a memory storing at least two different sets of harvester driving parameter values in relation to at least two different danger zones in the surroundings; and a processor operationally connected to the sensor, to the memory, and to the agricultural harvester for determining actual harvester driving parameter values, wherein the processor is configured for comparing the actual harvester driving parameter values with the two different sets of harvester driving parameters stored in the memory to select a matching set and to select a corresponding one of the at least two different danger zones, wherein the processor is further configured for outputting a warning signal if the position of the object is determined to be in the selected danger zone. 2. The agricultural harvester of claim 1, wherein the harvester driving parameter values comprise a harvester steering wheel position. 3. The agricultural harvester of claim 1, wherein the harvester driving parameter values comprise an unloading tube position. 4. The agricultural harvester of claim 1, wherein harvester driving parameter values comprise gearbox settings. 5. 
The agricultural harvester of claim 1, wherein harvester driving parameter values comprise residue spreading system settings. 6. The agricultural harvester of claim 1, wherein the warning signal is output to the operator via at least one of visual and acoustic outputting devices. 7. The agricultural harvester of claim 1, wherein the warning signal causes the agricultural harvester to enter an operational safe mode. 8. The agricultural harvester of claim 1, wherein the sensor comprises at least one radar sensor placed at a back end of the agricultural harvester. 9. The agricultural harvester of claim 8, wherein the sensor comprises multiple radar sensors arranged alongside each other to scan the surroundings of the harvester. 10. The agricultural harvester of claim 1, further comprising a display in the cabin of the harvester to visualize a corresponding one of the at least two different danger zones together with the position of the object. 11. A method for sending a warning signal in an agricultural harvester, the harvester comprising a sensor adapted for measuring a position of an object that is situated in surroundings of the harvester, the method comprising: defining in the surroundings at least two different danger zones respectively corresponding to at least two different sets of harvester driving parameter values; detecting an object in the surroundings via the sensor; wherein the method further comprises, upon detection of the object: measuring the position of the object in the surroundings; comparing actual harvester driving parameter values with the at least two different sets of harvester driving parameter values to select a matching one of the at least two different sets of harvester driving parameter values and selecting a corresponding one of the at least two different danger zones; determining whether the position of the object is in the corresponding danger zone; and sending the warning signal if the position of the object is determined to be in 
the corresponding danger zone.
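The detection-and-warning steps of claim 11 can be illustrated with a short sketch. This is a hypothetical simplification: the function names, the dictionary representation of driving-parameter sets, and the modelling of a danger zone as a set of grid positions are assumptions for illustration, not part of the claim.

```python
# Hypothetical sketch of the danger-zone warning method of claim 11.
# A "zone" is modelled as a set of positions; each zone is paired with
# the set of harvester driving parameter values it corresponds to.

def select_danger_zone(actual_params, zones):
    """Select the danger zone whose driving-parameter set matches the
    actual harvester driving parameters (e.g. speed, spreader setting)."""
    for param_set, zone in zones:
        if param_set == actual_params:
            return zone
    return None  # no matching parameter set

def warn_if_in_zone(object_pos, actual_params, zones):
    """Return True (i.e. send the warning signal) when the detected
    object's measured position lies in the matching danger zone."""
    zone = select_danger_zone(actual_params, zones)
    return zone is not None and object_pos in zone
```

Note how the same object position can trigger a warning under one set of driving parameters but not another, which is the point of defining at least two different danger zones.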
2,600
10,906
10,906
15,139,987
2,632
A system and method of communicating sounding reference signals (SRSs) in a cellular time division duplex (TDD) mmWave system. A transmission point (TP) may transmit beamformed reference signals to a user equipment (UE), each of the beamformed reference signals having been transmitted according to a beam direction in a set of beam directions available to the TP. The TP may receive a feedback message from the UE that identifies one of the beamformed reference signals transmitted to the UE. The TP may select, from the set of beam directions available to the TP, a subset of beam directions for SRS reception based on the feedback message received from the UE, and receive uplink SRS signals according to beam directions in the subset of beam directions. The set of beam directions available to the TP includes at least one beam direction that is excluded from the subset of beam directions.
1. A method for communicating sounding reference signals (SRSs) in a cellular time division duplex (TDD) mmWave system, the method comprising: receiving, by a user equipment (UE), one or more signals from a transmit point (TP) according to one or more beam directions in a set of beam directions available to the UE; selecting, from the set of beam directions available to the UE, a subset of beam directions for SRS transmission based on the one or more signals, wherein the set of beam directions available to the UE includes at least one beam direction that is excluded from the subset of beam directions selected for SRS transmission; and transmitting, by the UE, uplink SRS signals to the TP according to beam directions in the subset of beam directions selected for uplink SRS transmission without using the at least one beam direction excluded from the subset of beam directions. 2. The method of claim 1, wherein the one or more signals comprise at least one of a downlink synchronization signal, a broadcast signal, or a data signal. 3. The method of claim 1, further comprising receiving a SRS configuration message, wherein selecting the subset of beam directions for uplink SRS transmission comprises: determining a number of SRS transmission opportunities for the UE based on an SRS configuration parameter carried in the SRS configuration message, and selecting a number of beam directions for inclusion in the subset of beam directions based on the number of SRS transmission opportunities for the UE. 4. The method of claim 3, wherein the SRS configuration message is a cell-specific SRS configuration message, and wherein the SRS configuration parameter comprises at least one of a maximum number of SRS sounding opportunities for different beams, a number of times each beam needs to be re-transmitted, or a frequency comb spacing. 5. 
The method of claim 3, wherein the SRS configuration message is a UE-specific SRS configuration message, and wherein the SRS configuration parameter comprises at least one of a sub-carrier offset assigned to the UE, a code sequence assigned to the UE, an SRS sub-frame sounding time assigned to the UE, a number of SRS sounding opportunities for different beams assigned to the UE, a number of times each beam needs to be re-transmitted, a frequency comb spacing assigned to the UE, a time/frequency multiplexing flag assigned to the UE, or TP beam indices for each assigned time period sounding time assigned to the UE. 6. The method of claim 3, wherein the SRS configuration message is a UE-specific SRS configuration message, and wherein the SRS configuration parameter comprises a time/frequency flag that indicates whether or not SRS transmissions from different radio frequency (RF) chains of the UE should be multiplexed in a time domain or a frequency domain. 7. The method of claim 6, wherein the UE-specific SRS configuration message includes a frequency comb spacing, and wherein the time/frequency flag indicates that SRS transmissions from different RF chains of the UE should be multiplexed in the frequency domain in accordance with a frequency comb spacing. 8. The method of claim 6, wherein the time/frequency flag indicates that SRS transmissions from different RF chains of the UE should be multiplexed in the time domain. 9. The method of claim 1, wherein selecting the subset of beam directions comprises: identifying which of the one or more signals has one of a highest received signal power level, a highest received signal to interference ratio, or a highest received signal to noise ratio; and selecting the subset of beam directions based on the one or more beam directions used to receive the identified one or more signals. 10-20. (canceled) 21. The method of claim 1, further comprising receiving, by the UE, an indication indicating receive beams used by the TP. 22. 
A user equipment (UE) comprising: one or more processors; and a computer readable storage medium storing programming for execution by the one or more processors, the programming including instructions to configure the UE to: receive one or more signals from a transmit point (TP) according to one or more beam directions in a set of beam directions available to the UE, select, from the set of beam directions available to the UE, a subset of beam directions for sounding reference signal (SRS) transmission based on the one or more signals, wherein the set of beam directions available to the UE includes at least one beam direction that is excluded from the subset of beam directions selected for SRS transmission, and transmit uplink SRS signals to the TP according to beam directions in the subset of beam directions selected for uplink SRS transmission without using the at least one beam direction excluded from the subset of beam directions. 23. The UE of claim 22, wherein the programming includes instructions to configure the UE to receive a SRS configuration message. 24. The UE of claim 23, wherein the programming includes instructions to configure the UE to determine a number of SRS transmission opportunities for the UE based on an SRS configuration parameter carried in the SRS configuration message, and select a number of beam directions for inclusion in the subset of beam directions based on the number of SRS transmission opportunities for the UE. 25. The UE of claim 24, wherein the SRS configuration message is a cell-specific SRS configuration message, and wherein the SRS configuration parameter comprises at least one of a maximum number of SRS sounding opportunities for different beams, a number of times each beam needs to be re-transmitted, or a frequency comb spacing. 26. 
The UE of claim 24, wherein the SRS configuration message is a UE-specific SRS configuration message, and wherein the SRS configuration parameter comprises at least one of a sub-carrier offset assigned to the UE, a code sequence assigned to the UE, an SRS sub-frame sounding time assigned to the UE, a number of SRS sounding opportunities for different beams assigned to the UE, a number of times each beam needs to be re-transmitted, a frequency comb spacing assigned to the UE, a time/frequency multiplexing flag assigned to the UE, or TP beam indices for each assigned time period sounding time assigned to the UE. 27. The UE of claim 24, wherein the SRS configuration message is a UE-specific SRS configuration message, and wherein the SRS configuration parameter comprises a time/frequency flag that indicates whether or not SRS transmissions from different radio frequency (RF) chains of the UE should be multiplexed in a time domain or a frequency domain. 28. The UE of claim 27, wherein the UE-specific SRS configuration message includes a frequency comb spacing, and wherein the time/frequency flag indicates that SRS transmissions from different RF chains of the UE should be multiplexed in the frequency domain in accordance with a frequency comb spacing. 29. The UE of claim 22, wherein the programming includes instructions to configure the UE to identify which of the one or more signals has one of a highest received signal power level, a highest received signal to interference ratio, or a highest received signal to noise ratio, and selecting the subset of beam directions based on the one or more beam directions used to receive the identified one or more signals. 30. The UE of claim 22, wherein the programming includes instructions to configure the UE to receive an indication indicating receive beams used by the TP. 31. 
A non-transitory computer-readable medium storing programming for execution by one or more processors, the programming including instructions to: receive one or more signals from a transmit point (TP) according to one or more beam directions in a set of beam directions available to a user equipment (UE), select, from the set of beam directions available to the UE, a subset of beam directions for sounding reference signal (SRS) transmission based on the one or more signals, wherein the set of beam directions available to the UE includes at least one beam direction that is excluded from the subset of beam directions selected for SRS transmission, and transmit uplink SRS signals to the TP according to beam directions in the subset of beam directions selected for uplink SRS transmission without using the at least one beam direction excluded from the subset of beam directions.
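The UE-side selection of claims 1, 3 and 9 amounts to ranking the available beam directions by received signal quality and keeping only as many as there are SRS transmission opportunities, so that at least one available beam is excluded. A minimal sketch, assuming SNR as the quality metric (claim 9 equally allows received power or signal-to-interference ratio) and that beam indices and the opportunity count are already known from the SRS configuration message:

```python
# Hypothetical sketch of UE beam-subset selection for uplink SRS.
# snr_by_beam maps each beam index in the UE's available set to the
# SNR measured on the downlink signal received via that beam;
# n_opportunities is derived from the SRS configuration parameter.

def select_srs_beams(snr_by_beam, n_opportunities):
    """Return the subset of beam directions for uplink SRS transmission:
    the n_opportunities beams with the highest received SNR. Whenever
    n_opportunities < len(snr_by_beam), at least one available beam
    direction is excluded from the subset, as claim 1 requires."""
    ranked = sorted(snr_by_beam, key=snr_by_beam.get, reverse=True)
    return ranked[:n_opportunities]
```

The UE then sounds only the selected directions, saving SRS resources relative to sweeping every available beam.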
A system and method of communicating sounding reference signals (SRSs) in a cellular time division duplex (TDD) mmWave system. A transmission point (TP) may transmit beamformed reference signals to a user equipment (UE), each of the beamformed reference signals having been transmitted according to a beam direction in a set of beam directions available to the TP. The TP may receive a feedback message from the UE that identifies one of the beamformed reference signals transmitted to the UE. The TP may select, from the set of beam directions available to the TP, a subset of beam directions for SRS reception based on the feedback message received from the UE, and receive uplink SRS signals according to beam directions in the subset of beam directions. The set of beam directions available to the TP includes at least one beam direction that is excluded from the subset of beam directions.1. A method for communicating sounding reference signals (SRSs) in a cellular time division duplex (TDD) mmWave system, the method comprising: receiving, by a user equipment (UE), one or more signals from a transmit point (TP) according to one or more beam directions in a set of beam directions available to the UE; selecting, from the set of beam directions available to the UE, a subset of beam directions for SRS transmission based on the one or more signals, wherein the set of beam directions available to the UE includes at least one beam direction that is excluded from the subset of beam directions selected for SRS transmission; and transmitting, by the UE, uplink SRS signals to the TP according to beam directions in the subset of beam directions selected for uplink SRS transmission without using the at least one beam direction excluded from the subset of beam directions. 2. The method of claim 1, wherein the one or more signals comprise at least one of a downlink synchronization signal, a broadcast signal, or a data signal. 3. 
The method of claim 1, further comprising receiving a SRS configuration message, wherein selecting the subset of beam directions for uplink SRS transmission comprises: determining a number of SRS transmission opportunities for the UE based on an SRS configuration parameter carried in the SRS configuration message, and selecting a number of beam directions for inclusion in the subset of beam directions based on the number of SRS transmission opportunities for the UE. 4. The method of claim 3, wherein the SRS configuration message is a cell-specific SRS configuration message, and wherein the SRS configuration parameter comprises at least one of a maximum number of SRS sounding opportunities for different beams, a number of times each beam needs to be re-transmitted, or a frequency comb spacing. 5. The method of claim 3, wherein the SRS configuration message is a UE-specific SRS configuration message, and wherein the SRS configuration parameter comprises at least one of a sub-carrier offset assigned to the UE, a code sequence assigned to the UE, an SRS sub-frame sounding time assigned to the UE, a number of SRS sounding opportunities for different beams assigned to the UE, a number of times each beam needs to be re-transmitted, a frequency comb spacing assigned to the UE, a time/frequency multiplexing flag assigned to the UE, or TP beam indices for each assigned time period sounding time assigned to the UE. 6. The method of claim 3, wherein the SRS configuration message is a UE-specific SRS configuration message, and wherein the SRS configuration parameter comprises a time/frequency flag that indicates whether or not SRS transmissions from different radio frequency (RF) chains of the UE should be multiplexed in a time domain or a frequency domain. 7. 
The method of claim 6, wherein the UE-specific SRS configuration message includes a frequency comb spacing, and wherein the time/frequency flag indicates that SRS transmissions from different RF chains of the UE should be multiplexed in the frequency domain in accordance with a frequency comb spacing. 8. The method of claim 6, wherein the time/frequency flag indicates that SRS transmissions from different RF chains of the UE should be multiplexed in the time domain. 9. The method of claim 1, wherein selecting the subset of beam directions comprises: identifying which of the one or more signals has one of a highest received signal power level, a highest received signal to interference ratio, or a highest received signal to noise ratio; and selecting the subset of beam directions based on the one or more beam directions used to receive the identified one or more signals. 10-20. (canceled) 21. The method of claim 1, further comprising receiving, by the UE, an indication indicating receive beams used by the TP. 22. A user equipment (UE) comprising: one or more processors; and a computer readable storage medium storing programming for execution by the one or more processors, the programming including instructions to configure the UE to: receive one or more signals from a transmit point (TP) according to one or more beam directions in a set of beam directions available to the UE, select, from the set of beam directions available to the UE, a subset of beam directions for sounding reference signal (SRS) transmission based on the one or more signals, wherein the set of beam directions available to the UE includes at least one beam direction that is excluded from the subset of beam directions selected for SRS transmission, and transmit uplink SRS signals to the TP according to beam directions in the subset of beam directions selected for uplink SRS transmission without using the at least one beam direction excluded from the subset of beam directions. 23. 
The UE of claim 22, wherein the programming includes instructions to configure the UE to receive a SRS configuration message. 24. The UE of claim 23, wherein the programming includes instructions to configure the UE to determine a number of SRS transmission opportunities for the UE based on an SRS configuration parameter carried in the SRS configuration message, and select a number of beam directions for inclusion in the subset of beam directions based on the number of SRS transmission opportunities for the UE. 25. The UE of claim 24, wherein the SRS configuration message is a cell-specific SRS configuration message, and wherein the SRS configuration parameter comprises at least one of a maximum number of SRS sounding opportunities for different beams, a number of times each beam needs to be re-transmitted, or a frequency comb spacing. 26. The UE of claim 24, wherein the SRS configuration message is a UE-specific SRS configuration message, and wherein the SRS configuration parameter comprises at least one of a sub-carrier offset assigned to the UE, a code sequence assigned to the UE, an SRS sub-frame sounding time assigned to the UE, a number of SRS sounding opportunities for different beams assigned to the UE, a number of times each beam needs to be re-transmitted, a frequency comb spacing assigned to the UE, a time/frequency multiplexing flag assigned to the UE, or TP beam indices for each assigned time period sounding time assigned to the UE. 27. The UE of claim 24, wherein the SRS configuration message is a UE-specific SRS configuration message, and wherein the SRS configuration parameter comprises a time/frequency flag that indicates whether or not SRS transmissions from different radio frequency (RF) chains of the UE should be multiplexed in a time domain or a frequency domain. 28. 
The UE of claim 27, wherein the UE-specific SRS configuration message includes a frequency comb spacing, and wherein the time/frequency flag indicates that SRS transmissions from different RF chains of the UE should be multiplexed in the frequency domain in accordance with a frequency comb spacing. 29. The UE of claim 22, wherein the programming includes instructions to configure the UE to identify which of the one or more signals has one of a highest received signal power level, a highest received signal to interference ratio, or a highest received signal to noise ratio, and selecting the subset of beam directions based on the one or more beam directions used to receive the identified one or more signals. 30. The UE of claim 22, wherein the programming includes instructions to configure the UE to receive an indication indicating receive beams used by the TP. 31. A non-transitory computer-readable medium storing programming for execution by one or more processors, the programming including instructions to: receive one or more signals from a transmit point (TP) according to one or more beam directions in a set of beam directions available to a user equipment (UE), select, from the set of beam directions available to the UE, a subset of beam directions for sounding reference signal (SRS) transmission based on the one or more signals, wherein the set of beam directions available to the UE includes at least one beam direction that is excluded from the subset of beam directions selected for SRS transmission, and transmit uplink SRS signals to the TP according to beam directions in the subset of beam directions selected for uplink SRS transmission without using the at least one beam direction excluded from the subset of beam directions.
2,600
10,907
10,907
14,905,813
2,623
Included herein is an electronic pen ( 200 ) with pen position detection comprising at least one electric voltage source ( 206 ), at least a digital control unit ( 217, 106 ), a writing lead ( 209 ), at least one data transfer module ( 219, 109 ) and at least two position determination sensors ( 201, 202, 101, 102 ) for determination of the position and/or motion of the electronic pen ( 200 ), characterized in that the electronic pen ( 200 ) comprises an energy management unit ( 218, 110 ) in communication with the digital control unit ( 217, 106 ) for managing the electrical energy consumption, in particular to minimize the electric energy consumption, and/or comprises means (for example one of 103, 104, 105, 204, 205, 212, 213, 214, 215 ) to be able to generate electrical energy itself.
1. An electronic pen comprising: a pen position detection having: at least one electric voltage source; at least a digital control unit; a writing lead; at least one data transfer module; and at least two position determination sensors for determination of the position and/or motion of the electronic pen; and an energy management unit in communication with the digital control unit for managing the electrical energy consumption to minimize the electric energy consumption, the energy management unit having an electrical generation means. 2. The electronic pen according to claim 1, configured having operating modes of at least an active mode, a standby mode, and an off mode with each mode having a different energy consumption, wherein the energy management unit manages said operation modes and controls changes between operating modes. 3. The electronic pen according to claim 2, further comprising at least one DC-to-DC converter. 4. The electronic pen according to claim 1, further comprising at least one DC-to-DC converter. 5. The electronic pen according to claim 4, further comprising an additional energy storage for self-generated electrical energy, and the electrical voltage source is configured to be recharged with the self-generated electrical energy generated by the electronic pen. 6. The electronic pen according to claim 3, further comprising an additional energy storage for self-generated electrical energy, and the electrical voltage source is configured to be recharged with the self-generated electrical energy generated by the electronic pen. 7. The electronic pen according to claim 2, further comprising an additional energy storage for self-generated electrical energy, and the electrical voltage source is configured to be recharged with the self-generated electrical energy generated by the electronic pen. 8. 
The electronic pen according to claim 1, further comprising an additional energy storage for self-generated electrical energy, and the electrical voltage source is configured to be recharged with the self-generated electrical energy generated by the electronic pen. 9. The electronic pen according to claim 8, further comprising at least one photo cell configured to measure the light intensity in the environment of the electronic pen, and, given a sufficient light intensity, generate electrical energy and provide it to the electronic pen. 10. The electronic pen according to claim 2, further comprising at least one photo cell configured to measure the light intensity in the environment of the electronic pen, and, given a sufficient light intensity, generate electrical energy and provide it to the electronic pen. 11. The electronic pen according to claim 1, further comprising at least one photo cell configured to measure the light intensity in the environment of the electronic pen, and, given a sufficient light intensity, generate electrical energy and provide it to the electronic pen. 12. The electronic pen according to claim 11, wherein: the at least one photo cell is configured to measure the light intensity in the environment of the electronic pen, and the energy management unit is configured such that: when the measured light intensity falls below a predetermined light intensity threshold at the at least one photo cell for a predetermined minimum duration, the energy management unit changes the operation mode from the active mode to the standby mode or the off mode; and when the measured light intensity exceeds a predetermined light intensity threshold for an excess minimum duration, the energy management unit changes the operation mode from the off mode into the active mode or standby mode. 13.
The electronic pen according to claim 10, wherein: the at least one photo cell is configured to measure the light intensity in the environment of the electronic pen, and the energy management unit is configured such that: when the measured light intensity falls below a predetermined light intensity threshold at the at least one photo cell for a predetermined minimum duration, the energy management unit changes the operation mode from the active mode to the standby mode or the off mode; and when the measured light intensity exceeds a predetermined light intensity threshold for an excess minimum duration, the energy management unit changes the operation mode from the off mode into the active mode or standby mode. 14. The electronic pen according to claim 9, wherein: the at least one photo cell is configured to measure the light intensity in the environment of the electronic pen, and the energy management unit is configured such that: when the measured light intensity falls below a predetermined light intensity threshold at the at least one photo cell for a predetermined minimum duration, the energy management unit changes the operation mode from the active mode to the standby mode or the off mode; and when the measured light intensity exceeds a predetermined light intensity threshold for an excess minimum duration, the energy management unit changes the operation mode from the off mode into the active mode or standby mode. 15. The electronic pen according to claim 13, wherein the position determination sensors measure an activity level and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen falls below a predetermined measurement activity threshold for a predetermined minimum duration and to change the operating mode of the electronic pen when the activity level of the electronic pen exceeds a predetermined measurement activity threshold for a predetermined minimum duration. 16.
The electronic pen according to claim 10, wherein the position determination sensors measure an activity level and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen falls below a predetermined measurement activity threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen exceeds a predetermined measurement activity threshold for a predetermined minimum duration. 17. The electronic pen according to claim 7, wherein the position determination sensors measure an activity level and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen falls below a predetermined measurement activity threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen exceeds a predetermined measurement activity threshold for a predetermined minimum duration. 18. The electronic pen according to claim 2, wherein the position determination sensors measure an activity level and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen falls below a predetermined measurement activity threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen exceeds a predetermined measurement activity threshold for a predetermined minimum duration. 19. The electronic pen according to claim 18, further comprising a force sensor coupled to the writing lead. 20. The electronic pen according to claim 12, further comprising a force sensor coupled to the writing lead. 21. 
The electronic pen according to claim 11, further comprising a force sensor coupled to the writing lead. 22. The electronic pen according to claim 8, further comprising a force sensor coupled to the writing lead. 23. The electronic pen according to claim 2, further comprising a force sensor coupled to the writing lead. 24. The electronic pen according to claim 1, further comprising a force sensor coupled to the writing lead. 25. The electronic pen according to claim 24, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 26. The electronic pen according to claim 23, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 27. 
The electronic pen according to claim 22, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 28. The electronic pen according to claim 21, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 29. The electronic pen according to claim 20, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 30. 
The electronic pen according to claim 19, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 31. The electronic pen according to claim 30, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 32. The electronic pen according to claim 20, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 33. The electronic pen according to claim 19, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 34. The electronic pen according to claim 11, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 35.
The electronic pen according to claim 8, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 36. The electronic pen according to claim 4, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 37. The electronic pen according to claim 1, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 38. The electronic pen according to claim 2, wherein the position determination sensors include a sampling rate of data and the digital control unit and the energy management unit are configured to control the sampling rate during the operating modes of the electronic pen. 39. The electronic pen according to claim 2, wherein the data transfer module includes a sampling rate of data and the digital control unit and the energy management unit are configured to control the sampling rate during the operating modes of the electronic pen. 40. The electronic pen according to claim 10, wherein the photo cell includes a sampling rate of data and the digital control unit and the energy management unit are configured to control the sampling rate during the operating modes of the electronic pen. 41. 
The electronic pen according to claim 20, wherein the force sensor includes a sampling rate of data and the digital control unit and the energy management unit are configured to control the sampling rate during the operating modes of the electronic pen. 42. The electronic pen according to claim 1, further including an electrical conductor path on a circuit carrier configured as an antenna for the data transfer module. 43. The electronic pen according to claim 1, wherein the position determination sensors are selected from the group consisting of: acceleration sensors, rotation rate sensors, and magnetic field sensors. 44. An apparatus for the electronic detection of a position of a writing utensil, the apparatus comprising: the writing utensil being an electronic pen having a pen position detection with at least one data transfer module and an energy management unit; at least one data receiving module for receiving the data transmitted by the data transfer module of the electronic pen; an external data processing unit for analyzing and processing the received data; a data display unit; a data storage unit; wherein the data processing unit includes an energy management configuration unit, which can be used to configure the energy management unit of the electronic pen. 45. The apparatus of claim 44, further comprising a writing substrate, wherein the writing substrate is a writing paper. 46. A method for the electronic detection of the position of an electronic pen, the method comprising: providing the electronic pen that: includes an energy management unit and a data transfer module; consumes energy; generates at least a part of the energy required for its operation itself; and transmits the particular pen position data through the data transfer module to an external data processing unit having a data receiving module. 47. 
The method according to claim 46, wherein the data transfer rate between the data transfer module and the data receiving module varies according to the type of writing substrate.
Included herein is an electronic pen ( 200 ) with pen position detection comprising at least one electric voltage source ( 206 ), at least a digital control unit ( 217, 106 ), a writing lead ( 209 ), at least one data transfer module ( 219, 109 ) and at least two position determination sensors ( 201, 202, 101, 102 ) for determination of the position and/or motion of the electronic pen ( 200 ), characterized in that the electronic pen ( 200 ) comprises an energy management unit ( 218, 110 ) in communication with the digital control unit ( 217, 106 ) for managing the electrical energy consumption, in particular to minimize the electric energy consumption, and/or comprises means (for example one of 103, 104, 105, 204, 205, 212, 213, 214, 215 ) to be able to generate electrical energy itself.1. An electronic pen comprising: a pen position detection having: at least one electric voltage source; at least a digital control unit; a writing lead; at least one data transfer module; and at least two position determination sensors for determination of the position and/or motion of the electronic pen; and an energy management unit in communication with the digital control unit for managing the electrical energy consumption to minimize the electric energy consumption, the energy management unit having an electrical generation means. 2. The electronic pen according to claim 1, configured having operating modes of at least an active mode, a standby mode, and an off mode with each mode having a different energy consumption, wherein the energy management unit manages said operation modes and controls changes between operating modes. 3. The electronic pen according to claim 2, further comprising at least one DC-to-DC converter. 4. The electronic pen according to claim 1, further comprising at least one DC-to-DC converter. 5. 
The electronic pen according to claim 4, further comprising an additional energy storage for self-generated electrical energy, and the electrical voltage source is configured to be recharged with the self-generated electrical energy generated by the electronic pen. 6. The electronic pen according to claim 3, further comprising an additional energy storage for self-generated electrical energy, and the electrical voltage source is configured to be recharged with the self-generated electrical energy generated by the electronic pen. 7. The electronic pen according to claim 2, further comprising an additional energy storage for self-generated electrical energy, and the electrical voltage source is configured to be recharged with the self-generated electrical energy generated by the electronic pen. 8. The electronic pen according to claim 1, further comprising an additional energy storage for self-generated electrical energy, and the electrical voltage source is configured to be recharged with the self-generated electrical energy generated by the electronic pen. 9. The electronic pen according to claim 8, further comprising at least one photo cell configured to measure the light intensity in the environment of the electronic pen, and, given a sufficient light intensity, generate electrical energy and provide it to the electronic pen. 10. The electronic pen according to claim 2, further comprising at least one photo cell configured to measure the light intensity in the environment of the electronic pen, and, given a sufficient light intensity, generate electrical energy and provide it to the electronic pen. 11. The electronic pen according to claim 1, further comprising at least one photo cell configured to measure the light intensity in the environment of the electronic pen, and, given a sufficient light intensity, generate electrical energy and provide it to the electronic pen. 12. 
The electronic pen according to claim 11, wherein: the at least one photo cell is configured to measure the light intensity in the environment of the electronic pen, and the energy management unit is configured such that: when the measured light intensity falls below a predetermined light intensity threshold at the at least one photo cell for a predetermined minimum duration, the energy management unit changes the operation mode from the active mode to the standby mode or the off mode; and when the measured light intensity exceeds a predetermined light intensity threshold for an excess minimum duration, the energy management unit changes the operation mode from the off mode into the active mode or standby mode. 13. The electronic pen according to claim 10, wherein: the at least one photo cell is configured to measure the light intensity in the environment of the electronic pen, and the energy management unit is configured such that: when the measured light intensity falls below a predetermined light intensity threshold at the at least one photo cell for a predetermined minimum duration, the energy management unit changes the operation mode from the active mode to the standby mode or the off mode; and when the measured light intensity exceeds a predetermined light intensity threshold for an excess minimum duration, the energy management unit changes the operation mode from the off mode into the active mode or standby mode. 14. 
The electronic pen according to claim 9, wherein: the at least one photo cell is configured to measure the light intensity in the environment of the electronic pen, and the energy management unit is configured such that: when the measured light intensity falls below a predetermined light intensity threshold at the at least one photo cell for a predetermined minimum duration, the energy management unit changes the operation mode from the active mode to the standby mode or the off mode; and when the measured light intensity exceeds a predetermined light intensity threshold for an excess minimum duration, the energy management unit changes the operation mode from the off mode into the active mode or standby mode. 15. The electronic pen according to claim 13, wherein the position determination sensors measure an activity level and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen falls below a predetermined measurement activity threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen exceeds a predetermined measurement activity threshold for a predetermined minimum duration. 16. The electronic pen according to claim 10, wherein the position determination sensors measure an activity level and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen falls below a predetermined measurement activity threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen exceeds a predetermined measurement activity threshold for a predetermined minimum duration. 17. 
The electronic pen according to claim 7, wherein the position determination sensors measure an activity level and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen falls below a predetermined measurement activity threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen exceeds a predetermined measurement activity threshold for a predetermined minimum duration. 18. The electronic pen according to claim 2, wherein the position determination sensors measure an activity level and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen falls below a predetermined measurement activity threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the activity level of the electronic pen exceeds a predetermined measurement activity threshold for a predetermined minimum duration. 19. The electronic pen according to claim 18, further comprising a force sensor coupled to the writing lead. 20. The electronic pen according to claim 12, further comprising a force sensor coupled to the writing lead. 21. The electronic pen according to claim 11, further comprising a force sensor coupled to the writing lead. 22. The electronic pen according to claim 8, further comprising a force sensor coupled to the writing lead. 23. The electronic pen according to claim 2, further comprising a force sensor coupled to the writing lead. 24. The electronic pen according to claim 1, further comprising a force sensor coupled to the writing lead. 25. 
The electronic pen according to claim 24, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 26. The electronic pen according to claim 23, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 27. The electronic pen according to claim 22, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 28. 
The electronic pen according to claim 21, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 29. The electronic pen according to claim 20, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 30. The electronic pen according to claim 19, wherein the force sensor measures a force and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen falls below a predetermined measurement force threshold for a predetermined minimum duration and the energy management unit is configured to change the operating mode of the electronic pen when the measured force of the electronic pen exceeds a predetermined measurement force threshold for a predetermined minimum duration. 31. The electronic pen according to claim 30, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 32. 
The electronic pen according to claim 20, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 33. The electronic pen according to claim 19, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 34. The electronic pen according to claim 11, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 35. The electronic pen according to claim 8, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 36. The electronic pen according to claim 4, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 37. The electronic pen according to claim 1, further comprising at least one electrical energy generating device capable of generating electrical energy, providing the electrical energy to the electronic pen, and selected from the following group: a solar cell, a thermoelectric generator, or a piezoelectric generator. 38. 
The electronic pen according to claim 2, wherein the position determination sensors include a sampling rate of data and the digital control unit and the energy management unit are configured to control the sampling rate during the operating modes of the electronic pen. 39. The electronic pen according to claim 2, wherein the data transfer module includes a sampling rate of data and the digital control unit and the energy management unit are configured to control the sampling rate during the operating modes of the electronic pen. 40. The electronic pen according to claim 10, wherein the photo cell includes a sampling rate of data and the digital control unit and the energy management unit are configured to control the sampling rate during the operating modes of the electronic pen. 41. The electronic pen according to claim 20, wherein the force sensor includes a sampling rate of data and the digital control unit and the energy management unit are configured to control the sampling rate during the operating modes of the electronic pen. 42. The electronic pen according to claim 1, further including an electrical conductor path on a circuit carrier configured as an antenna for the data transfer module. 43. The electronic pen according to claim 1, wherein the position determination sensors are selected from the group consisting of: acceleration sensors, rotation rate sensors, and magnetic field sensors. 44. 
An apparatus for the electronic detection of a position of a writing utensil, the apparatus comprising: the writing utensil being an electronic pen having a pen position detection with at least one data transfer module and an energy management unit; at least one data receiving module for receiving the data transmitted by the data transfer module of the electronic pen; an external data processing unit for analyzing and processing the received data; a data display unit; a data storage unit; wherein the data processing unit includes an energy management configuration unit, which can be used to configure the energy management unit of the electronic pen. 45. The apparatus of claim 44, further comprising a writing substrate, wherein the writing substrate is a writing paper. 46. A method for the electronic detection of the position of an electronic pen, the method comprising: providing the electronic pen that: includes an energy management unit and a data transfer module; consumes energy; generates at least a part of the energy required for its operation itself; and transmits the particular pen position data through the data transfer module to an external data processing unit having a data receiving module. 47. The method according to claim 46, wherein the data transfer rate between the data transfer module and the data receiving module varies according to the type of writing substrate.
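The energy-management behavior recited in claims 2 and 12 through 30 follows one pattern: the pen switches between active, standby, and off modes only when a sensor reading (light intensity, motion activity, or pen-tip force) stays past a threshold for a predetermined minimum duration. The following is an illustrative sketch of that debounced mode-switching logic, not the patented implementation; the class name, threshold, and duration values are assumptions.

```python
# Hedged sketch (assumed names and parameters): a minimal debounced state
# machine modeling the mode switching described in claims 2 and 12-30.

ACTIVE, STANDBY, OFF = "active", "standby", "off"

class EnergyManager:
    def __init__(self, threshold, min_duration):
        self.threshold = threshold        # e.g. light intensity or pen-tip force
        self.min_duration = min_duration  # seconds the reading must persist
        self.mode = ACTIVE
        self._below_since = None
        self._above_since = None

    def update(self, reading, now):
        """Feed one sensor reading; change mode only after the reading stays
        past the threshold for min_duration (the claimed minimum duration)."""
        if reading < self.threshold:
            self._above_since = None
            if self._below_since is None:
                self._below_since = now
            elif now - self._below_since >= self.min_duration and self.mode == ACTIVE:
                self.mode = STANDBY       # prolonged low reading: save energy
        else:
            self._below_since = None
            if self._above_since is None:
                self._above_since = now
            elif now - self._above_since >= self.min_duration and self.mode != ACTIVE:
                self.mode = ACTIVE        # prolonged activity: wake up
        return self.mode
```

A single hysteresis threshold is used here for brevity; the claims allow distinct thresholds and durations for the falling and rising transitions.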
2,600
10,908
10,908
15,949,968
2,623
A force transducer for an electronic device can be operated in a drive mode and a sense mode simultaneously. In particular, the force transducer can provide haptic output while simultaneously receiving force input from a user. The force transducer is primarily defined by a monolithic piezoelectric body, a ground electrode, a drive electrode, and a sense electrode. The ground electrode and the drive electrode each include multiple electrically conductive sheets that extend into the monolithic body; the electrically conductive sheets of the ground electrode and the drive electrode are interdigitally engaged. The sense electrode of the force transducer is typically disposed on an exterior surface of the monolithic body.
1. A force transducer comprising: a ground electrode comprising a first plurality of sheets; a drive electrode comprising a second plurality of sheets, interdigitally engaged with the first plurality of sheets; a monolithic body separating the first plurality of sheets from the second plurality of sheets and comprising an upper surface; and a sense electrode disposed on the upper surface of the monolithic body. 2. The force transducer of claim 1, wherein: the first plurality of sheets comprises an upper sheet; and the upper sheet is below the upper surface of the monolithic body. 3. The force transducer of claim 1, wherein the monolithic body is formed from a piezoelectric material. 4. The force transducer of claim 3, wherein the piezoelectric material comprises barium titanate. 5. The force transducer of claim 1, wherein the sense electrode is a member of a set of sense electrodes arranged in a row on the upper surface of the monolithic body. 6. The force transducer of claim 1, wherein: the ground electrode is coupled to the sense electrode via a sense circuit; and the ground electrode is coupled to the drive electrode via a drive circuit. 7. The force transducer of claim 1, wherein: the first plurality of sheets comprises a first number of sheets and the second plurality of sheets comprises a second number of sheets; and the first number is different from the second number. 8. An electronic device comprising: a housing; a display within the housing; a multimode force interface positioned below the display and comprising: an interdigitated force transducer coupled to a surface of the display and aligned with an active display region of the display; and a controller coupled to the interdigitated force transducer; wherein the controller is configured to operate the interdigitated force transducer in a drive mode and a sense mode. 9. 
The electronic device of claim 8, wherein, in the drive mode, the controller applies a drive signal to the interdigitated force transducer and, in response, the interdigitated force transducer generates a haptic output through the display. 10. The electronic device of claim 8, wherein, in the sense mode, the controller receives a sense signal from the interdigitated force transducer that corresponds to a force input applied by a user to the display. 11. The electronic device of claim 8, wherein the interdigitated force transducer comprises: a body comprising a piezoelectric material; a ground electrode comprising more than one electrically conductive sheets extending into the body; and a drive electrode interdigitally engaged with the ground electrode. 12. The electronic device of claim 11, wherein the interdigitated force transducer further comprises an array of sense electrodes disposed on an external surface of the body, parallel to one of the electrically conductive sheets of the ground electrode. 13. The electronic device of claim 9, wherein: the interdigitated force transducer is a first interdigitated force transducer; and the multimode force interface comprises: an array of interdigitated force transducers comprising the first interdigitated force transducer. 14. The electronic device of claim 13, wherein the array of interdigitated force transducers are arranged in a grid. 15. A method of operating a multimode force interface comprising: providing a drive signal to a drive electrode of an interdigitated force transducer; obtaining a sense signal from a sense electrode of the interdigitated force transducer; and filtering the sense signal based on the drive signal. 16. The method of claim 15, wherein filtering the sense signal comprises reducing an amplitude of the sense signal. 17. The method of claim 15, wherein filtering the sense signal comprises applying a low-pass filter to the sense signal. 18. 
The method of claim 15, wherein the drive signal is a higher frequency signal than the sense signal. 19. The method of claim 15, wherein the drive signal is associated with a selected haptic output. 20. The method of claim 19, wherein the selected haptic output is a click.
A force transducer for an electronic device can be operated in a drive mode and a sense mode simultaneously. In particular, the force transducer can provide haptic output while simultaneously receiving force input from a user. The force transducer is primarily defined by a monolithic piezoelectric body, a ground electrode, a drive electrode, and a sense electrode. The ground electrode and the drive electrode each include multiple electrically conductive sheets that extend into the monolithic body; the electrically conductive sheets of the ground electrode and the drive electrode are interdigitally engaged. The sense electrode of the force transducer is typically disposed on an exterior surface of the monolithic body.1. A force transducer comprising: a ground electrode comprising a first plurality of sheets; a drive electrode comprising a second plurality of sheets, interdigitally engaged with the first plurality of sheets; a monolithic body separating the first plurality of sheets from the second plurality of sheets and comprising an upper surface; and a sense electrode disposed on the upper surface of the monolithic body. 2. The force transducer of claim 1, wherein: the first plurality of sheets comprises an upper sheet; and the upper sheet is below the upper surface of the monolithic body. 3. The force transducer of claim 1, wherein the monolithic body is formed from a piezoelectric material. 4. The force transducer of claim 3, wherein the piezoelectric material comprises barium titanate. 5. The force transducer of claim 1, wherein the sense electrode is a member of a set of sense electrodes arranged in a row on the upper surface of the monolithic body. 6. The force transducer of claim 1, wherein: the ground electrode is coupled to the sense electrode via a sense circuit; and the ground electrode is coupled to the drive electrode via a drive circuit. 7. 
The force transducer of claim 1, wherein: the first plurality of sheets comprises a first number of sheets and the second plurality of sheets comprises a second number of sheets; and the first number is different from the second number. 8. An electronic device comprising: a housing; a display within the housing; a multimode force interface positioned below the display and comprising: an interdigitated force transducer coupled to a surface of the display and aligned with an active display region of the display; and a controller coupled to the interdigitated force transducer; wherein the controller is configured to operate the interdigitated force transducer in a drive mode and a sense mode. 9. The electronic device of claim 8, wherein, in the drive mode, the controller applies a drive signal to the interdigitated force transducer and, in response, the interdigitated force transducer generates a haptic output through the display. 10. The electronic device of claim 8, wherein, in the sense mode, the controller receives a sense signal from the interdigitated force transducer that corresponds to a force input applied by a user to the display. 11. The electronic device of claim 8, wherein the interdigitated force transducer comprises: a body comprising a piezoelectric material; a ground electrode comprising more than one electrically conductive sheets extending into the body; and a drive electrode interdigitally engaged with the ground electrode. 12. The electronic device of claim 11, wherein the interdigitated force transducer further comprises an array of sense electrodes disposed on an external surface of the body, parallel to one of the electrically conductive sheets of the ground electrode. 13. 
The electronic device of claim 9, wherein: the interdigitated force transducer is a first interdigitated force transducer; and the multimode force interface comprises: an array of interdigitated force transducers comprising the first interdigitated force transducer. 14. The electronic device of claim 13, wherein the array of interdigitated force transducers are arranged in a grid. 15. A method of operating a multimode force interface comprising: providing a drive signal to a drive electrode of an interdigitated force transducer; obtaining a sense signal from a sense electrode of the interdigitated force transducer; and filtering the sense signal based on the drive signal. 16. The method of claim 15, wherein filtering the sense signal comprises reducing an amplitude of the sense signal. 17. The method of claim 15, wherein filtering the sense signal comprises applying a low-pass filter to the sense signal. 18. The method of claim 15, wherein the drive signal is a higher frequency signal than the sense signal. 19. The method of claim 15, wherein the drive signal is associated with a selected haptic output. 20. The method of claim 19, wherein the selected haptic output is a click.
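Claims 15 through 17 of the force transducer record describe filtering the sense signal based on the drive signal, including applying a low-pass filter, so that the high-frequency haptic drive does not mask the slowly varying force input. Below is a hedged illustration of one way such filtering could look; the coupling coefficient and filter constant are assumptions, not values from the patent.

```python
# Hedged sketch (not the claimed circuit): remove estimated drive-signal
# feedthrough from each sense sample, then low-pass filter the result
# (the low-pass step corresponds to claim 17).

def filter_sense(sense, drive, coupling=0.8, alpha=0.2):
    """Return the filtered sense samples given paired drive samples.
    coupling: assumed fraction of the drive signal leaking into the sense
    electrode; alpha: one-pole low-pass smoothing factor."""
    out, y = [], 0.0
    for s, d in zip(sense, drive):
        decoupled = s - coupling * d             # subtract drive crosstalk
        y = alpha * decoupled + (1 - alpha) * y  # one-pole low-pass filter
        out.append(y)
    return out
```

With a high-frequency drive and a constant applied force, the output settles toward the force component while the drive component is suppressed.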
2,600
10,909
10,909
16,035,541
2,622
Computing devices and at least one machine readable medium for controlling the functioning of a touch screen are described herein. The computing device includes a touchscreen having one or more force sensors. The computing device also includes first logic to detect a force applied to the touchscreen via the one or more force sensors and second logic to control a functioning of the touchscreen in response to the applied force.
1-21. (canceled) 22. A mobile computing device, comprising: memory; a power source; communication circuitry to interact with a network; a touchscreen having a capacitive touch sensor, the touch sensor associated with a first power state and a second power state, the first power state being a lower power state than the second power state; a sensor to detect application of a force on the touchscreen when the touch sensor is in the first power state; and at least one circuit to: determine satisfaction of a condition by the application of the force; detect the application of the force at a location on the touchscreen based on output from the sensor; and cause the touch sensor to switch to the second power state in response to (1) the satisfaction of the condition and (2) the location being in a particular position of the touchscreen. 23. The mobile computing device as defined in claim 22, wherein the touch sensor is to remain in the first power state when the location is outside of the particular position of the touchscreen. 24. The mobile computing device as defined in claim 22, wherein the at least one circuit is to not switch to the second power state when the condition is not satisfied. 25. The mobile computing device as defined in claim 24, wherein the condition is satisfied when the force exceeds a threshold. 26. The mobile computing device as defined in claim 24, wherein the condition is satisfied when the force is a continuous force applied for a threshold period of time. 27. The mobile computing device as defined in claim 26, wherein the continuous force is to include a sliding action along a particular region of the touchscreen. 28. The mobile computing device as defined in claim 22, further including a plurality of sensors to detect the application of the force, the at least one circuit to determine the location of the application of the force based on differences between an amount of the force sensed by different ones of the plurality of sensors. 29. 
The mobile computing device as defined in claim 22, wherein the at least one circuit is a processor. 30. One or more storage devices comprising instructions that, when executed, cause a machine to at least: detect application of a force on a touchscreen when a capacitive touch sensor of the touchscreen is in a first power state, the first power state being lower than a second power state; and determine satisfaction of a condition by the application of the force; detect the application of the force at a location on the touchscreen based on an output from a sensor; and cause the touch sensor to switch to the second power state in response to (1) the satisfaction of the condition and (2) the location being in a predefined position of the touchscreen. 31. The one or more storage devices as defined in claim 30, wherein the instructions further cause the machine to cause the touch sensor to remain in the first power state when the location is outside of the predefined position of the touchscreen. 32. The one or more storage devices as defined in claim 30, wherein the instructions further cause the machine to disregard the force when the condition is not satisfied. 33. The one or more storage devices as defined in claim 32, wherein the instructions cause the machine to determine the condition is satisfied when the force exceeds a threshold. 34. The one or more storage devices as defined in claim 32, wherein the instructions cause the machine to determine the condition is satisfied when the force is applied in excess of a threshold amount for at least a threshold period of time. 35. The one or more storage devices as defined in claim 32, wherein the instructions cause the machine to determine the condition is satisfied when the force includes a sliding action along a particular region of the touchscreen and is applied in excess of a threshold amount for at least a threshold period of time. 36. 
The one or more storage devices as defined in claim 30, wherein the instructions further cause the machine to determine the location of the application of the force based on differences between an amount of the force sensed by different ones of a plurality of sensors. 37. A mobile computing device, comprising: a power source; means for connecting with a network; a touchscreen having a capacitive touch sensor, the touch sensor associated with a first power state and a second power state, the first power state being a lower power state than the second power state; means for sensing application of a force on the touchscreen when the capacitive touch sensor is in the first power state; means for controlling the touchscreen by: determining satisfaction of a condition by the application of the force; and detecting the application of the force at a location on the touchscreen based on an output from the means for sensing; and causing the touch sensor to switch to the second power state in response to (1) the satisfaction of the condition and (2) the location being in a particular position of the touchscreen. 38. The mobile computing device as defined in claim 37, wherein the touch sensor is to remain in the first power state when the location is outside of the particular position of the touchscreen. 39. The mobile computing device as defined in claim 37, wherein the means for controlling is to disregard the force when the condition is not satisfied. 40. The mobile computing device as defined in claim 39, wherein the condition is satisfied when the force exceeds a threshold. 41. The mobile computing device as defined in claim 39, wherein the force is applied in excess of a threshold amount for at least a threshold period of time. 42. The mobile computing device as defined in claim 39, wherein the force is to include a sliding action along a particular region of the touchscreen and is applied in excess of a threshold amount for at least a threshold period of time. 43. 
The mobile computing device as defined in claim 37, wherein the means for controlling is to determine the location of the application of the force based on differences between an amount of the force sensed by different ones of a plurality of the means for sensing.
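Claims 22-28 describe a wake-on-force policy: the capacitive touch sensor stays in a lower power state until a sensed force both satisfies a condition (e.g. exceeding a threshold) and lands in a particular region of the touchscreen. A minimal sketch of that logic follows; the function names, threshold, and region coordinates are illustrative assumptions, since the claims fix no concrete values.

```python
# Hypothetical sketch of the claimed wake condition: the touch sensor
# leaves its low-power state only when the force exceeds a threshold
# AND the touch location falls inside a predefined wake region.
# All names and numbers here are illustrative, not from the source.

LOW_POWER, FULL_POWER = "low", "full"
FORCE_THRESHOLD = 2.0          # arbitrary force units (assumed)
WAKE_REGION = (0, 0, 100, 40)  # x_min, y_min, x_max, y_max (assumed)

def should_wake(force: float, x: float, y: float) -> bool:
    """Return True if the touch sensor should switch to the full-power state."""
    x_min, y_min, x_max, y_max = WAKE_REGION
    in_region = x_min <= x <= x_max and y_min <= y <= y_max
    return force > FORCE_THRESHOLD and in_region

def handle_force_event(state: str, force: float, x: float, y: float) -> str:
    # Per claims 23-24: remain in the current state unless both the
    # condition and the location requirement are satisfied.
    if state == LOW_POWER and should_wake(force, x, y):
        return FULL_POWER
    return state
```

Claim 28's variant, locating the touch from force differences across several sensors, would replace the `(x, y)` inputs with a weighted combination of per-sensor readings.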
TechCenter: 2,600
Unnamed: 0: 10,910
level_0: 10,910
ApplicationNumber: 16,217,234
ArtUnit: 2,647
An allocation method of communication resources is for use with at least a user mobile device, at least first and second wireless network access point devices, and a base station device for mobile communication. In the allocation method, the base station device receives a registration request from the user mobile device; and the base station device determines an order of the first and second wireless network access point devices for the user mobile device to register in according to intensities of signals of the first and second wireless network access point devices, which are measured by the user mobile device, and counts of user mobile devices having registered in the first and second wireless network access point devices, respectively.
1. A method for associating a user mobile device with one of a group of wireless network access point devices for mobile communication, the group of wireless network access point devices and the user mobile device all linking to a base station device, the group of wireless network access point devices including a first wireless network access point device and a second wireless network access point device, and the method comprising: the base station device receiving a registration request from the user mobile device; and the base station device determining an order of the first and second wireless network access point devices for the user mobile device to register in according to intensities of signals of the first and second wireless network access point devices, which are measured by the user mobile device, and counts of user mobile devices having registered in the first and second wireless network access point devices, respectively. 2. The method according to claim 1, wherein the base station device requests information of the first signal intensity and the second signal intensity from the user mobile device, and records the information of the first and/or second wireless network access point devices in compliance with a specific condition in a first table. 3. The method according to claim 2, wherein the base station device further establishes a second table, in which an identification code of the first wireless network access point device, a count of user mobile devices registered in the first wireless network access point device, an identification code of the second wireless network access point device, and a count of user mobile devices registered in the second wireless network access point device are recorded. 4. 
The method according to claim 3, wherein the base station device determines an order of the first wireless network access point device, and the second wireless network access point device for the user mobile device to register in according to contents of the first table and the second table. 5. The method according to claim 3, wherein when the first and second wireless network access point devices have the same counts of registered user mobile devices according to the second table, one of the first and second wireless network access point devices having a higher signal intensity according to the first table is selected by the user mobile device to register in. 6. The method according to claim 2, wherein the first or second wireless network access point device is determined to comply with the specific condition when the signal intensity of the corresponding wireless network access point device is greater than a default threshold. 7. The method according to claim 1, further comprising: the base station associating the user mobile device with one of the wireless network access point devices, which has the most prior index, for registration. 8. The method according to claim 7, wherein if registration times of the user mobile device into the first or second wireless network access point device have reached a preset number but still fails, another wireless network access point device, which has the second most prior index, will be automatically picked up for the user mobile device to register in. 9. The method according to claim 1, wherein if a count of registered user mobile devices in the first wireless network access point device and a count of registered user mobile devices in the second wireless network access point device are equal, one of the first and second wireless network access point devices having a higher signal intensity is selected by the user mobile device to register in. 10. 
A base station device for mobile communication, for use with at least one user mobile device and at least two wireless network access point devices, comprising: a node device in communication with the Internet and wirelessly connected to the at least one user mobile device and the at least two wireless network access point devices to operate in a heterogeneous network, the node device receiving a registration request from the user mobile device; and an estimating device in communication with the node device, determining an order of the at least two wireless network access point devices for the at least one user mobile device to register in according to intensities of signals of the at least two wireless network access point devices, which are measured by the user mobile device, and counts of user mobile devices having registered in the at least two wireless network access point devices, respectively. 11. The base station device according to claim 10, wherein the node device requests information of the first signal intensity and the second signal intensity from the user mobile device, and records the information of the first and/or second wireless network access point devices in compliance with a specific condition in a first table. 12. The base station device according to claim 11, wherein the node device further establishes a second table, in which identification codes of the at least two wireless network access point devices, and respective counts of user mobile devices registered in the at least two wireless network access point devices are recorded. 13. The base station device according to claim 12, wherein the estimating device determines an order of the at least two wireless network access point devices for at least one user mobile device to register in according to contents of the first table and the second table. 14. 
The base station device according to claim 12, wherein when the estimating device determines that two of the at least two wireless network access point devices have the same counts of registered user mobile devices according to the second table, one of the at least two wireless network access point devices, which has a higher signal intensity according to the first table is selected by the at least one user mobile device to register in. 15. The base station device according to claim 11, wherein any of the at least two wireless network access point devices is determined to comply with the specific condition when the signal intensity of the corresponding wireless network access point device is greater than a default threshold. 16. The base station device according to claim 10, further comprising: the node device associating the user mobile device with one of the wireless network access point devices, which has the most prior index, for registration. 17. The base station device according to claim 13, wherein if registration times of the at least one user mobile device into one of the at least two wireless network access point devices have reached a preset number but still fails, another one of the at least two wireless network access point devices, which has the second most prior index, will be automatically picked up by the node device for the user mobile device to register in. 18. The base station device according to claim 10, wherein if a count of registered user mobile devices in one of the at least two wireless network access point devices and a count of registered user mobile devices in another one of the at least two wireless network access point devices are equal, one of the at least two wireless network access point devices, which has a higher signal intensity is selected by the user mobile device to register in.
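The claimed ordering (claims 1, 5, 6, and 9) ranks candidate access points primarily by how few user devices are already registered in each, breaks ties by measured signal intensity, and considers only access points whose signal exceeds a default threshold. A minimal sketch under those rules; every name and number below is an illustrative assumption, not from the source.

```python
# Hypothetical sketch of the claimed allocation: rank eligible access
# points by ascending registered-device count (second table), breaking
# ties by descending signal intensity (first table). Only APs whose
# signal exceeds an assumed default threshold are considered.

DEFAULT_THRESHOLD = -80  # dBm; assumed value for the "specific condition"

def order_access_points(aps):
    """aps: list of dicts with 'id', 'signal' (dBm), 'registered' (count).

    Returns the registration order for the user mobile device; on a
    registration failure (claim 8), the device would fall through to
    the next entry in this order.
    """
    eligible = [ap for ap in aps if ap["signal"] > DEFAULT_THRESHOLD]
    return sorted(eligible, key=lambda ap: (ap["registered"], -ap["signal"]))

aps = [
    {"id": "AP1", "signal": -60, "registered": 3},
    {"id": "AP2", "signal": -50, "registered": 3},  # tie on count: wins on signal
    {"id": "AP3", "signal": -40, "registered": 1},
    {"id": "AP4", "signal": -90, "registered": 0},  # below threshold: excluded
]
order = [ap["id"] for ap in order_access_points(aps)]
# order == ["AP3", "AP2", "AP1"]
```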
TechCenter: 2,600
Unnamed: 0: 10,911
level_0: 10,911
ApplicationNumber: 15,611,223
ArtUnit: 2,623
Aspects of the present disclosure are directed to systems, apparatuses, and methods for the projection of images onto keycaps of a computer keyboard.
1. An electronic keyboard to detect user inputs, the electronic keyboard comprising: an electro-mechanical interface including a plurality of keys, each of the keys including a keycap, configured and arranged to receive a first user input on a first keycap of said plurality of keys, and a key matrix coupled to each key of said plurality of keys, and in response to the first user input is configured and arranged to convert the first user input into an electrical signal indicative of a first data input; and an image projecting means configured and arranged to project an image onto the one or more keycaps. 2. The electronic keyboard of claim 1, wherein the image projecting means is further configured and arranged to project an image onto the one or more keycaps indicative of a data input associated with each key. 3. The electronic keyboard of claim 1, wherein the electro-mechanical interface further includes a membrane coupled between the key matrix and the plurality of keys, the membrane configured and arranged to return the first key to an initial position after responding to the first user input, and wherein the image projecting means is a display coupled opposite the plurality of keys relative to the key matrix. 4. The electronic keyboard of claim 1, wherein the image projecting means is a display, and the key matrix is a printed circuit board including electrical circuitry, the electro-mechanical interface coupled to a top surface of the display. 5. 
The electronic keyboard of claim 1, wherein the electronic keyboard further includes a housing that encompasses the image projecting means, and at least partially encompasses the electro-mechanical interface; and the electro-mechanical interface further includes a membrane coupled between the electro-mechanical interface and the one or more keys, the membrane configured and arranged to complete an electrical circuit of the key matrix, and return the one or more keys to an initial position after responding to the user input. 6. The electronic keyboard of claim 5, wherein the key matrix is substantially optically transparent and the electro-mechanical interface includes a plurality of apertures that extend through at least one of the membrane and the housing, and the image projecting means is coupled to the key matrix opposite the keys, and is further configured and arranged to project the image to the keycap of the one or more keys via the substantially optically transparent key matrix, and the plurality of apertures in the at least one of the membrane and the housing. 7. The electronic keyboard of claim 5, wherein the key matrix and the membrane are substantially optically transparent, the image projecting means is coupled to the key matrix opposite the keys, and the image projecting means is further configured and arranged to project an image to the keycaps of the one or more keys via the substantially optically transparent key matrix and membrane. 8. The electronic keyboard of claim 1, further including controller circuitry configured and arranged to receive data indicative of a selected keyboard mapping, and to project an image onto the keycaps of one or more of the keys, via the image projecting means, associated with the mapped data inputs of each key. 9. 
The electronic keyboard of claim 1, wherein the image projecting means is further configured and arranged to project a number of partial images in proximity to the first key that form a single image on the first keycap. 10. The electronic keyboard of claim 1, wherein the one or more keys further include a hollow shaft configured and arranged to translate the mechanical user input from the key to the key matrix, and to facilitate the projection of images from the image projecting means to the keycap. 11. The electronic keyboard of claim 1, further including fiber optic cable coupled between the keycap of each of the keys and the image projecting means, the fiber optic cable configured and arranged to transmit the projected image from the image projecting means to the keycap of each key. 12. The electronic keyboard of claim 1, wherein the image projecting means is a plurality of displays, where each display is coupled to a back side of the one or more keys, and configured and arranged to project an image through the keys coupled to the displays and onto each of the respective keycaps of the keys. 13. The electronic keyboard of claim 1, wherein the keys are optically transparent and configured and arranged to facilitate projection of an image through the key to the keycap. 14. The electronic keyboard of claim 1, wherein the one or more keys include a printed image on the keycap of each key indicative of a default data input associated with activation of the key; and the image projecting means is further configured and arranged to operate in a low-power mode where the projected image is disabled and only the printed images on the keycaps of the one or more keys are visible, and a normal operating mode where the image projecting means projects an image onto the keycap of the key indicative of a mapped data input associated with activation of the key, and wherein the projected image diminishes the visibility of the printed image on the top surface of the key. 15. 
The electronic keyboard of claim 14, wherein the printed image on the keycap includes material with material characteristics including increased optical translucence in response to irradiation of light in a spectrum consistent with a computer display. 16. The electronic keyboard of claim 1, wherein the electro-mechanical interface includes an optical key actuation detection circuitry configured and arranged to detect the absence of light in proximity to the first keycap caused by a user input on the first key, and associate the absence of light in proximity to the first keycap with a mapped data input assigned to the first key. 17. An electronic keyboard comprising: one or more keys, each key including a keycap, the keys configured and arranged to detect user inputs; and an image projecting means configured and arranged to project an image onto the keycaps of the one or more keys indicative of a mapped data input associated with each key. 18. The electronic keyboard of claim 17, wherein the image projecting means further includes controller circuitry configured and arranged to receive data indicative of a selected keyboard mapping, and to project an image onto the keycaps of each of the keys associated with the mapped data inputs for each key. 19. The electronic keyboard of claim 17, further including controller circuitry configured and arranged to receive image data to be displayed on the keycaps of each of the keys associated with an application-specific mapping, and the image projecting means is further configured and arranged to either display the application-specific images, or to simultaneously display both the application-specific images and a default data input image on the keycaps of the one or more keys. 20. 
A method of operating an electronic keyboard including: selecting a keyboard layout in a computer operating system; projecting images on keycaps of each of one or more keys of the electronic keyboard indicative of a first data input assigned to each of the keys based on the selected keyboard layout; in response to the selection of a first key, electronically transmitting a first key code associated with the first key to the computer operating system; associating the first key code to the first data input assigned to the first key based on the selected keyboard layout; and inputting the first data input associated with the first key code into the computer operating system. 21. The method of claim 20, wherein said selecting step is performed by a user. 22. The method of claim 20, wherein said selecting step is performed automatically by software or firmware.
Aspects of the present disclosure are directed to systems, apparatuses, and methods for the projection of images onto keycaps of a computer keyboard.1. An electronic keyboard to detect user inputs, the electronic keyboard comprising: an electro-mechanical interface including a plurality of keys, each of the keys including a keycap, configured and arranged to receive a first user input on a first keycap of said plurality of keys, and a key matrix coupled to each key of said plurality of keys, and in response to the first user input is configured and arranged to convert the first user input into an electrical signal indicative of a first data input; and an image projecting means configured and arranged to project an image onto the one or more keycaps. 2. The electronic keyboard of claim 1, wherein the image projecting means is further configured and arranged to project an image onto the one or more keycaps indicative of a data input associated with each key. 3. The electronic keyboard of claim 1, wherein the electro-mechanical interface further includes a membrane coupled between the key matrix and the plurality of keys, the membrane configured and arranged to return the first key to an initial position after responding to the first user input, and wherein the image projecting means is a display coupled opposite the plurality of keys relative to the key matrix. 4. The electronic keyboard of claim 1, wherein the image projecting means is a display, and the key matrix is a printed circuit board including electrical circuitry, the electro-mechanical interface coupled to a top surface of the display. 5. 
The electronic keyboard of claim 1, wherein the electronic keyboard further includes a housing that encompasses the image projecting means, and at least partially encompasses the electro-mechanical interface; and the electro-mechanical interface further includes a membrane coupled between the electro-mechanical interface and the one or more keys, the membrane configured and arranged to complete an electrical circuit of the key matrix, and return the one or more keys to an initial position after responding to the user input. 6. The electronic keyboard of claim 5, wherein the key matrix is substantially optically transparent and the electro-mechanical interface includes a plurality of apertures that extend through at least one of the membrane and the housing, and the image projecting means is coupled to the key matrix opposite the keys, and is further configured and arranged to project the image to the keycap of the one or more keys via the substantially optically transparent key matrix, and the plurality of apertures in the at least one of the membrane and the housing. 7. The electronic keyboard of claim 5, wherein the key matrix and the membrane are substantially optically transparent, the image projecting means is coupled to the key matrix opposite the keys, and the image projecting means is further configured and arranged to project an image to the keycaps of the one or more keys via the substantially optically transparent key matrix and membrane. 8. The electronic keyboard of claim 1, further including controller circuitry configured and arranged to receive data indicative of a selected keyboard mapping, and to project an image onto the keycaps of one or more of the keys, via the image projecting means, associated with the mapped data inputs of each key. 9. 
The electronic keyboard of claim 1, wherein the image projecting means is further configured and arranged to project a number of partial images in proximity to the first key that form a single image on the first keycap. 10. The electronic keyboard of claim 1, wherein the one or more keys further include a hollow shaft configured and arranged to translate the mechanical user input from the key to the key matrix, and to facilitate the projection of images from the image projecting means to the keycap. 11. The electronic keyboard of claim 1, further including fiber optic cable coupled between the keycap of each of the keys and the image projecting means, the fiber optic cable configured and arranged to transmit the projected image from the image projecting means to the keycap of each key. 12. The electronic keyboard of claim 1, wherein the image projecting means is a plurality of displays, where each display is coupled to a back side of the one or more keys, and configured and arranged to project an image through the keys coupled to the displays and onto each of the respective keycaps of the keys. 13. The electronic keyboard of claim 1, wherein the keys are optically transparent and configured and arranged to facilitate projection of an image through the key to the keycap. 14. The electronic keyboard of claim 1, wherein the one or more keys include a printed image on the keycap of each key indicative of a default data input associated with activation of the key; and the image projecting means is further configured and arranged to operate in a low-power mode where the projected image is disabled and only the printed images on the keycaps of the one or more keys are visible, and a normal operating mode where the image projecting means projects an image onto the keycap of the key indicative of a mapped data input associated with activation of the key, and wherein the projected image diminishes the visibility of the printed image on the top surface of the key. 15. 
The electronic keyboard of claim 14, wherein the printed image on the keycap includes material with material characteristics including increased optical translucence in response to irradiation of light in a spectrum consistent with a computer display. 16. The electronic keyboard of claim 1, wherein the electro-mechanical interface includes an optical key actuation detection circuitry configured and arranged to detect the absence of light in proximity to the first keycap caused by a user input on the first key, and associate the absence of light in proximity to the first keycap with a mapped data input assigned to the first key. 17. An electronic keyboard comprising: one or more keys, each key including a keycap, the keys configured and arranged to detect user inputs; and an image projecting means configured and arranged to project an image onto the keycaps of the one or more keys indicative of a mapped data input associated with each key. 18. The electronic keyboard of claim 17, wherein the image projecting means further includes controller circuitry configured and arranged to receive data indicative of a selected keyboard mapping, and to project an image onto the keycaps of each of the keys associated with the mapped data inputs for each key. 19. The electronic keyboard of claim 17, further including controller circuitry configured and arranged to receive image data to be displayed on the keycaps of each of the keys associated with an application-specific mapping, and the image projecting means is further configured and arranged to either display the application-specific images, or to simultaneously display both the application-specific images and a default data input image on the keycaps of the one or more keys. 20. 
A method of operating an electronic keyboard including: selecting a keyboard layout in a computer operating system; projecting images on keycaps of each of one or more keys of the electronic keyboard indicative of a first data input assigned to each of the keys based on the selected keyboard layout; in response to the selection of a first key, electronically transmitting a first key code associated with the first key to the computer operating system; associating the first key code to the first data input assigned to the first key based on the selected keyboard layout; and inputting the first data input associated with the first key code into the computer operating system. 21. The method of claim 20, wherein said selecting step is performed by a user. 22. The method of claim 20, wherein said selecting step is performed automatically by software or firmware.
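The method claim above (claim 20) separates the fixed key code a keyboard transmits from the data input the operating system resolves it to under the selected layout. A minimal sketch of that resolution step, assuming hypothetical key codes and layout tables (all names and values here are illustrative, not from the patent):

```python
# Hypothetical sketch of claim 20: the keyboard sends a fixed key code per
# physical key; the operating system associates that code with a data input
# based on the currently selected layout. Layouts and codes are illustrative.

LAYOUTS = {
    "qwerty": {16: "q", 17: "w", 18: "e"},
    "dvorak": {16: "'", 17: ",", 18: "."},
}

class KeyboardOS:
    def __init__(self, layout="qwerty"):
        self.layout = layout   # selecting a keyboard layout in the OS
        self.buffer = []       # data inputs entered so far

    def select_layout(self, layout):
        # per claims 21-22, this may be done by a user or by software/firmware
        self.layout = layout

    def on_key_press(self, key_code):
        # associate the transmitted key code with the data input assigned
        # to that key under the selected layout, then input it
        data_input = LAYOUTS[self.layout][key_code]
        self.buffer.append(data_input)
        return data_input

kb = KeyboardOS()
kb.on_key_press(16)            # 'q' under QWERTY
kb.select_layout("dvorak")
kb.on_key_press(16)            # "'" under Dvorak, same physical key
```

The same key code yields different data inputs as the layout changes, which is what makes the projected keycap images (showing the current mapping) useful.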
2,600
10,912
10,912
11,617,553
2,621
Some aspects of the invention provide an interactive, touch-sensitive user interface device. A user interface device for a data processing system is provided, which includes a sensor having a surface configured to contactingly receive a pointing device and further configured to detect physical contact by a living human and to differentiate between said physical contact and movement of the pointing device when the pointing device is engaged with the surface. The user interface device is operable to transmit information corresponding to said physical contact to the data processing system.
1. A user interface device for a data processing system, comprising a sensor having a surface configured to contactingly receive a pointing device and further configured to detect physical contact by a living human and to differentiate between said physical contact and movement of the pointing device when the pointing device is engaged with the surface, wherein the user interface device is operable to transmit information corresponding to said physical contact to the data processing system. 2. The user interface device of claim 1, wherein the pointing device is a mouse and the user interface device is a mouse pad. 3. The user interface device of claim 1, wherein information is transmitted to the data processing system by wireless means. 4. The user interface device of claim 1, wherein the surface comprises a plurality of surface regions and the information corresponding to said physical contact and transmitted to the data processing system comprises information to identify the specific surface region receiving the physical contact. 5. The user interface device of claim 4, wherein each of the plurality of surface regions corresponds to a command issued to a software application running on the data processing system. 6. The user interface device of claim 4, wherein each surface region in the plurality of surface regions is associated with a predefined permanent sensor region. 7. The user interface device of claim 4, wherein each surface region in the plurality of surface regions is associated with one of a plurality of sensor regions that have been dynamically defined based on a software application in association with which the user interface device is to be used. 8. 
A mouse pad for a data processing system, comprising a sensor having a surface configured to contactingly receive a mouse and further configured to detect physical contact by a living human and to differentiate between said physical contact and movement of the mouse when the mouse is engaged with the surface, wherein the mouse pad is operable to transmit information corresponding to said physical contact to the data processing system. 9. The mouse pad of claim 8, wherein information is transmitted to the data processing system by wireless means. 10. The mouse pad of claim 8, wherein the surface comprises a plurality of surface regions and the information corresponding to said physical contact and transmitted to the data processing system comprises information to identify the specific surface region receiving the physical contact. 11. The mouse pad of claim 10, wherein each of the plurality of surface regions corresponds to a command issued to a software application running on the data processing system. 12. The mouse pad of claim 10, wherein each surface region in the plurality of surface regions is associated with a predefined permanent sensor region. 13. The mouse pad of claim 10, wherein each surface region in the plurality of surface regions is associated with one of a plurality of sensor regions that have been dynamically defined based on a software application in association with which the user interface device is to be used.
Some aspects of the invention provide an interactive, touch-sensitive user interface device. A user interface device for a data processing system is provided, which includes a sensor having a surface configured to contactingly receive a pointing device and further configured to detect physical contact by a living human and to differentiate between said physical contact and movement of the pointing device when the pointing device is engaged with the surface. The user interface device is operable to transmit information corresponding to said physical contact to the data processing system.1. A user interface device for a data processing system, comprising a sensor having a surface configured to contactingly receive a pointing device and further configured to detect physical contact by a living human and to differentiate between said physical contact and movement of the pointing device when the pointing device is engaged with the surface, wherein the user interface device is operable to transmit information corresponding to said physical contact to the data processing system. 2. The user interface device of claim 1, wherein the pointing device is a mouse and the user interface device is a mouse pad. 3. The user interface device of claim 1, wherein information is transmitted to the data processing system by wireless means. 4. The user interface device of claim 1, wherein the surface comprises a plurality of surface regions and the information corresponding to said physical contact and transmitted to the data processing system comprises information to identify the specific surface region receiving the physical contact. 5. The user interface device of claim 4, wherein each of the plurality of surface regions corresponds to a command issued to a software application running on the data processing system. 6. The user interface device of claim 4, wherein each surface region in the plurality of surface regions is associated with a predefined permanent sensor region. 7. 
The user interface device of claim 4, wherein each surface region in the plurality of surface regions is associated with one of a plurality of sensor regions that have been dynamically defined based on a software application in association with which the user interface device is to be used. 8. A mouse pad for a data processing system, comprising a sensor having a surface configured to contactingly receive a mouse and further configured to detect physical contact by a living human and to differentiate between said physical contact and movement of the mouse when the mouse is engaged with the surface, wherein the mouse pad is operable to transmit information corresponding to said physical contact to the data processing system. 9. The mouse pad of claim 8, wherein information is transmitted to the data processing system by wireless means. 10. The mouse pad of claim 8, wherein the surface comprises a plurality of surface regions and the information corresponding to said physical contact and transmitted to the data processing system comprises information to identify the specific surface region receiving the physical contact. 11. The mouse pad of claim 10, wherein each of the plurality of surface regions corresponds to a command issued to a software application running on the data processing system. 12. The mouse pad of claim 10, wherein each surface region in the plurality of surface regions is associated with a predefined permanent sensor region. 13. The mouse pad of claim 10, wherein each surface region in the plurality of surface regions is associated with one of a plurality of sensor regions that have been dynamically defined based on a software application in association with which the user interface device is to be used.
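Claims 4-5 and 10-11 describe the pad identifying which surface region a touch landed in and mapping that region to an application command. A minimal sketch of that lookup, assuming hypothetical region bounds and command names (none of these values come from the patent):

```python
# Hypothetical sketch of the region-to-command mapping (claims 4-5, 10-11):
# a touch is resolved to the surface region containing it, and each region
# corresponds to a command for the running application. Bounds illustrative.

REGIONS = {
    "copy":  (0, 0, 50, 50),       # (x0, y0, x1, y1) in pad coordinates
    "paste": (50, 0, 100, 50),
}

def region_for_touch(x, y):
    """Identify the specific surface region receiving the physical contact."""
    for command, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return command
    return None

def on_touch(x, y):
    # transmit information corresponding to the contact to the host system
    command = region_for_touch(x, y)
    return f"issue:{command}" if command is not None else "ignore"
```

Claim 7's dynamically defined regions would correspond to rebuilding `REGIONS` when the active application changes, rather than keeping it fixed as in claim 6.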
2,600
10,913
10,913
16,190,704
2,651
The technology disclosed herein enables an improved communication dialer system that uses a calling-party identifier having previous association with a called party to request a communication session with that called party. In a particular embodiment, a method provides identifying a called party for a first communication session and determining a first calling-party identifier from a plurality of calling-party identifiers available to the communication dialer for establishing the first communication session with the called party. The first calling-party identifier was previously used in association with a communication session with the called party. The method further provides directing the communication dialer to request the first communication session with the called party using the first calling-party identifier.
1. A method for improving calling-party identifier selection by a communication dialer of a contact center, the method comprising: identifying a called party for a first communication session; determining a first calling-party identifier from a plurality of calling-party identifiers available to the communication dialer for establishing the first communication session with the called party, wherein the first calling-party identifier was previously used in association with a communication session between the contact center and the called party; and directing the communication dialer to request the first communication session with the called party using the first calling-party identifier. 2. The method of claim 1, wherein determining the first calling-party identifier comprises: referencing a record for the called party that provides a mapping to the called party of one or more calling-party identifiers of the plurality of calling-party identifiers that were each previously used in association with a successful communication session with the called party; and selecting the first calling-party identifier from the one or more calling-party identifiers. 3. The method of claim 2, further comprising: prior to the first communication session, determining that a second communication session with the called party using the first calling-party identifier is successful; and in response to determining that the second communication session was successful, adding the first calling-party identifier to the one or more calling-party identifiers. 4. The method of claim 2, further comprising: after determining that the request was unsuccessful, referencing the record for a number of instances where requests for a communication session with the called party using the first calling-party identifier were also unsuccessful; and upon determining that the number of instances is above a threshold number, removing the first calling-party identifier from the mapping. 5. 
The method of claim 1, wherein the first calling-party identifier comprises one or more of: the one of the plurality of calling-party identifiers that, when used to request a communication session with the called party, resulted in the highest number of established communication sessions; the one of the plurality of calling-party identifiers that, when used to request a communication session with the called party, resulted in a longest duration communication session with the called party; and the one of the plurality of calling-party identifiers that most recently resulted in an established communication session when used to request a communication session with the called party. 6. The method of claim 1, further comprising: after the communication dialer receives a rejection from the called party in response to requesting the first communication session, selecting a second calling-party identifier from the plurality of calling-party identifiers that was previously used in association with a successful communication session with the called party; and directing the communication dialer to request the first communication session with the called party using the second calling-party identifier. 7. The method of claim 1, further comprising: after the communication dialer receives a rejection from the called party in response to requesting the first communication session, determining that no other calling-party identifier of the plurality of calling-party identifiers was previously used in association with a successful communication session with the called party and, responsively, randomly selecting a second calling-party identifier from the plurality of calling-party identifiers; and directing the communication dialer to request the first communication session with the called party using the second calling-party identifier. 8. 
The method of claim 1, further comprising: after directing the communication dialer to request the first communication session with the called party using the first calling-party identifier, determining that the first calling-party identifier is not available for use with the first communication session; and after determining that the first calling-party identifier is not available, waiting a period of time for the first calling-party identifier to become available; if the first calling-party identifier becomes available within the period of time, requesting the first communication session with the called party using the first calling-party identifier; and if the first calling-party identifier does not become available within the period of time, determining a second calling-party identifier from the plurality of calling-party identifiers and directing the communication dialer to request the first communication session with the called party using the second calling-party identifier. 9. The method of claim 1, wherein each of the plurality of calling-party identifiers comprises a telephone number and the first calling-party identifier comprises one of the calling-party identifiers that was used by the called party to contact the contact center in a previous communication session. 10. The method of claim 1, wherein determining the first calling-party identifier comprises: receiving input, from an agent of the contact center, comprising an instruction for the communication dialer to use the first calling-party identifier for communication sessions with the called party. 11. 
An apparatus for improving calling-party identifier selection by a communication dialer of a contact center, the apparatus comprising: one or more computer readable storage media; a processing system operatively coupled with the one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when read and executed by the processing system, direct the processing system to: identify a called party for a first communication session; determine a first calling-party identifier from a plurality of calling-party identifiers available to the communication dialer for establishing the first communication session with the called party, wherein the first calling-party identifier was previously used in association with a communication session between the contact center and the called party; and direct the communication dialer to request the first communication session with the called party using the first calling-party identifier. 12. The apparatus of claim 11, wherein to determine the first calling-party identifier, the program instructions direct the processing system to: reference a record for the called party that provides a mapping to the called party of one or more calling-party identifiers of the plurality of calling-party identifiers that were each previously used in association with a successful communication session with the called party; and select the first calling-party identifier from the one or more calling-party identifiers. 13. The apparatus of claim 12, wherein the program instructions further direct the processing system to: prior to the first communication session, determine that a second communication session with the called party using the first calling-party identifier is successful; and in response to determining that the second communication session was successful, add the first calling-party identifier to the one or more calling-party identifiers. 14. 
The apparatus of claim 12, wherein the program instructions further direct the processing system to: after determining that the request was unsuccessful, reference the record for a number of instances where requests for a communication session with the called party using the first calling-party identifier were also unsuccessful; and upon determining that the number of instances is above a threshold number, remove the first calling-party identifier from the mapping. 15. The apparatus of claim 11, wherein the first calling-party identifier comprises one or more of: the one of the plurality of calling-party identifiers that, when used to request a communication session with the called party, resulted in the highest number of established communication sessions; the one of the plurality of calling-party identifiers that, when used to request a communication session with the called party, resulted in a longest duration communication session with the called party; and the one of the plurality of calling-party identifiers that most recently resulted in an established communication session when used to request a communication session with the called party. 16. The apparatus of claim 11, wherein the program instructions further direct the processing system to: after the communication dialer receives a rejection from the called party in response to requesting the first communication session, select a second calling-party identifier from the plurality of calling-party identifiers that was previously used in association with a successful communication session with the called party; and direct the communication dialer to request the first communication session with the called party using the second calling-party identifier. 17. 
The apparatus of claim 11, wherein the program instructions further direct the processing system to: after the communication dialer receives a rejection from the called party in response to requesting the first communication session, determine that no other calling-party identifier of the plurality of calling-party identifiers was previously used in association with a successful communication session with the called party and, responsively, randomly select a second calling-party identifier from the plurality of calling-party identifiers; and direct the communication dialer to request the first communication session with the called party using the second calling-party identifier. 18. The apparatus of claim 11, wherein the program instructions further direct the processing system to: after directing the communication dialer to request the first communication session with the called party using the first calling-party identifier, determine that the first calling-party identifier is not available for use with the first communication session; and after determining that the first calling-party identifier is not available, wait a period of time for the first calling-party identifier to become available; if the first calling-party identifier becomes available within the period of time, request the first communication session with the called party using the first calling-party identifier; and if the first calling-party identifier does not become available within the period of time, determine a second calling-party identifier from the plurality of calling-party identifiers and direct the communication dialer to request the first communication session with the called party using the second calling-party identifier. 19. 
The apparatus of claim 11, wherein each of the plurality of calling-party identifiers comprises a telephone number and the first calling-party identifier comprises one of the calling-party identifiers that was used by the called party to contact the contact center in a previous communication session. 20. One or more computer readable storage media having program instructions stored thereon for improving calling-party identifier selection by a communication dialer of a contact center, the program instructions, when read and executed by a processing system, direct the processing system to: identify a called party for a first communication session; determine a first calling-party identifier from a plurality of calling-party identifiers available to the communication dialer for establishing the first communication session with the called party, wherein the first calling-party identifier was previously used in association with a communication session between the contact center and the called party; and direct the communication dialer to request the first communication session with the called party using the first calling-party identifier.
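Claims 1, 2, and 7 together describe the dialer's selection logic: prefer an identifier that previously succeeded with this called party, and fall back to a random choice from the pool when no such identifier exists. A minimal sketch under those assumptions (the record structure and phone numbers are illustrative, not from the patent):

```python
import random

# Hypothetical sketch of calling-party identifier selection (claims 1, 2, 7):
# select an identifier previously used in a successful session with the
# called party; if none exists, pick one at random from the available pool.

POOL = ["+1-555-0100", "+1-555-0101", "+1-555-0102"]

# record mapping each called party to identifiers that previously succeeded
SUCCESS_RECORD = {"+1-555-0199": ["+1-555-0101"]}

def select_calling_party_id(called_party, rng=random):
    # reference the record for this called party (claim 2)
    previously_successful = [
        cid for cid in SUCCESS_RECORD.get(called_party, []) if cid in POOL
    ]
    if previously_successful:
        return previously_successful[0]
    # no prior success with this called party: random fallback (claim 7)
    return rng.choice(POOL)
```

Claims 3-4 would then update `SUCCESS_RECORD`: appending an identifier after a successful session, and removing it after repeated failures above a threshold.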
The technology disclosed herein enables an improved communication dialer system that uses a calling-party identifier having previous association with a called party to request a communication session with that called party. In a particular embodiment, a method provides identifying a called party for a first communication session and determining a first calling-party identifier from a plurality of calling-party identifiers available to the communication dialer for establishing the first communication session with the called party. The first calling-party identifier was previously used in association with a communication session with the called party. The method further provides directing the communication dialer to request the first communication session with the called party using the first calling-party identifier.1. A method for improving calling-party identifier selection by a communication dialer of a contact center, the method comprising: identifying a called party for a first communication session; determining a first calling-party identifier from a plurality of calling-party identifiers available to the communication dialer for establishing the first communication session with the called party, wherein the first calling-party identifier was previously used in association with a communication session between the contact center and the called party; and directing the communication dialer to request the first communication session with the called party using the first calling-party identifier. 2. The method of claim 1, wherein determining the first calling-party identifier comprises: referencing a record for the called party that provides a mapping to the called party of one or more calling-party identifiers of the plurality of calling-party identifiers that were each previously used in association with a successful communication session with the called party; and selecting the first calling-party identifier from the one or more calling-party identifiers. 
3. The method of claim 2, further comprising: prior to the first communication session, determining that a second communication session with the called party using the first calling-party identifier is successful; and in response to determining that the second communication session was successful, adding the first calling-party identifier to the one or more calling-party identifiers. 4. The method of claim 2, further comprising: after determining that the request was unsuccessful, referencing the record for a number of instances where requests for a communication session with the called party using the first calling-party identifier were also unsuccessful; and upon determining that the number of instances is above a threshold number, removing the first calling-party identifier from the mapping. 5. The method of claim 1, wherein the first calling-party identifier comprises one or more of: the one of the plurality of calling-party identifiers that, when used to request a communication session with the called party, resulted in the highest number of established communication sessions; the one of the plurality of calling-party identifiers that, when used to request a communication session with the called party, resulted in a longest duration communication session with the called party; and the one of the plurality of calling-party identifiers that most recently resulted in an established communication session when used to request a communication session with the called party. 6. 
The method of claim 1, further comprising: after the communication dialer receives a rejection from the called party in response to requesting the first communication session, selecting a second calling-party identifier from the plurality of calling-party identifiers that was previously used in association with a successful communication session with the called party; and directing the communication dialer to request the first communication session with the called party using the second calling-party identifier. 7. The method of claim 1, further comprising: after the communication dialer receives a rejection from the called party in response to requesting the first communication session, determining that no other calling-party identifier of the plurality of calling-party identifiers was previously used in association with a successful communication session with the called party and, responsively, randomly selecting a second calling-party identifier from the plurality of calling-party identifiers; and directing the communication dialer to request the first communication session with the called party using the second calling-party identifier. 8. 
The method of claim 1, further comprising: after directing the communication dialer to request the first communication session with the called party using the first calling-party identifier, determining that the first calling-party identifier is not available for use with the first communication session; and after determining that the first calling-party identifier is not available, waiting a period of time for the first calling-party identifier to become available; if the first calling-party identifier becomes available within the period of time, requesting the first communication session with the called party using the first calling-party identifier; and if the first calling-party identifier does not become available within the period of time, determining a second calling-party identifier from the plurality of calling-party identifiers and directing the communication dialer to request the first communication session with the called party using the second calling-party identifier. 9. The method of claim 1, wherein each of the plurality of calling-party identifiers comprises a telephone number and the first calling-party identifier comprises one of the calling-party identifiers that was used by the called party to contact the contact center in a previous communication session. 10. The method of claim 1, wherein determining the first calling-party identifier comprises: receiving input, from an agent of the contact center, comprising an instruction for the communication dialer to use the first calling-party identifier for communication sessions with the called party. 11. 
An apparatus for improving calling-party identifier selection by a communication dialer of a contact center, the apparatus comprising: one or more computer readable storage media; a processing system operatively coupled with the one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when read and executed by the processing system, direct the processing system to: identify a called party for a first communication session; determine a first calling-party identifier from a plurality of calling-party identifiers available to the communication dialer for establishing the first communication session with the called party, wherein the first calling-party identifier was previously used in association with a communication session between the contact center and the called party; and direct the communication dialer to request the first communication session with the called party using the first calling-party identifier. 12. The apparatus of claim 11, wherein to determine the first calling-party identifier, the program instructions direct the processing system to: reference a record for the called party that provides a mapping to the called party of one or more calling-party identifiers of the plurality of calling-party identifiers that were each previously used in association with a successful communication session with the called party; and select the first calling-party identifier from the one or more calling-party identifiers. 13. The apparatus of claim 12, wherein the program instructions further direct the processing system to: prior to the first communication session, determine that a second communication session with the called party using the first calling-party identifier is successful; and in response to determining that the second communication session was successful, add the first calling-party identifier to the one or more calling-party identifiers. 14. 
The apparatus of claim 12, wherein the program instructions further direct the processing system to: after determining that the request was unsuccessful, reference the record for a number of instances where requests for a communication session with the called party using the first calling-party identifier were also unsuccessful; and upon determining that the number of instances is above a threshold number, remove the first calling-party identifier from the mapping. 15. The apparatus of claim 11, wherein the first calling-party identifier comprises one or more of: the one of the plurality of calling-party identifiers that, when used to request a communication session with the called party, resulted in the highest number of established communication sessions; the one of the plurality of calling-party identifiers that, when used to request a communication session with the called party, resulted in a longest duration communication session with the called party; and the one of the plurality of calling-party identifiers that most recently resulted in an established communication session when used to request a communication session with the called party. 16. The apparatus of claim 11, wherein the program instructions further direct the processing system to: after the communication dialer receives a rejection from the called party in response to requesting the first communication session, select a second calling-party identifier from the plurality of calling-party identifiers that was previously used in association with a successful communication session with the called party; and direct the communication dialer to request the first communication session with the called party using the second calling-party identifier. 17. 
The apparatus of claim 11, wherein the program instructions further direct the processing system to: after the communication dialer receives a rejection from the called party in response to requesting the first communication session, determine that no other calling-party identifier of the plurality of calling-party identifiers was previously used in association with a successful communication session with the called party and, responsively, randomly select a second calling-party identifier from the plurality of calling-party identifiers; and direct the communication dialer to request the first communication session with the called party using the second calling-party identifier. 18. The apparatus of claim 11, wherein the program instructions further direct the processing system to: after directing the communication dialer to request the first communication session with the called party using the first calling-party identifier, determine that the first calling-party identifier is not available for use with the first communication session; and after determining that the first calling-party identifier is not available, wait a period of time for the first calling-party identifier to become available; if the first calling-party identifier becomes available within the period of time, request the first communication session with the called party using the first calling-party identifier; and if the first calling-party identifier does not become available within the period of time, determine a second calling-party identifier from the plurality of calling-party identifiers and direct the communication dialer to request the first communication session with the called party using the second calling-party identifier. 19. 
The apparatus of claim 11, wherein each of the plurality of calling-party identifiers comprises a telephone number and the first calling-party identifier comprises one of the calling-party identifiers that was used by the called party to contact the contact center in a previous communication session. 20. One or more computer readable storage media having program instructions stored thereon for improving calling-party identifier selection by a communication dialer of a contact center, the program instructions, when read and executed by a processing system, direct the processing system to: identify a called party for a first communication session; determine a first calling-party identifier from a plurality of calling-party identifiers available to the communication dialer for establishing the first communication session with the called party, wherein the first calling-party identifier was previously used in association with a communication session between the contact center and the called party; and direct the communication dialer to request the first communication session with the called party using the first calling-party identifier.
2,600
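The identifier-selection and fallback behavior recited in claims 1-7 of the dialer record above (prefer an identifier previously used successfully with this called party, add identifiers to the mapping on success, drop them after repeated failures, and fall back to a random identifier when no successful one remains) can be illustrated with a short sketch. This is hypothetical Python, not part of the patent; the class, method names, and threshold value are invented for illustration.

```python
import random
from collections import defaultdict

class DialerIdSelector:
    """Illustrative sketch of the claimed calling-party identifier selection."""

    def __init__(self, available_ids, failure_threshold=3):
        self.available_ids = list(available_ids)   # all IDs the dialer may use
        self.failure_threshold = failure_threshold
        self.success_map = defaultdict(list)       # called party -> previously-successful IDs
        self.failures = defaultdict(int)           # (party, id) -> failure count

    def record_success(self, party, caller_id):
        # Claim 3: add an identifier to the mapping after a successful session.
        if caller_id not in self.success_map[party]:
            self.success_map[party].append(caller_id)
        self.failures[(party, caller_id)] = 0

    def record_failure(self, party, caller_id):
        # Claim 4: remove an identifier from the mapping once the number of
        # unsuccessful requests rises above a threshold.
        self.failures[(party, caller_id)] += 1
        if self.failures[(party, caller_id)] > self.failure_threshold:
            if caller_id in self.success_map[party]:
                self.success_map[party].remove(caller_id)

    def select(self, party, exclude=()):
        # Claims 1-2: prefer an identifier previously used in a successful
        # session with this called party.
        candidates = [i for i in self.success_map[party] if i not in exclude]
        if candidates:
            return candidates[0]
        # Claim 7: no previously-successful identifier remains, so select
        # randomly from the plurality of available identifiers.
        remaining = [i for i in self.available_ids if i not in exclude]
        return random.choice(remaining) if remaining else None
```

On a rejection (claims 6-7), a caller would pass the rejected identifier via `exclude` and retry with the newly selected one.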
10,914
10,914
15,257,672
2,625
In response to a user selecting a key on a keyboard in a first manner, a first alphanumeric character is displayed on a display device. In response to the user selecting the key on the keyboard in a second manner, a virtual key of a diacritic is displayed on the display device. In response to the user selecting the virtual key of the diacritic on the display device, the diacritic is displayed at a location of a second alphanumeric character on the display device.
1. A method performed by at least one device for operating a keyboard, the method comprising: in response to a user selecting a first key on the keyboard in a first manner, displaying a first alphanumeric character on a display device; in response to the user selecting a second key on the keyboard in the first manner, displaying a second alphanumeric character at a location on the display device, wherein the second alphanumeric character is a first letter; in response to the user selecting the first key on the keyboard in a second manner, displaying a virtual key of a first diacritic on the display device; in response to the user selecting the virtual key of the first diacritic on the display device, displaying the first diacritic at the location of the second alphanumeric character on the display device; in response to the user selecting the second key on the keyboard in the second manner, displaying at least one virtual key of at least one second letter on the display device, wherein the at least one second letter includes a second diacritic and is related to the first letter; and in response to the user selecting the at least one virtual key of the at least one second letter on the display device, displaying the at least one second letter on the display device. 2. The method of claim 1, wherein the keyboard is a virtual keyboard. 3. The method of claim 1, wherein the first manner is a single tap. 4. The method of claim 1, wherein the second manner is a press-and-hold. 5. The method of claim 1, wherein the first alphanumeric character is a numeric character. 6. The method of claim 1, wherein displaying the first diacritic at the location of the second alphanumeric character includes: in response to the user selecting the virtual key of the first diacritic on the display device in the first manner, displaying the first diacritic at the location of the second alphanumeric character on the display device. 7. 
The method of claim 1, wherein the location of the second alphanumeric character on the display device is indicated by a cursor that is positioned in response to a command from the user. 8. The method of claim 1, wherein the letters are non-Latin letters. 9. The method of claim 8, wherein the letters are Arabic letters. 10. A system for operating a keyboard, the system comprising: a combination of electronic circuitry components for: in response to a user selecting a first key on the keyboard in a first manner, displaying a first alphanumeric character on a display device; in response to the user selecting a second key on the keyboard in the first manner, displaying a second alphanumeric character at a location on the display device, wherein the second alphanumeric character is a first letter; in response to the user selecting the first key on the keyboard in a second manner, displaying a virtual key of a first diacritic on the display device; in response to the user selecting the virtual key of the first diacritic on the display device, displaying the first diacritic at the location of the second alphanumeric character on the display device; in response to the user selecting the second key on the keyboard in the second manner, displaying at least one virtual key of at least one second letter on the display device, wherein the at least one second letter includes a second diacritic and is related to the first letter; and, in response to the user selecting the at least one virtual key of the at least one second letter on the display device, displaying the at least one second letter on the display device. 11. The system of claim 10, wherein the keyboard is a virtual keyboard. 12. The system of claim 10, wherein the first manner is a single tap. 13. The system of claim 10, wherein the second manner is a press-and-hold. 14. The system of claim 10, wherein the first alphanumeric character is a numeric character. 15. 
The system of claim 10, wherein displaying the first diacritic at the location of the second alphanumeric character includes: in response to the user selecting the virtual key of the first diacritic on the display device in the first manner, displaying the first diacritic at the location of the second alphanumeric character on the display device. 16. The system of claim 10, wherein the location of the second alphanumeric character on the display device is indicated by a cursor that is positioned in response to a command from the user. 17. The system of claim 10, wherein the letters are non-Latin letters. 18. The system of claim 17, wherein the letters are Arabic letters. 19. A non-transitory computer-readable medium storing instructions that are processable by an instruction execution apparatus for causing the apparatus to perform a method comprising: in response to a user selecting a first key on a keyboard in a first manner, displaying a first alphanumeric character on a display device; in response to the user selecting a second key on the keyboard in the first manner, displaying a second alphanumeric character at a location on the display device, wherein the second alphanumeric character is a first letter; in response to the user selecting the first key on the keyboard in a second manner, displaying a virtual key of a first diacritic on the display device; in response to the user selecting the virtual key of the first diacritic on the display device, displaying the first diacritic at the location of the second alphanumeric character on the display device; in response to the user selecting the second key on the keyboard in the second manner, displaying at least one virtual key of at least one second letter on the display device, wherein the at least one second letter includes a second diacritic and is related to the first letter; and, in response to the user selecting the at least one virtual key of the at least one second letter on the display device, displaying 
the at least one second letter on the display device. 20. The computer-readable medium of claim 19, wherein the keyboard is a virtual keyboard. 21. The computer-readable medium of claim 19, wherein the first manner is a single tap. 22. The computer-readable medium of claim 19, wherein the second manner is a press-and-hold. 23. The computer-readable medium of claim 19, wherein the first alphanumeric character is a numeric character. 24. The computer-readable medium of claim 19, wherein displaying the first diacritic at the location of the second alphanumeric character includes: in response to the user selecting the virtual key of the first diacritic on the display device in the first manner, displaying the first diacritic at the location of the second alphanumeric character on the display device. 25. The computer-readable medium of claim 19, wherein the location of the second alphanumeric character on the display device is indicated by a cursor that is positioned in response to a command from the user. 26. The computer-readable medium of claim 19, wherein the letters are non-Latin letters. 27. The computer-readable medium of claim 26, wherein the letters are Arabic letters.
In response to a user selecting a key on a keyboard in a first manner, a first alphanumeric character is displayed on a display device. In response to the user selecting the key on the keyboard in a second manner, a virtual key of a diacritic is displayed on the display device. In response to the user selecting the virtual key of the diacritic on the display device, the diacritic is displayed at a location of a second alphanumeric character on the display device.1. A method performed by at least one device for operating a keyboard, the method comprising: in response to a user selecting a first key on the keyboard in a first manner, displaying a first alphanumeric character on a display device; in response to the user selecting a second key on the keyboard in the first manner, displaying a second alphanumeric character at a location on the display device, wherein the second alphanumeric character is a first letter; in response to the user selecting the first key on the keyboard in a second manner, displaying a virtual key of a first diacritic on the display device; in response to the user selecting the virtual key of the first diacritic on the display device, displaying the first diacritic at the location of the second alphanumeric character on the display device; in response to the user selecting the second key on the keyboard in the second manner, displaying at least one virtual key of at least one second letter on the display device, wherein the at least one second letter includes a second diacritic and is related to the first letter; and in response to the user selecting the at least one virtual key of the at least one second letter on the display device, displaying the at least one second letter on the display device. 2. The method of claim 1, wherein the keyboard is a virtual keyboard. 3. The method of claim 1, wherein the first manner is a single tap. 4. The method of claim 1, wherein the second manner is a press-and-hold. 5. 
The method of claim 1, wherein the first alphanumeric character is a numeric character. 6. The method of claim 1, wherein displaying the first diacritic at the location of the second alphanumeric character includes: in response to the user selecting the virtual key of the first diacritic on the display device in the first manner, displaying the first diacritic at the location of the second alphanumeric character on the display device. 7. The method of claim 1, wherein the location of the second alphanumeric character on the display device is indicated by a cursor that is positioned in response to a command from the user. 8. The method of claim 1, wherein the letters are non-Latin letters. 9. The method of claim 8, wherein the letters are Arabic letters. 10. A system for operating a keyboard, the system comprising: a combination of electronic circuitry components for: in response to a user selecting a first key on the keyboard in a first manner, displaying a first alphanumeric character on a display device; in response to the user selecting a second key on the keyboard in the first manner, displaying a second alphanumeric character at a location on the display device, wherein the second alphanumeric character is a first letter; in response to the user selecting the first key on the keyboard in a second manner, displaying a virtual key of a first diacritic on the display device; in response to the user selecting the virtual key of the first diacritic on the display device, displaying the first diacritic at the location of the second alphanumeric character on the display device; in response to the user selecting the second key on the keyboard in the second manner, displaying at least one virtual key of at least one second letter on the display device, wherein the at least one second letter includes a second diacritic and is related to the first letter; and, in response to the user selecting the at least one virtual key of the at least one second letter on the display 
device, displaying the at least one second letter on the display device. 11. The system of claim 10, wherein the keyboard is a virtual keyboard. 12. The system of claim 10, wherein the first manner is a single tap. 13. The system of claim 10, wherein the second manner is a press-and-hold. 14. The system of claim 10, wherein the first alphanumeric character is a numeric character. 15. The system of claim 10, wherein displaying the first diacritic at the location of the second alphanumeric character includes: in response to the user selecting the virtual key of the first diacritic on the display device in the first manner, displaying the first diacritic at the location of the second alphanumeric character on the display device. 16. The system of claim 10, wherein the location of the second alphanumeric character on the display device is indicated by a cursor that is positioned in response to a command from the user. 17. The system of claim 10, wherein the letters are non-Latin letters. 18. The system of claim 17, wherein the letters are Arabic letters. 19. 
A non-transitory computer-readable medium storing instructions that are processable by an instruction execution apparatus for causing the apparatus to perform a method comprising: in response to a user selecting a first key on a keyboard in a first manner, displaying a first alphanumeric character on a display device; in response to the user selecting a second key on the keyboard in the first manner, displaying a second alphanumeric character at a location on the display device, wherein the second alphanumeric character is a first letter; in response to the user selecting the first key on the keyboard in a second manner, displaying a virtual key of a first diacritic on the display device; in response to the user selecting the virtual key of the first diacritic on the display device, displaying the first diacritic at the location of the second alphanumeric character on the display device; in response to the user selecting the second key on the keyboard in the second manner, displaying at least one virtual key of at least one second letter on the display device, wherein the at least one second letter includes a second diacritic and is related to the first letter; and, in response to the user selecting the at least one virtual key of the at least one second letter on the display device, displaying the at least one second letter on the display device. 20. The computer-readable medium of claim 19, wherein the keyboard is a virtual keyboard. 21. The computer-readable medium of claim 19, wherein the first manner is a single tap. 22. The computer-readable medium of claim 19, wherein the second manner is a press-and-hold. 23. The computer-readable medium of claim 19, wherein the first alphanumeric character is a numeric character. 24. 
The computer-readable medium of claim 19, wherein displaying the first diacritic at the location of the second alphanumeric character includes: in response to the user selecting the virtual key of the first diacritic on the display device in the first manner, displaying the first diacritic at the location of the second alphanumeric character on the display device. 25. The computer-readable medium of claim 19, wherein the location of the second alphanumeric character on the display device is indicated by a cursor that is positioned in response to a command from the user. 26. The computer-readable medium of claim 19, wherein the letters are non-Latin letters. 27. The computer-readable medium of claim 26, wherein the letters are Arabic letters.
2,600
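The tap versus press-and-hold behavior recited in the keyboard claims above (a single tap in the "first manner" inserts the base character; press-and-hold in the "second manner" displays virtual keys for diacritics or related letters) can be sketched as follows. This is hypothetical Python; the keymap layout, function name, and return convention are assumptions, not from the patent.

```python
def handle_key(key, manner, keymap):
    """Return the display action for a key press: insert the base
    character on a tap, or pop up virtual alternate keys on a hold."""
    entry = keymap[key]
    if manner == "tap":    # first manner: single tap (claim 3)
        return ("insert", entry["base"])
    if manner == "hold":   # second manner: press-and-hold (claim 4)
        return ("popup", entry["alternates"])
    raise ValueError(f"unknown manner: {manner}")

# Hypothetical keymap: tapping "A" types "a"; holding it offers
# accented (diacritic-bearing) forms as virtual keys.
keymap = {"A": {"base": "a", "alternates": ["á", "à", "â"]}}
```

Selecting one of the popped-up virtual keys would then place the diacritic at the cursor location, as in claims 1 and 7.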
10,915
10,915
15,786,903
2,684
Embodiments of the present invention are directed to the field of RFID readers and more specifically to the selective operation of those readers. In an embodiment, the present invention is a system that alternates the operation of an RFID reader between a continuous wave (CW) only state and a read state depending on an absence or a presence of an object of interest within a read-zone of the RFID reader.
1. A system for reading radio frequency (RF) identification (RFID) tags via an interrogation signal, the system comprising: a plurality of RFID readers spaced apart within a venue, each of the RFID readers being operably switchable between a read state and a continuous wave (CW) only state, each of the RFID readers having a respective read-zone; at least one detector operable to detect a presence of at least one object of interest within the respective read-zone of at least one of the RFID readers; and a controller communicatively connected to the plurality of RFID readers and further to the at least one detector, the controller operable to instruct each of the RFID readers to operate in the read state when the at least one object of interest is detected in the respective read-zone, the controller further operable to instruct each of the RFID readers to operate in the CW-only state when the at least one object of interest is not detected in the respective read-zone. 2. The system of claim 1, wherein the at least one detector includes at least one video camera. 3. The system of claim 1, wherein the read state includes a broadcast of a modulated signal. 4. The system of claim 3, wherein the CW-only state excludes the broadcast of the modulated signal. 5. The system of claim 1, wherein the at least one detector is a motion detector. 6. The system of claim 1, wherein the RFID tags are switchable between a first state and a second state, and wherein the RFID tags located within the respective read-zones of each of the plurality of RFID readers operating in the CW state are maintained in their respective state by each of the plurality of RFID readers operating in the CW state, the respective state being one of the first state or the second state. 7. 
A method of operating a radio frequency (RF) identification (RFID) reader, the method comprising: providing, within a venue, an RFID reader having a read-zone, the RFID reader being operably switchable between a read state and a continuous wave (CW) only state; monitoring, via a detector, for a presence of an object of interest within the read-zone of the RFID reader; instructing, by a controller communicatively connected to the RFID reader and further to the detector, the RFID reader to operate in the CW-only state when the object of interest is not detected in the read-zone for a first predetermined amount of time; and instructing, by the controller, the RFID reader to operate in the read state when the object of interest is detected in the read-zone. 8. The method of claim 7, wherein the detector includes a video camera. 9. The method of claim 7, wherein instructing the RFID reader to operate in the read state includes broadcasting a modulated signal. 10. The method of claim 9, wherein instructing the RFID reader to operate in the CW-only state excludes broadcasting the modulated signal. 11. The method of claim 7, further comprising: instructing, by the controller, the RFID reader operating in the CW-only state to operate in the read state. 12. The method of claim 11, wherein a period between the instructing the RFID reader to operate in the CW-only state and instructing the RFID reader operating in the CW-only state to operate in the read state is based at least in part on at least one of a second predetermined amount of time or a predetermined number of RFID reader activations. 13. 
A method of maintaining radio frequency (RF) identification (RFID) tags in a respective state, the RFID tags being switchable between a first state and a second state, and the respective state being one of the first state and the second state, the method comprising: providing a plurality of RFID readers spaced apart within a venue, each of the RFID readers being operably switchable between a read state and a continuous wave (CW) only state, each of the RFID readers having a respective read-zone; operating each of the plurality of RFID readers in a low duty cycle such that for a part of a cycle each of the plurality of RFID readers operates in the CW-only state and for another part of the cycle each of the plurality of RFID readers operates in read state; and instructing at least one of the plurality of RFID readers to operate in a full duty cycle such that the at least one of the plurality of RFID readers is operated in the read state for the entire cycle when a new RFID tag is detected within the respective read-zone of the at least one of the plurality of RFID readers. 14. The method of claim 13, further comprising: instructing the at least one of the plurality of RFID readers to operate in the low duty cycle after no new RFID tags are read for a predetermined amount of time. 15. The method of claim 13, wherein during the low duty cycle, each of the plurality of RFID readers operates in the CW-only state for a greater portion of the cycle than in read state. 16. The method of claim 13, wherein during the low duty cycle, each of the plurality of RFID readers operates in the CW state for approximately 90% of the cycle and in read state for approximately 10% of the cycle. 17. The method of claim 13, wherein the read state includes a broadcast of a modulated signal. 18. The method of claim 17, wherein the CW-only state excludes the broadcast of the modulated signal. 19. 
The method of claim 17, wherein the modulated signal is broadcast at any one of four separate radio frequencies. 20. A method of maintaining radio frequency (RF) identification (RFID) tags in a respective state, the RFID tags being switchable between a first state and a second state, and the respective state being one of the first state and the second state, the method comprising: providing a plurality of RFID readers spaced apart within a venue, each of the RFID readers being operably switchable between a read state and a continuous wave (CW) only state, each of the RFID readers having a respective read-zone; monitoring, via at least one detector, for a presence of at least one object of interest within the respective read-zone of at least one of the RFID readers; instructing, by a controller communicatively connected to the plurality of RFID readers and further to the at least one detector, each of the RFID readers having the at least one object of interest not detected in the respective read-zone for a first predetermined amount of time, to operate in the CW-only state; and instructing, by the controller, each of the RFID readers having the at least one object of interest detected in the respective read-zone, to operate in the read state.
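The duty-cycle scheme in claims 13-16 (readers idling in a low duty cycle of roughly 90% CW-only / 10% read state, jumping to a full duty cycle when a new tag appears, and reverting after a quiet period per claim 14) can be sketched as a small state machine. This is an illustrative sketch only, not the patented implementation; the class name, the revert threshold, and the idea of counting idle cycles are assumptions.

```python
# Hypothetical sketch of the duty-cycle control described in claims 13-16.
# A reader idles in a low duty cycle (CW-only ~90% of each cycle, read state
# ~10%) and switches to a full duty cycle (read state for the whole cycle)
# when a new tag is detected in its read-zone. The revert threshold is an
# assumed stand-in for claim 14's "predetermined amount of time".

LOW_DUTY_READ_FRACTION = 0.10   # read state for ~10% of each low-duty cycle
REVERT_AFTER_IDLE_CYCLES = 5    # revert to low duty after no new tags

class RFIDReader:
    def __init__(self, reader_id):
        self.reader_id = reader_id
        self.full_duty = False
        self.seen_tags = set()
        self.idle_cycles = 0

    def run_cycle(self, tags_in_zone):
        """Run one cycle; return the fraction of it spent in the read state."""
        new_tags = set(tags_in_zone) - self.seen_tags
        self.seen_tags |= new_tags
        if new_tags:
            # A new tag was detected: operate at full duty (read state for
            # the entire cycle) so the new tag is inventoried quickly.
            self.full_duty = True
            self.idle_cycles = 0
        elif self.full_duty:
            self.idle_cycles += 1
            if self.idle_cycles >= REVERT_AFTER_IDLE_CYCLES:
                self.full_duty = False  # back to the low duty cycle
        return 1.0 if self.full_duty else LOW_DUTY_READ_FRACTION

reader = RFIDReader("dock-door-1")
print(reader.run_cycle([]))         # low duty: 0.1
print(reader.run_cycle(["tag-A"]))  # new tag seen: full duty, 1.0
```

Keeping the CW carrier up during the low-duty portion is what maintains already-read tags in their state while still saving the overhead of continuous inventory rounds.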
2,600
10,916
10,916
16,000,741
2,611
The enhancement of user-to-user communication with augmented reality is described. In one example, a method includes receiving virtual object data at a local device from a remote user, generating a virtual object using the received virtual object data, receiving an image at the local device from a remote image store, augmenting the received image at the local device by adding the generated virtual object to the received image, and displaying the augmented received image on the local device.
1-24. (canceled) 25. A method comprising: generating a virtual vehicle and its association with a message at a local device as a virtual object using virtual message object data received at the local device from a remote user, the virtual vehicle for virtually conveying the message from the remote user to the local device; augmenting, at the local device, an image of an area that includes a remote user position with the generated virtual object; displaying the augmented image on the local device with a position of the local device depicted based on a real position of the local device; and conveying, at the local device, the message using the virtual vehicle in response to a user command at the local device, by having the message virtually move across the augmented image from the remote user position toward the local device position.
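The conveying step of claim 25 (the virtual vehicle carrying the message across the augmented image from the remote user position toward the local device position) amounts to animating a virtual object between two image positions. A minimal sketch, assuming 2D image coordinates and linear interpolation (the claim does not specify the motion model; all names here are invented):

```python
# Illustrative sketch of claim 25's final step: on a user command, the
# virtual vehicle moves across the augmented image from the remote user's
# position to the local device's depicted position. Linear interpolation
# is an assumption; the claim only requires the message to virtually move.

def convey_message(remote_pos, local_pos, steps):
    """Yield the virtual vehicle's position at each animation step,
    linearly interpolated from the remote position to the local one."""
    (rx, ry), (lx, ly) = remote_pos, local_pos
    for i in range(steps + 1):
        t = i / steps
        yield (rx + (lx - rx) * t, ry + (ly - ry) * t)

path = list(convey_message(remote_pos=(0.0, 0.0), local_pos=(100.0, 50.0), steps=4))
print(path[0])   # starts at the remote user's position: (0.0, 0.0)
print(path[-1])  # ends at the local device's position: (100.0, 50.0)
```

Each yielded position would be composited into the augmented image for one displayed frame, so the message appears to travel toward the local user.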
2,600
10,917
10,917
15,231,760
2,694
A short-range wireless tag has a conductive footprint which is not rotationally symmetric where this footprint is formed from an antenna within the short-range wireless tag and optionally one or more additional conductive areas within the short-range wireless tag. The orientation of such a short-range wireless tag may be determined by a sensing surface when the tag is placed on the surface and where the surface comprises an array of RF antennas and/or a capacitive sensing electrode array.
1. A short-range wireless tag having a conductive footprint that is not rotationally symmetric. 2. The short-range wireless tag according to claim 1, comprising an antenna that is not rotationally symmetric. 3. The short-range wireless tag according to claim 2, wherein the antenna comprises a first end and a second end wherein the first and second ends have different coupling properties and the tag further comprises an IC connected to the antenna between the first and second ends. 4. The short-range wireless tag according to claim 3, wherein the antenna comprises a third end and wherein the IC is connected to the antenna between the first, second and third ends. 5. The short-range wireless tag according to claim 2, wherein the antenna comprises a first portion having a first length and a second portion having a second length, wherein the first and second portions are not arranged in line with each other and the first and second lengths are different. 6. The short-range wireless tag according to claim 2, wherein the antenna comprises a first portion having a first radius of curvature and a second portion having a second radius of curvature, wherein the first and second radii of curvature are different. 7. The short-range wireless tag according to claim 2, wherein the antenna comprises a coil which is not rotationally symmetric. 8. The short-range wireless tag according to claim 7, wherein the coil additionally lacks mirror symmetry. 9. The short-range wireless tag according to claim 1, comprising an antenna that is rotationally symmetric and one or more conductive regions that alone or in combination with the antenna provide the conductive footprint that is not rotationally symmetric. 10. The short-range wireless tag according to claim 9, wherein the conductive footprint additionally lacks mirror symmetry and the one or more conductive regions alone or in combination with the antenna provide the conductive footprint. 11. 
A method of detecting orientation of a short-range wireless tag using a sensing surface, the short-range wireless tag comprising an antenna that is not rotationally symmetric, the sensing surface comprising an array of RF antennas connected to a sensing module and the method comprising: activating, in the sensing module, an RF antenna from the array of RF antennas and deactivating, in the sensing module, one or more other RF antennas from the array of RF antennas; reading, at the sensing module, any proximate short-range wireless tags using the activated RF antenna; storing, at the sensing module, a signal strength metric associated with each short-range wireless tag read by the activated RF antenna; repeating the activating and deactivating, reading and storing for a different activated RF antenna; and determining an orientation of one of the short-range wireless tags based at least in part on a plurality of signal strength metrics for the same short-range wireless tag when activated by different RF antennas from the array of RF antennas. 12. The method according to claim 11, wherein the signal strength metrics are generated in the sensing module based on measurements of signal strength made in the sensing surface. 13. The method according to claim 11, wherein the signal strength metrics are generated in the proximate short-range wireless tags based on measurements of voltages generated within the short-range wireless tags in response to activation of an RF antenna in the sensing surface. 14. The method according to claim 11, further comprising: receiving, at the sensing surface from a proximate short-range wireless tag, signal strength metrics generated in the short-range wireless tag when activated by different RF antennas from the array of RF antennas. 15. 
The method according to claim 11, wherein the orientation of one of the short-range wireless tags is determined based on an identifier read from the short-range wireless tag and the plurality of signal strength metrics for the same short-range wireless tag when activated by different RF antennas from the array of RF antennas. 16. The method according to claim 11, further comprising: providing the determined orientation of one of the short-range wireless tags as an input to a computer program. 17. A sensing surface comprising: a sensing array; and a sensing module arranged to detect an orientation of a short-range wireless tag having a conductive footprint which is not rotationally symmetric. 18. The sensing surface according to claim 17, wherein the sensing array comprises an array of RF antennas and the sensing module is arranged to: selectively activate an RF antenna from the array of RF antennas and to deactivate one or more other RF antennas from the array of RF antennas; read any proximate short-range wireless tags using the activated RF antenna; store a signal strength metric associated with each short-range wireless tag read by the activated RF antenna; repeat the selective activating and deactivating, reading and storing for a different activated RF antenna; and determine an orientation of one of the short-range wireless tags based at least in part on a plurality of signal strength metrics for the same short-range wireless tag when activated by different RF antennas from the array of RF antennas. 19. The sensing surface according to claim 17, wherein the sensing array comprises a capacitive sensing electrode array and the sensing module is arranged to: detect an area of increased capacitance using the capacitive sensing electrode array; and determine an orientation of a short-range wireless tag based on a shape of the area of increased capacitance. 20. 
The sensing surface according to claim 17, wherein the sensing array comprises a capacitive sensing electrode array and an array of RF antennas and the sensing module comprises a first module coupled to the capacitive sensing electrode array and a second module coupled to the array of RF antennas and wherein the sensing module is arranged to: detect, in the first module, changes in capacitance between electrodes in the capacitive sensing electrode array; in response to detecting, in the first module, an increase in capacitance between the electrodes at a first location, to identify, based on the first location, an RF antenna in the array of RF antennas, to detune, in the second module, one or more adjacent RF antennas in the array of RF antennas and to read, by the second module and via the identified RF antenna, data from any proximate wireless tags; and determine an orientation of a proximate short-range wireless tag based on the detected increase in capacitance and the data read from the proximate short-range wireless tag.
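Claim 11 determines tag orientation from a plurality of per-antenna signal strength metrics but does not specify the estimator. One rough sketch of how such an estimator could work: because the tag antenna is not rotationally symmetric, it couples more strongly to some array antennas than others, so the strength-weighted centroid of the antenna positions is offset from the strongest antenna in the direction the tag "points". The grid layout, weighting scheme, and function names below are all assumptions for illustration.

```python
# Hypothetical orientation estimator for claim 11 (not specified by the
# patent): take the signal strength metric recorded per activated RF
# antenna, compute the strength-weighted centroid of the antenna positions,
# and read the orientation angle from the centroid's offset relative to
# the single strongest-coupling antenna.

import math

def estimate_orientation(metrics):
    """metrics: {(x, y) antenna position: signal strength metric}.
    Returns an orientation angle in degrees in [0, 360)."""
    total = sum(metrics.values())
    cx = sum(x * s for (x, _y), s in metrics.items()) / total
    cy = sum(y * s for (_x, y), s in metrics.items()) / total
    bx, by = max(metrics, key=metrics.get)  # strongest single antenna
    return math.degrees(math.atan2(cy - by, cx - bx)) % 360.0

# A tag lying roughly along +x: readings fall off to the right of the
# strongest antenna at (0, 0), so the estimate is a small positive angle.
readings = {(0, 0): 0.9, (1, 0): 0.6, (2, 0): 0.3, (0, 1): 0.1, (1, 1): 0.1}
print(estimate_orientation(readings))
```

Claim 15 refines this by keying the metrics to an identifier read from the tag, so readings from multiple tags on the surface are not mixed in one estimate.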
2,600
10,918
10,918
15,862,382
2,619
A method and system for culling a patch of surface data from one or more tiles in a tile based computer graphics system. A rendering space is divided into a plurality of tiles and a patch of surface data read. Then, at least a portion of the patch is analysed to determine data representing a bounding depth value evaluated over at least one tile. This may comprise tessellating the patch of surface data to derive a plurality of tessellated primitives and analysing at least some of the tessellated primitives. For each tile within which the patch is located, the data representing the bounding depth value is then used to determine whether the patch is hidden in the tile, and at least a portion of the patch is rendered, if the patch is determined not to be hidden in at least one tile.
1. A method in a tile based graphics system having a rendering space subdivided into a plurality of tiles, comprising: tessellating a patch of surface data; determining that the patch is present in a tile but not needed to render that tile, prior to rendering the tile; and rendering the tile without using the patch. 2. The method according to claim 1, wherein said determination is performed on a non-per-pixel basis. 3. The method according to claim 1, wherein the determining step is performed before it is determined whether to tile the patch. 4. The method according to claim 1, wherein the determining step is performed after the patch is tiled but before hidden surface removal is performed on the tile. 5. The method according to claim 4, wherein the method comprises: storing an indication of the patch in a display list for the tile in which the patch is determined to be present; reading the display list for the tile; and determining that the patch indicated in the read display list is not needed to render the tile. 6. The method according to claim 1, wherein the determining step comprises: performing a first depth comparison test to determine whether the patch is needed to render the tile, before it is determined whether to tile the patch; only if it is not determined from the first depth comparison test that the patch is not needed to render the tile, tiling the patch and performing a second depth comparison test after tiling but before hidden surface removal is performed on the tile. 7. The method according to claim 1, the method further comprising determining the maximum and/or minimum depth value for the patch; and wherein the depth value for the patch is used to determine the patch is not needed to render the tile. 8. The method according to claim 7, wherein the determining step comprises comparing the depth value for the patch with a depth threshold for that tile to determine the patch is not needed to render the tile. 9. 
The method according to claim 1, wherein it is determined the patch is present in more than one tile, the method further comprising: determining a maximum and/or minimum depth value for the patch; for each tile in which the patch is present: comparing the depth value with a depth threshold for that tile to determine whether the patch is needed to render that tile, prior to rendering that tile; and rendering that tile without using the patch if it is determined that the patch is not needed to render that tile. 10. The method according to claim 1, wherein the determining step comprises determining that the tessellated patch is present in the tile but not needed to render the tile. 11. The method according to claim 10, the method further comprising: determining the minimum and/or maximum depth of the tessellated patch; and comparing the depth value of the tessellated patch with a depth threshold for that tile; wherein the method comprises determining that the tessellated patch is present in the tile but not needed to render the tile from the comparison of the depth value with the depth threshold. 12. The method according to claim 1, wherein the method comprises culling the patch that is not needed to render the tile. 13. A graphics system having a rendering space subdivided into a plurality of tiles, the system comprising: tessellation logic configured to tessellate a patch of surface data; processing logic configured to determine that the patch is present in the tile but not needed to render the tile, prior to the tile being rendered; and rendering logic configured to render the tile without using the patch. 14. The graphics system according to claim 13, wherein the processing logic is configured to determine that the patch is present in the tile but not needed to render the tile, prior to the tile being rendered, before it is determined whether to tile the patch. 15. 
The graphics system according to claim 13, further comprising hidden surface removal logic configured to perform hidden surface removal on the tile, wherein the processing logic is configured to determine that the patch is present in the tile but not needed to render the tile, after the patch is tiled but before hidden surface removal is performed on the tile. 16. The graphics system according to claim 13, wherein the processing logic is configured to: perform a first depth comparison test to determine whether the patch is needed to render the tile, before it is determined whether to tile the patch; perform a second depth comparison test after tiling but before hidden surface removal is performed on the tile, only if it is not determined from the first depth comparison test that the patch is not needed to render the tile. 17. The graphics system according to claim 13, further comprising tiling logic configured to store an indication of the patch in a display list for the tile in which the patch is determined to be present; the processing logic being configured to read the display list for the tile, and determine that the patch indicated in the read display list is not needed to render the tile. 18. The graphics system according to claim 13, wherein the processing logic is configured to determine that the tessellated patch is present in the tile but not needed to render the tile. 19. The graphics system according to claim 18, further comprising depth-calculating logic configured to determine the minimum and/or maximum depth of the tessellated patch; wherein the processing logic is configured to compare the depth value of the tessellated patch with a depth threshold for that tile and determine that the tessellated patch is present in the tile but not needed to render the tile from the comparison of the depth value with the depth threshold. 20. 
A non-transitory computer-readable medium having stored thereon computer-readable program code defining an integrated circuit that, when processed in an integrated circuit manufacturing system, causes the integrated circuit manufacturing system to manufacture a graphics system having a rendering space subdivided into a plurality of tiles, the graphics system comprising: tessellation logic configured to tessellate a patch of surface data; processing logic configured to determine that the patch is present in the tile but not needed to render the tile, prior to the tile being rendered; and rendering logic configured to render the tile without using the patch.
A method and system for culling a patch of surface data from one or more tiles in a tile based computer graphics system. A rendering space is divided into a plurality of tiles and a patch of surface data read. Then, at least a portion of the patch is analysed to determine data representing a bounding depth value evaluated over at least one tile. This may comprise tessellating the patch of surface data to derive a plurality of tessellated primitives and analysing at least some of the tessellated primitives. For each tile within which the patch is located, the data representing the bounding depth value is then used to determine whether the patch is hidden in the tile, and at least a portion of the patch is rendered, if the patch is determined not to be hidden in at least one tile.1. A method in a tile based graphics system having a rendering space subdivided into a plurality of tiles, comprising: tessellating a patch of surface data; determining that the patch is present in a tile but not needed to render that tile, prior to rendering the tile; and rendering the tile without using the patch. 2. The method according to claim 1, wherein said determination is performed on a non-per-pixel basis. 3. The method according to claim 1, wherein the determining step is performed before it is determined whether to tile the patch. 4. The method according to claim 1, wherein the determining step is performed after the patch is tiled but before hidden surface removal is performed on the tile. 5. The method according to claim 4, wherein the method comprises: storing an indication of the patch in a display list for the tile in which the patch is determined to be present; reading the display list for the tile; and determining that the patch indicated in the read display list is not needed to render the tile. 6. 
The method according to claim 1, wherein the determining step comprises: performing a first depth comparison test to determine whether the patch is needed to render the tile, before it is determined whether to tile the patch; only if it is not determined from the first depth comparison test that the patch is not needed to render the tile, tiling the patch and performing a second depth comparison test after tiling but before hidden surface removal is performed on the tile. 7. The method according to claim 1, the method further comprising determining the maximum and/or minimum depth value for the patch; and wherein the depth value for the patch is used to determine the patch is not needed to render the tile. 8. The method according to claim 7, wherein the determining step comprises comparing the depth value for the patch with a depth threshold for that tile to determine the patch is not needed to render the tile. 9. The method according to claim 1, wherein it is determined the patch is present in more than one tile, the method further comprising: determining a maximum and/or minimum depth value for the patch; for each tile in which the patch is present: comparing the depth value with a depth threshold for that tile to determine whether the patch is needed to render that tile, prior to rendering that tile; and rendering that tile without using the patch if it is determined that the patch is not needed to render that tile. 10. The method according to claim 1, wherein the determining step comprises determining that the tessellated patch is present in the tile but not needed to render the tile. 11. 
The method according to claim 10, the method further comprising: determining the minimum and/or maximum depth of the tessellated patch; and comparing the depth value of the tessellated patch with a depth threshold for that tile; wherein the method comprises determining that the tessellated patch is present in the tile but not needed to render the tile from the comparison of the depth value with the depth threshold. 12. The method according to claim 1, wherein the method comprises culling the patch that is not needed to render the tile. 13. A graphics system having a rendering space subdivided into a plurality of tiles, the system comprising: tessellation logic configured to tessellate a patch of surface data; processing logic configured to determine that the patch is present in the tile but not needed to render the tile, prior to the tile being rendered; rendering logic configured to render the tile without using the patch. 14. The graphics system according to claim 13, wherein the processing logic is configured to determine that the patch is present in the tile but not needed to render the tile, prior to the tile being rendered before it is determined whether to tile the patch. 15. The graphics system according to claim 13, further comprising hidden surface removal logic configured to perform hidden surface removal on the tile, wherein the processing logic is configured to determine that the patch is present in the tile but not needed to render the tile, after the patch is tiled but before hidden surface removal is performed on the tile. 16. 
The graphics system according to claim 13, wherein the processing logic is configured to: perform a first depth comparison test to determine whether the patch is needed to render the tile, before it is determined whether to tile the patch; perform a second depth comparison test after tiling but before hidden surface removal is performed on the tile, only if it is not determined from the first depth comparison test that the patch is not needed to render the tile. 17. The graphics system according to claim 13, further comprising tiling logic configured to store an indication of the patch in a display list for the tile in which the patch is determined to be present; the processing logic being configured to read the display list for the tile, and determine that the patch indicated in the read display list is not needed to render the tile. 18. The graphics system according to claim 13, wherein the processing logic is configured to determine that the tessellated patch is present in the tile but not needed to render the tile. 19. The graphics system according to claim 18, further comprising depth-calculating logic configured to determine the minimum and/or maximum depth of the tessellated patch; wherein the processing logic is configured to compare the depth value of the tessellated patch with a depth threshold for that tile and determine that the tessellated patch is present in the tile but not needed to render the tile from the comparison of the depth value with the depth threshold. 20. 
A non-transitory computer-readable medium having stored thereon computer-readable program code defining an integrated circuit that, when processed in an integrated circuit manufacturing system, causes the integrated circuit manufacturing system to manufacture a graphics system having a rendering space subdivided into a plurality of tiles, the graphics system comprising: tessellation logic configured to tessellate a patch of surface data; processing logic configured to determine that the patch is present in the tile but not needed to render the tile, prior to the tile being rendered; rendering logic configured to render the tile without using the patch.
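The per-tile culling test in the claims above boils down to comparing a bounding depth value for the tessellated patch against a depth threshold for the tile. The sketch below illustrates that test in Python under assumed conventions (a "less-than" depth test, where the tile's threshold is the farthest depth already guaranteed to be covered); all names are illustrative, and real tile-based GPUs implement this in fixed-function hardware rather than software.

```python
# Illustrative sketch of the bounding-depth patch-culling test from the
# claims. Assumptions: a conventional 'less-than' depth test, and a
# per-tile threshold equal to the farthest depth already fully covered.

def patch_min_depth(tessellated_vertices):
    """Bounding (minimum) depth over the tessellated primitives of a patch."""
    return min(z for (_, _, z) in tessellated_vertices)

def patch_needed_for_tile(tessellated_vertices, tile_depth_threshold):
    """True unless the whole patch lies behind the tile's depth threshold.

    If the patch's nearest point is still farther than the farthest
    visible depth in the tile, it cannot contribute any pixels, so it
    can be culled before the tile is rendered.
    """
    return patch_min_depth(tessellated_vertices) < tile_depth_threshold

def render_tile(display_list, tile_depth_threshold):
    """Keep only the patches that survive the bounding-depth test."""
    return [p for p in display_list
            if patch_needed_for_tile(p, tile_depth_threshold)]

# A patch entirely at depth ~0.9 is culled when the tile is already
# covered out to depth 0.5; a nearer patch survives.
far_patch = [(0.0, 0.0, 0.9), (1.0, 0.0, 0.95), (0.0, 1.0, 0.92)]
near_patch = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.2), (0.0, 1.0, 0.15)]
survivors = render_tile([far_patch, near_patch], 0.5)
```

Performing this test per patch rather than per pixel (claim 2) is what makes it cheap enough to run before hidden surface removal: one min/compare can discard an entire tessellated patch.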
2,600
10,919
10,919
16,416,905
2,611
Embodiments of the invention are directed to methods and systems for using local positioning beacons to create precise augmented reality images. Embodiments of the invention are also directed to providing different types of augmented reality images to different types of devices and/or individuals.
1.-20. (canceled) 21. A method comprising: surveying, with surveying equipment, a real environment including existing physical elements, with one or more targets positioned at one or more control points in the real environment to determine coordinates of the one or more control points in relation to a real-world coordinate system; importing the coordinates of the one or more surveyed control points into a 3D modeling computer; generating, using the 3D modeling computer, a 3D digital model including the existing physical elements in relation to the real-world coordinate system; generating, using the 3D modeling computer, a 3D digital model including a future physical element; incorporating, using the 3D modeling computer, the 3D digital model including the future physical element at a location within or proximate the 3D digital model including the existing physical elements, such that the future physical element is associated with a future physical element location in the real-world coordinate system; storing a data file comprising the 3D digital model including the future physical element and future physical element location data; placing a plurality of beacon devices at a plurality of beacon device locations in the real environment; surveying, with the surveying equipment, the plurality of beacon devices, with a target positioned at each beacon device, to determine coordinates of the plurality of beacon devices in relation to the real-world coordinate system, the coordinates including Northing, Easting, and Elevation coordinates; and storing the coordinates including Northing, Easting, and Elevation coordinates of the plurality of beacon devices in a database, wherein a mobile device position of a mobile device is determined based on communications between the plurality of beacon devices and the mobile device, wherein, when the determined mobile device position is within a predetermined region associated with the future physical element, an augmented reality image 
based on the data file is displayed at the mobile device, the augmented reality image comprising a real view of the real environment seen through a camera of the mobile device in real-time overlaid with the 3D digital model including the future physical element at the future physical element location. 22. The method of claim 21, further comprising: providing the data file and the coordinates including the Northing, Easting, and Elevation coordinates of the plurality of beacon devices to the mobile device. 23. The method of claim 21, wherein the determined mobile device position has at least ⅛th of an inch accuracy, and wherein the future physical element is displayed at the future physical element location in the augmented reality image with at least ⅛th of an inch accuracy. 24. The method of claim 21, further comprising: receiving, at the mobile device, a plurality of signals from the plurality of beacon devices; determining a position of the mobile device in the real-world coordinate system based on the plurality of received signals and the coordinates of the plurality of beacon devices; and providing, on a display screen of the mobile device, the augmented reality image comprising the real view of the real environment seen through the camera of the mobile device in real-time overlaid with the 3D digital model including the future physical element at the future physical element location. 25. 
The method of claim 21, wherein the future physical element is a first future physical element, wherein the mobile device is a first mobile device associated with a first category, wherein the data file is a first data file, and wherein the method further comprises: associating the first future physical element with the first category, wherein the first data file further includes a first category indicator, and wherein the augmented reality image based on the first data file was displayed at the first mobile device because the first mobile device is associated with the first category; generating, using the 3D modeling computer, the 3D digital model including a second future physical element; incorporating, using the 3D modeling computer, the 3D digital model including the second future physical element at a second location within or proximate the 3D digital model including the existing physical elements, such that the second future physical element is associated with a second future physical element location in the real-world coordinate system; associating the second future physical element with a second category; and storing a second data file comprising the 3D digital model including the second future physical element and second future physical element location data, wherein a second mobile device position of a second mobile device is determined based on communications between the plurality of beacon devices and the second mobile device, and wherein, when the determined second mobile device position is within a predetermined region associated with the second future physical element, a second augmented reality image based on the second data file is displayed at the second mobile device, the second augmented reality image comprising a real view of the real environment seen through the camera of the second mobile device in real-time overlaid with the 3D digital model including the second future physical element at the second future physical element location. 26. 
The method of claim 21, comprising: receiving, by the mobile device, a plurality of signals from the plurality of beacon devices; determining a position of the mobile device in the real-world coordinate system based on the plurality of received signals; capturing, using a camera of the mobile device, an image of the real environment including the existing physical elements; and providing, on a display screen of the mobile device, the augmented reality image comprising the real view of the real environment seen through the camera of the mobile device in real-time overlaid with the 3D digital model including the future physical element at the future physical element location in the real-world coordinate system. 27. The method of claim 26, further comprising: receiving the Northing, Easting, and Elevation coordinates associated with the plurality of beacon devices in the real-world coordinate system, the coordinates having been obtained with the surveying equipment, wherein the position of the mobile device is determined further based on the received Northing, Easting, and Elevation coordinates of the plurality of beacon devices. 28. The method of claim 26, further comprising: retrieving, by the mobile device, the data file based on the determined mobile device position, the data file comprising the 3D digital model including the future physical element and future physical element location data. 29. The method of claim 26, wherein the determined mobile device position has at least ⅛th of an inch accuracy, and wherein the future physical element is displayed at the future physical element location in the augmented reality image with at least ⅛th of an inch accuracy. 30. The method of claim 26, wherein the mobile device position is determined using triangulation. 31. 
A method comprising: surveying, using surveying equipment, a real environment including existing physical elements, with one or more targets positioned at one or more control points in the real environment to determine coordinates including Northing, Easting, and Elevation coordinates of the one or more control points in relation to a real-world coordinate system; importing the coordinates of the one or more surveyed control points into a 3D modeling computer; generating, using the 3D modeling computer, a 3D digital model including the existing physical elements in relation to the real-world coordinate system; generating, using the 3D modeling computer, a first 3D digital model including a first future physical element; incorporating, using the 3D modeling computer, the first 3D digital model including the first future physical element at a first location within or proximate the 3D digital model including the existing physical elements, such that the first future physical element is associated with a first future physical element location in the real-world coordinate system; associating the first future physical element with a first category; storing a first data file comprising the first 3D digital model including the first future physical element, first future physical element location data, and a first category indicator; generating, using the 3D modeling computer, a second 3D digital model including a second future physical element; incorporating, using the 3D modeling computer, the second 3D digital model including the second future physical element at a second location within or proximate the 3D digital model including the existing physical elements, such that the second future physical element is associated with a second future physical element location in the real-world coordinate system; associating the second future physical element with a second category; and storing a second data file comprising the second 3D digital model including the second future 
physical element, second future physical element location data, and a second category indicator, wherein a first mobile device position of a first mobile device associated with the first category is determined based on communications between a plurality of beacon devices and the first mobile device, wherein, when the determined first mobile device position is within a predetermined region associated with the first future physical element, an augmented reality image based on the first data file is displayed at the first mobile device, the augmented reality image comprising a real view of the real environment seen through a camera of the first mobile device in real-time overlaid with the first 3D digital model including the first future physical element at the first future physical element location, wherein a second mobile device position of a second mobile device associated with the second category is determined based on communications between the plurality of beacon devices and the second mobile device, and wherein, when the determined second mobile device position is within a predetermined region associated with the second future physical element, an augmented reality image based on the second data file is displayed at the second mobile device, the augmented reality image comprising a real view of the real environment seen through the camera of the second mobile device in real-time overlaid with the second 3D digital model including the second future physical element at the second future physical element location. 32. The method of claim 31, wherein the first category is associated with a first profession, and wherein the second category is associated with a second profession. 33. The method of claim 32, wherein the first profession is one of a plumber, an electrician, a construction worker, a welder, a manager, a maintenance worker, or a painter. 34. 
The method of claim 31, wherein the first category is associated with a first set of mobile devices, and wherein the second category is associated with a second set of mobile devices. 35. The method of claim 34, wherein each mobile device in the first set of mobile devices includes a first type of physical marking, and wherein each mobile device in the second set of mobile devices includes a second type of physical marking. 36. A system comprising: a plurality of beacon devices at a plurality of beacon device locations in a real environment; a data acquisition device comprising surveying equipment configured to survey the real environment including existing physical elements, with one or more targets positioned at one or more control points in the real environment to determine coordinates of the one or more control points in relation to a real-world coordinate system, and survey the plurality of beacon devices, with a target positioned at each beacon device, to determine coordinates of the plurality of beacon devices in relation to the real-world coordinate system, the coordinates being Northing, Easting, and Elevation coordinates; and a 3D modeling computer comprising a processor, and a computer readable medium programmed to cause the processor to import the coordinates of the one or more surveyed control points, generate a 3D digital model including the existing physical elements in relation to the real-world coordinate system, generate a 3D digital model including a future physical element, incorporate the 3D digital model including the future physical element at a location within or proximate the 3D digital model including the existing physical elements, such that the future physical element is associated with a future physical element location in the real-world coordinate system, store a data file comprising the 3D digital model including the future physical element and future physical element location data, and store the coordinates including Northing, 
Easting, and Elevation coordinates of the plurality of beacon devices in a database; and a mobile device, wherein a mobile device position of the mobile device is determined based on communications between the plurality of beacon devices and the mobile device, and wherein when the determined mobile device position is within a predetermined region associated with the future physical element, an augmented reality image based on the data file is displayed at the mobile device, the augmented reality image comprising a real view of the real environment seen through a camera of the mobile device in real-time overlaid with the 3D digital model including the future physical element at the future physical element location. 37. The system of claim 36, wherein the 3D modeling computer comprises a project database comprising different project data, which is respectively obtainable by different mobile devices. 38. The system of claim 36, further comprising the existing physical elements and control points in the real environment that are not specifically associated with the beacon devices. 39. The system of claim 36, wherein the real environment is a construction site.
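The claims above determine the mobile device's position from signals exchanged with surveyed beacons (claim 30 mentions triangulation) and then gate the AR overlay on whether the device falls inside a predetermined region around the future physical element. The sketch below shows the minimal 2D (Northing, Easting) case with three beacons and known ranges; it is a hypothetical illustration, and a real system would use 3D coordinates, more beacons, and a least-squares or filtered solution to reach the claimed accuracy.

```python
# Hypothetical sketch: trilaterating a device position from surveyed
# beacon coordinates and measured ranges, then checking the
# predetermined region that triggers the AR overlay. 2D only; names
# and constants are illustrative.

import math

def trilaterate_2d(beacons, ranges):
    """Solve for (northing, easting) from three beacons and their ranges.

    Subtracting the first range equation from the other two linearizes
    the problem into a 2x2 system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = ranges
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def in_predetermined_region(position, element_location, radius):
    """True when the device is close enough to display the overlay."""
    return math.dist(position, element_location) <= radius

# Beacons at surveyed coordinates; simulate ranges from a true
# position of (3, 4) and recover it.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.dist(true_pos, b) for b in beacons]
pos = trilaterate_2d(beacons, ranges)
```

With noiseless ranges the solution is exact; in practice range noise is why the claims survey the beacons to fixed Northing/Easting/Elevation control points first, so that all positioning error comes from the ranging rather than from the beacon coordinates themselves.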
Embodiments of the invention are directed to methods and systems for using local positioning beacons to create precise augmented reality images. Embodiments of the invention are also directed to providing different types of augmented reality images to different types of devices and/or individuals.1.-20. (canceled) 21. A method comprising: surveying, with surveying equipment, a real environment including existing physical elements, with one or more targets positioned at one or more control points in the real environment to determine coordinates of the one or more control points in relation to a real-world coordinate system; importing the coordinates of the one or more surveyed control points into a 3D modeling computer; generating, using the 3D modeling computer, a 3D digital model including the existing physical elements in relation to the real-world coordinate system; generating, using the 3D modeling computer, a 3D digital model including a future physical element; incorporating, using the 3D modeling computer, the 3D digital model including the future physical element at a location within or proximate the 3D digital model including the existing physical elements, such that the future physical element is associated with a future physical element location in the real-world coordinate system; storing a data file comprising the 3D digital model including the future physical element and future physical element location data; placing a plurality of beacon devices at a plurality of beacon device locations in the real environment; surveying, with the surveying equipment, the plurality of beacon devices, with a target positioned at each beacon device, to determine coordinates of the plurality of beacon devices in relation to the real-world coordinate system, the coordinates including Northing, Easting, and Elevation coordinates; and storing the coordinates including Northing, Easting, and Elevation coordinates of the plurality of beacon devices in a database, wherein a 
mobile device position of a mobile device is determined based on communications between the plurality of beacon devices and the mobile device, wherein, when the determined mobile device position is within a predetermined region associated with the future physical element, an augmented reality image based on the data file is displayed at the mobile device, the augmented reality image comprising a real view of the real environment seen through a camera of the mobile device in real-time overlaid with the 3D digital model including the future physical element at the future physical element location. 22. The method of claim 21, further comprising: providing the data file and the coordinates including the Northing, Easting, and Elevation coordinates of the plurality of beacon devices to the mobile device. 23. The method of claim 21, wherein the determined mobile device position has at least ⅛th of an inch accuracy, and wherein the future physical element is displayed at the future physical element location in the augmented reality image with at least ⅛th of an inch accuracy. 24. The method of claim 21, further comprising: receiving, at the mobile device, a plurality of signals from the plurality of beacon devices; determining a position of the mobile device in the real-world coordinate system based on the plurality of received signals and the coordinates of the plurality of beacon devices; and providing, on a display screen of the mobile device, the augmented reality image comprising the real view of the real environment seen through the camera of the mobile device in real-time overlaid with the 3D digital model including the future physical element at the future physical element location. 25. 
The method of claim 21, wherein the future physical element is a first future physical element, wherein the mobile device is a first mobile device associated with a first category, wherein the data file is a first data file, and wherein the method further comprises: associating the first future physical element with the first category, wherein the first data file further includes a first category indicator, and wherein the augmented reality image based on the first data file was displayed at the first mobile device because the first mobile device is associated with the first category; generating, using the 3D modeling computer, the 3D digital model including a second future physical element; incorporating, using the 3D modeling computer, the 3D digital model including the second future physical element at a second location within or proximate the 3D digital model including the existing physical elements, such that the second future physical element is associated with a second future physical element location in the real-world coordinate system; associating the second future physical element with a second category; and storing a second data file comprising the 3D digital model including the second future physical element and second future physical element location data, wherein a second mobile device position of a second mobile device is determined based on communications between the plurality of beacon devices and the second mobile device, and wherein, when the determined second mobile device position is within a predetermined region associated with the second future physical element, a second augmented reality image based on the second data file is displayed at the second mobile device, the second augmented reality image comprising a real view of the real environment seen through the camera of the second mobile device in real-time overlaid with the 3D digital model including the second future physical element at the second future physical element location. 26. 
The method of claim 21, comprising: receiving, by the mobile device, a plurality of signals from the plurality of beacon devices; determining a position of the mobile device in the real-world coordinate system based on the plurality of received signals; capturing, using a camera of the mobile device, an image of the real environment including the existing physical elements; and providing, on a display screen of the mobile device, the augmented reality image comprising the real view of the real environment seen through the camera of the mobile device in real-time overlaid with the 3D digital model including the future physical element at the future physical element location in the real-world coordinate system. 27. The method of claim 26, further comprising: receiving the Northing, Easting, and Elevation coordinates associated with the plurality of beacon devices in the real-world coordinate system, the coordinates having been obtained with the surveying equipment, wherein the position of the mobile device is determined further based on the received Northing, Easting, and Elevation coordinates of the plurality of the beacon devices. 28. The method of claim 26, further comprising: retrieving, by the mobile device, the data file based on the determined mobile device position, the data file comprising the 3D digital model including the future physical element and future physical element location data. 29. The method of claim 26, wherein the determined mobile device position has at least ⅛th of an inch accuracy, and wherein the future physical element is displayed at the future physical element location in the augmented reality image with at least ⅛th of an inch accuracy. 30. The method of claim 26, wherein the mobile device position is determined using triangulation. 31. 
A method comprising: surveying, using surveying equipment, a real environment including existing physical elements, with one or more targets positioned at one or more control points in the real environment to determine coordinates including Northing, Easting, and Elevation coordinates of the one or more control points in relation to a real-world coordinate system; importing the coordinates of the one or more surveyed control points into a 3D modeling computer; generating, using the 3D modeling computer, a 3D digital model including the existing physical elements in relation to the real-world coordinate system; generating, using the 3D modeling computer, a first 3D digital model including a first future physical element; incorporating, using the 3D modeling computer, the first 3D digital model including the first future physical element at a first location within or proximate the 3D digital model including the existing physical elements, such that the first future physical element is associated with a first future physical element location in the real-world coordinate system; associating the first future physical element with a first category; storing a first data file comprising the first 3D digital model including the first future physical element, first future physical element location data, and a first category indicator; generating, using the 3D modeling computer, a second 3D digital model including a second future physical element; incorporating, using the 3D modeling computer, the second 3D digital model including the second future physical element at a second location within or proximate the 3D digital model including the existing physical elements, such that the second future physical element is associated with a second future physical element location in the real-world coordinate system; associating the second future physical element with a second category; and storing a second data file comprising the second 3D digital model including the second future 
physical element, second future physical element location data, and a second category indicator, wherein a first mobile device position of a first mobile device associated with the first category is determined based on communications between a plurality of beacon devices and the first mobile device, wherein, when the determined first mobile device position is within a predetermined region associated with the first future physical element, an augmented reality image based on the first data file is displayed at the first mobile device, the augmented reality image comprising a real view of the real environment seen through a camera of the first mobile device in real-time overlaid with the first 3D digital model including the first future physical element at the first future physical element location, wherein a second mobile device position of a second mobile device associated with the second category is determined based on communications between the plurality of beacon devices and the second mobile device, and wherein, when the determined second mobile device position is within a predetermined region associated with the second future physical element, an augmented reality image based on the second data file is displayed at the second mobile device, the augmented reality image comprising a real view of the real environment seen through the camera of the second mobile device in real-time overlaid with the second 3D digital model including the second future physical element at the second future physical element location. 32. The method of claim 31, wherein the first category is associated with a first profession, and wherein the second category is associated with a second profession. 33. The method of claim 32, wherein the first profession is one of a plumber, an electrician, a construction worker, a welder, a manager, a maintenance worker, or a painter. 34. 
The method of claim 31, wherein the first category is associated with a first set of mobile devices, and wherein the second category is associated with a second set of mobile devices. 35. The method of claim 34, wherein each mobile device in the first set of mobile devices includes a first type of physical marking, and wherein each mobile device in the second set of mobile devices includes a second type of physical marking. 36. A system comprising: a plurality of beacon devices at a plurality of beacon device locations in a real environment; a data acquisition device comprising surveying equipment configured to survey the real environment including existing physical elements, with one or more targets positioned at one or more control points in the real environment to determine coordinates of the one or more control points in relation to a real-world coordinate system, and survey the plurality of beacon devices, with a target positioned at each beacon device, to determine coordinates of the plurality of beacon devices in relation to the real-world coordinate system, the coordinates being Northing, Easting, and Elevation coordinates; and a 3D modeling computer comprising a processor, and a computer readable medium programmed to cause the processor to import the coordinates of the one or more surveyed control points, generate a 3D digital model including the existing physical elements in relation to the real-world coordinate system, generate a 3D digital model including a future physical element, incorporate the 3D digital model including the future physical element at a location within or proximate the 3D digital model including the existing physical elements, such that the future physical element is associated with a future physical element location in the real-world coordinate system, store a data file comprising the 3D digital model including the future physical element and future physical element location data, and store the coordinates including Northing, 
Easting, and Elevation coordinates of the plurality of beacon devices in a database; and a mobile device, wherein a mobile device position of the mobile device is determined based on communications between the plurality of beacon devices and the mobile device, and wherein when the determined mobile device position is within a predetermined region associated with the future physical element, an augmented reality image based on the data file is displayed at the mobile device, the augmented reality image comprising a real view of the real environment seen through a camera of the mobile device in real-time overlaid with the 3D digital model including the future physical element at the future physical element location. 37. The system of claim 36, wherein the 3D modeling computer comprises a project database comprising different project data, which is respectively obtainable by different mobile devices. 38. The system of claim 36, further comprising the existing physical elements and control points in the real environment that are not specifically associated with the beacon devices. 39. The system of claim 36, wherein the real environment is a construction site.
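The claims above describe triggering an augmented-reality overlay when a mobile device position, derived from beacon communications, falls within a predetermined region around a future physical element. A minimal sketch of that position/region check follows; the distance-weighted centroid estimate, function names, and radius threshold are illustrative assumptions, not the patent's actual positioning method.

```python
import math

def estimate_position(beacons):
    """Estimate a (Northing, Easting) position as a distance-weighted
    centroid of surveyed beacons.

    `beacons` is a list of (northing, easting, measured_distance) tuples;
    closer beacons receive larger weights.
    """
    weights = [1.0 / max(d, 1e-6) for (_, _, d) in beacons]
    total = sum(weights)
    n = sum(w * b[0] for w, b in zip(weights, beacons)) / total
    e = sum(w * b[1] for w, b in zip(weights, beacons)) / total
    return (n, e)

def in_region(position, element_location, radius):
    """True when the device is within `radius` of the future physical
    element's location, i.e. the AR overlay should be displayed."""
    dn = position[0] - element_location[0]
    de = position[1] - element_location[1]
    return math.hypot(dn, de) <= radius

# Three beacons roughly equidistant from a future element at (100, 200).
beacons = [(90, 200, 10.0), (110, 200, 10.0), (100, 190, 10.0)]
pos = estimate_position(beacons)
print(in_region(pos, (100, 200), 15.0))  # → True: overlay is shown
```

In a real deployment the beacon coordinates would come from the surveyed Northing/Easting/Elevation database the claims describe, and the region could be any shape, not just a radius.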
2,600
10,920
10,920
15,862,428
2,626
Implementations include methods of controlling a haptic response comprising receiving a force signal from a force sensor; determining a force magnitude associated with the force signal; comparing the force magnitude with an initial threshold force amount to determine whether the force magnitude exceeds the initial threshold force amount; measuring an elapsed time that the force magnitude exceeds the initial threshold force amount; comparing the elapsed time to a minimum elapsed time; in response to the elapsed time being greater than the minimum elapsed time, generating a haptic feedback control signal, the haptic feedback control signal causing a haptic actuator to propagate a plurality of pressure waves at a propagation frequency, the propagation frequency being proportional to the force magnitude; and generating a scroll control signal that causes a menu system to scroll through a plurality of menu options provided by the menu system at a scroll frequency associated with the propagation frequency.
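The abstract's control flow, exceed a force threshold, hold it for a minimum elapsed time, then emit haptic pressure waves at a frequency proportional to the force, can be sketched as below. The specific threshold, debounce time, and hertz-per-newton gain are illustrative assumptions; the patent does not specify numeric values.

```python
# Illustrative constants (assumptions, not values from the disclosure).
INITIAL_THRESHOLD_N = 0.5   # force must exceed this to register a press
MIN_ELAPSED_S = 0.15        # threshold must be held at least this long
HZ_PER_NEWTON = 40.0        # propagation frequency proportional to force

def haptic_control(force_n, elapsed_s):
    """Return (propagation_hz, scroll_hz), or None when no feedback is due.

    The scroll frequency is tied to the propagation frequency, matching
    the claim that the two frequencies are associated.
    """
    if force_n <= INITIAL_THRESHOLD_N:
        return None                  # force never exceeded the threshold
    if elapsed_s <= MIN_ELAPSED_S:
        return None                  # threshold not held long enough
    propagation_hz = HZ_PER_NEWTON * force_n
    scroll_hz = propagation_hz       # scroll rate follows the haptic rate
    return (propagation_hz, scroll_hz)

print(haptic_control(2.0, 0.3))   # → (80.0, 80.0)
print(haptic_control(2.0, 0.05))  # → None (held too briefly)
```

Pressing harder thus both strengthens the haptic pulse train and scrolls the menu faster, which is the interaction the dependent claims build on.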
1. An electronic device comprising: a touch sensitive interface comprising one or more force sensors and a touch surface, the touch surface transferring force received by the touch surface to the one or more force sensors; a haptic actuator; a memory; and a processor, the processor in electrical communication with the one or more force sensors, the haptic actuator, and the memory, wherein the processor executes instructions stored on the memory, the instructions causing the processor to: receive a force signal from the one or more force sensors; determine a force magnitude associated with the force signal; compare the force magnitude to an initial threshold force amount to determine if the force magnitude exceeds the initial threshold force amount; measure an elapsed time that the force magnitude exceeds the initial threshold force amount; compare the elapsed time to a minimum elapsed time; and in response to the elapsed time being greater than the minimum elapsed time, generate a haptic feedback control signal for communicating to the haptic actuator, the haptic feedback control signal causing the haptic actuator to propagate a plurality of pressure waves at a propagation frequency, the propagation frequency being proportional to the force magnitude. 2. The electronic device of claim 1, wherein the instructions further cause the processor to generate a scroll control signal for communicating to a menu system, the menu system having a plurality of menu options, the scroll control signal causing the menu system to scroll through the plurality of menu options at a scroll frequency. 3. The electronic device of claim 2, wherein the scroll frequency is associated with the propagation frequency. 4. (canceled) 5. (canceled) 6. (canceled) 7. (canceled) 8. (canceled) 9. (canceled) 10. (canceled) 11. (canceled) 12. (canceled) 13. (canceled) 14. (canceled) 15. 
The electronic device of claim 2, wherein the scroll frequency is prohibited from exceeding a maximum scroll frequency. 16. The electronic device of claim 3, wherein the scroll frequency is the same as the propagation frequency. 17. The electronic device of claim 16, wherein the propagation frequency is selected from a range of propagation frequencies between a minimum propagation frequency and a maximum propagation frequency, and wherein a difference between the maximum propagation frequency and the minimum propagation frequency is proportional to a number of menu options of the menu system. 18. The electronic device of claim 2, further comprising a display for displaying the plurality of menu options. 19. The electronic device of claim 18, wherein the display is disposed adjacent the touch surface. 20. (canceled) 21. The electronic device of claim 1, wherein the pressure wave comprises an inaudible pressure wave. 22. The electronic device of claim 1, wherein the pressure wave comprises an audible pressure wave. 23. The electronic device of claim 1, wherein the haptic output comprises inaudible and audible pressure waves propagated sequentially. 24. (canceled) 25. The electronic device of claim 23, wherein at least one inaudible pressure wave is propagated after at least one audible pressure wave is propagated. 26. (canceled) 27. The electronic device of claim 23, wherein at least one audible pressure wave is propagated after at least one inaudible pressure wave is propagated. 28. (canceled) 29. 
A method of controlling a haptic response comprising: receiving a force signal from one or more force sensors; determining a force magnitude associated with the force signal; comparing the force magnitude with an initial threshold force amount to determine whether the force magnitude exceeds the initial threshold force amount; measuring an elapsed time that the force magnitude exceeds the initial threshold force amount; comparing the elapsed time to a minimum elapsed time; and in response to the elapsed time being greater than the minimum elapsed time, generating a haptic feedback control signal, the haptic feedback control signal causing a haptic actuator to propagate a plurality of pressure waves at a propagation frequency, the propagation frequency being proportional to the force magnitude. 30. The method of claim 29, further comprising generating a scroll control signal, the scroll control signal causing a menu system to scroll through a plurality of menu options provided by the menu system at a scroll frequency. 31. The method of claim 30, wherein the scroll frequency is associated with the propagation frequency. 32. The method of claim 30, wherein the scroll control signal is generated over a period of time while the force signal is being received from the one or more force sensors, said period of time comprising at least two time periods, a first time period and a second time period. 33. The method of claim 32, wherein a first scroll frequency is associated with the first time period and a second scroll frequency is associated with the second time period. 34. The method of claim 33, wherein the first scroll frequency and the second scroll frequency are not the same. 35. (canceled) 36. (canceled) 37. (canceled) 38. (canceled) 39. (canceled) 40. (canceled) 41. (canceled) 42. (canceled) 43. (canceled) 44. The method of claim 31, wherein the scroll frequency is the same as the propagation frequency of the haptic outputs. 45. (canceled) 46. (canceled) 47. 
(canceled) 48. (canceled) 49. (canceled) 50. (canceled) 51. (canceled) 52. (canceled) 53. (canceled)
2,600
10,921
10,921
16,308,770
2,611
A visual diagnostics (VIDX) system comprises a visualization assembly, a data analytic assembly, and a data storage assembly communicatively coupled to each other via one or more buses. Other computer assemblies may be integrated into or coupled to the system. Various features comprise a calendar-based visualization, a timeline for multi-scale temporal exploration, a visual diagnostics graph, and multiple histograms showing the distribution of the cycle time on each station; a quantile range selector, such as a quantiles brush and/or a samples brush, of the visual diagnostics (VIDX) graph is displayed for interaction.
1. A visual analytics system comprising: a visualization assembly, the visualization assembly comprising: a historical data analytic device; and a real-time tracking device; a data analytic assembly; and a computer readable medium; wherein the real-time tracking device directly retrieves data from the data storage assembly for display. 2. The visual analytics system of claim 1 wherein the real-time tracking device further labels a set of normal processes, propagates the labels, and detects outliers from the labeled normal processes. 3. The visual analytics system of claim 2 wherein an outlier detector is configured to detect outliers from the labeled normal processes. 4. The visual analytics system of claim 3 wherein the system comprises a data aggregator configured to aggregate the normal processes based on temporal proximity. 5. The visual analytics system of claim 4 wherein the system comprises a display for displaying the detected outliers. 6. The visual analytics system of claim 5 further comprising a controller communicatively coupled to the visual analytics system for controlling and monitoring a plurality of parts on one or more assembly lines and one or more stations. 7. The visual analytics system of claim 6 wherein the controller records a cycle time and fault codes when the parts are being processed on the stations.
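Claims 2 through 4 describe labeling normal processes, detecting outliers against them, and aggregating normal processes by temporal proximity. A hedged sketch of that pipeline follows; the z-score rule and the gap threshold are illustrative stand-ins, as the claims do not specify the detection or aggregation method.

```python
from statistics import mean, pstdev

def detect_outliers(cycle_times, z=2.0):
    """Flag cycle times more than `z` standard deviations from the mean
    of the labeled normal distribution (assumed rule, for illustration)."""
    mu, sigma = mean(cycle_times), pstdev(cycle_times)
    return [t for t in cycle_times if sigma and abs(t - mu) > z * sigma]

def aggregate_by_proximity(timestamps, max_gap=5.0):
    """Group sorted timestamps into runs whose consecutive gaps are at
    most `max_gap`, i.e. aggregation based on temporal proximity."""
    groups, current = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] <= max_gap:
            current.append(t)
        else:
            groups.append(current)
            current = [t]
    groups.append(current)
    return groups

times = [10.1, 10.3, 9.9, 10.0, 25.0, 10.2]       # one abnormal cycle
print(detect_outliers(times))                      # → [25.0]
print(aggregate_by_proximity([0, 2, 3, 20, 21]))   # → [[0, 2, 3], [20, 21]]
```

The detected outliers would then feed the display of claim 5, while the temporal groups drive the calendar and timeline views named in the abstract.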
2,600
10,922
10,922
16,929,962
2,637
Methods and devices for power imbalance compensation and calibration of a coherent transmitter or transceiver are described. A pilot tone is combined with a digital data signal such that relative amplitudes of the pilot tone in each of four transmitted optical data channels may be detected by a pilot tone detector and used to calculate any power imbalances between I/Q phase channels and/or X/Y polarized channels of the transmitter. The pilot tone detector applies gain to the data signal of the transmitter to compensate for any calculated power imbalances. Because the pilot tone is combined with the digital data signal, its amplitude in each received channel is proportional to the data signal power of that channel.
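Because the pilot-tone amplitude in each channel is proportional to that channel's data-signal power, amplitude ratios between channels directly yield the I/Q or X/Y power imbalance, and the reciprocal ratio gives a corrective gain. A minimal sketch of that calculation follows; the channel names, dB convention, and gain formula are assumptions for illustration, not the disclosed detector's exact math.

```python
import math

def imbalance_db(amp_a, amp_b):
    """Power imbalance in dB between two detected pilot-tone amplitudes
    (20*log10 because amplitudes, not powers, are compared)."""
    return 20.0 * math.log10(amp_a / amp_b)

def corrective_gain(amp_a, amp_b):
    """Linear gain to apply to channel A so its power matches channel B."""
    return amp_b / amp_a

# Detected pilot-tone amplitudes per channel (XI, XQ, YI, YQ): illustrative
# values showing a weak XQ (in-phase/quadrature) channel.
amps = {"XI": 1.0, "XQ": 0.8, "YI": 1.0, "YQ": 1.0}
print(round(imbalance_db(amps["XI"], amps["XQ"]), 2))  # → 1.94
print(corrective_gain(amps["XI"], amps["XQ"]))         # → 0.8
```

In the device this feedback would go to the gain control unit, which applies the gain digitally in the DSP or in the analog driver amplifier, as the claims describe.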
1. A device comprising: a pilot tone generator configured to combine a pilot tone with a digital data signal, thereby generating a modified digital data signal; a digital to analog converter (DAC) for generating an analog data signal based on the digital data signal; an electro-optic modulator (EOM) configured to generate an optical signal based on the modified digital data signal, the optical signal including at least one in-phase modulation component and at least one quadrature modulation component; a gain control unit; and a pilot tone detector configured to: receive the optical signal; generate a pilot tone detector digital signal based on the optical signal; detect pilot tone signal characteristics based on the pilot tone detector digital signal; use the pilot tone signal characteristics to determine at least one modulation component power imbalance between at least one of the in-phase modulation components and at least one of the quadrature modulation components of the optical signal; and provide feedback to the gain control unit based on the at least one modulation component power imbalance, the gain control unit being configured to apply gain to the digital data signal or the analog data signal based on the feedback to compensate for the at least one modulation component power imbalance. 2. 
The device of claim 1, wherein: the at least one in-phase modulation component comprises a first in-phase modulation component at a first polarization direction and a second in-phase modulation component at a second polarization direction; the at least one quadrature modulation component comprises a first quadrature modulation component at a first polarization direction and a second quadrature modulation component at a second polarization direction; the pilot tone detector is further configured to use the pilot tone signal characteristics to determine at least one polarization direction power imbalance between at least one of the modulation components at the first polarization direction and at least one of the modulation components at the second polarization direction; and the feedback is also based on the polarization direction power imbalance. 3. The device of claim 2, wherein: the modified digital data signal is encoded in four orthogonal data channels: an XI channel encoding the first in-phase modulation component, a XQ channel encoding the first quadrature modulation component, an YI channel encoding the second in-phase modulation component, and a YQ channel encoding the second quadrature modulation component; and the electro-optic modulator comprises a dual-polarization quad-parallel Mach-Zehnder modulator having four channel paths, each channel path being modulated by one of the four orthogonal data channels. 4. The device of claim 3, wherein: the pilot tone is combined with the digital data signal using amplitude modulation; and detecting pilot tone signal characteristics based on the pilot tone detector digital signal comprises detecting an amplitude of the pilot tone in each of four channels of the pilot tone detector digital signal: a pilot tone detector XI channel, a pilot tone detector XQ channel, a pilot tone detector YI channel, and a pilot tone detector YQ channel. 5. 
The device of claim 4, wherein the pilot tone detector comprises: a low-speed photodetector for receiving the optical signal and generating a pilot tone detector analog signal based on the optical signal; an analog-to-digital converter (ADC) for generating the pilot tone detector digital signal based on the pilot tone detector analog signal; and a pilot tone detector digital signal processing (DSP) unit for: detecting the amplitude of the pilot tone in each of the four channels of the pilot tone detector digital signal by applying a fast Fourier transform to the pilot tone detector digital signal; and using the pilot tone signal characteristics to determine the at least one modulation component power imbalance and the at least one polarization direction power imbalance by calculating one or more amplitude ratios between the pilot tone detected in two or more of the four channels of the pilot tone detector digital signal. 6. The device of claim 4, wherein: the pilot tone comprises four pilot tone channels, each pilot tone channel having a different modulation frequency from the modulation frequency of each other pilot tone channel; and each pilot tone channel is combined with one of the four orthogonal data channels to generate the modified digital data signal. 7. The device of claim 4, wherein: combining the pilot tone with the digital data signal comprises: for each of four predetermined time periods, combining the pilot tone with a respective channel of the four orthogonal data channels using amplitude modulation; and detecting pilot tone signal characteristics comprises: identifying the four channels of the pilot tone detector digital signal; and for each of four sampling time periods, each sampling time period corresponding to one of the four predetermined time periods, detecting pilot tone signal characteristics of a respective channel of the pilot tone detector digital signal. 8. 
The device of claim 1, further comprising: an amplifier for amplifying the analog data signal to generate an amplified analog data signal, the amplified analog data signal driving the electro-optic modulator, the optical signal being based on the amplified analog data signal, wherein: the gain control unit applies gain adjustment based on the feedback to the amplifier to change a power level of the amplified analog data signal; and generating an analog data signal based on the digital data signal comprises converting the modified electrical digital signal into the analog data signal. 9. The device of claim 8, further comprising a digital signal processing (DSP) unit for setting a power level of the digital data signal, wherein the gain control unit further applies digital gain based on the feedback to the electrical digital signal using the DSP unit. 10. The device of claim 1, further comprising a digital signal processing (DSP) unit for setting a power level of the digital data signal, wherein the gain control unit applies digital gain based on the feedback to the electrical digital signal using the DSP unit. 11. 
A method comprising: using a pilot tone generator to combine a pilot tone with a digital data signal, thereby generating a modified digital data signal; using a digital to analog converter (DAC) to generate an analog data signal based on the digital data signal; using an electro-optic modulator (EOM) to generate an optical signal based on the modified digital data signal, the optical signal including at least one in-phase modulation component and at least one quadrature modulation component; using a pilot tone detector to: receive the optical signal; generate a pilot tone detector digital signal based on the optical signal; detect pilot tone signal characteristics based on the pilot tone detector digital signal; use the pilot tone signal characteristics to determine at least one modulation component power imbalance between at least one of the in-phase modulation components and at least one of the quadrature modulation components of the optical signal; and provide feedback to a gain control unit based on the at least one modulation component power imbalance; using the gain control unit to apply gain to the digital data signal or the analog data signal based on the feedback to compensate for the at least one modulation component power imbalance. 12. 
The method of claim 11, wherein: the at least one in-phase modulation component comprises a first in-phase modulation component at a first polarization direction and a second in-phase modulation component at a second polarization direction; the at least one quadrature modulation component comprises a first quadrature modulation component at a first polarization direction and a second quadrature modulation component at a second polarization direction; the pilot tone detector is further configured to use the pilot tone signal characteristics to determine at least one polarization direction power imbalance between at least one of the modulation components at the first polarization direction and at least one of the modulation components at the second polarization direction; and the feedback is also based on the polarization direction power imbalance. 13. The method of claim 12, wherein: the modified digital data signal is encoded in four orthogonal data channels: an XI channel encoding the first in-phase modulation component, a XQ channel encoding the first quadrature modulation component, an YI channel encoding the second in-phase modulation component, and a YQ channel encoding the second quadrature modulation component; and the electro-optic modulator comprises a dual-polarization quadrature-phase Mach-Zehnder modulator having four channel paths, each channel path being modulated by one of the four orthogonal data channels. 14. The method of claim 13, wherein: the pilot tone comprises four pilot tone channels, each pilot tone channel having a different modulation frequency from the modulation frequency of each other pilot tone channel; and each pilot tone channel is combined with one of the four orthogonal data channels to generate the modified electrical digital signal. 15. 
The method of claim 13, wherein using the pilot tone signal characteristics to determine the at least one modulation component power imbalance and the at least one polarization direction power imbalance comprises: detecting the amplitude of the pilot tone in each of the four channels of the pilot tone detector digital signal by applying a fast Fourier transform to the pilot tone detector digital signal; and calculating one or more amplitude ratios between the pilot tone detected in two or more of the four channels of the pilot tone detector digital signal. 16. The method of claim 13, wherein: combining the pilot tone with the digital data signal comprises: for each of four predetermined time periods, combining the pilot tone with a respective channel of the four orthogonal data channels using amplitude modulation; and detecting pilot tone signal characteristics comprises: identifying four channels of the pilot tone detector digital signal; and for each of four sampling time periods, each sampling time period corresponding to one of the four predetermined time periods, detecting pilot tone signal characteristics of a respective channel of the pilot tone detector digital signal. 17. 
A device comprising a pilot tone detector, the pilot tone detector comprising: a low-speed photodetector for receiving an optical signal and generating a pilot tone detector analog signal based on the optical signal; an analog-to-digital converter (ADC) for generating a pilot tone detector digital signal based on the pilot tone detector analog signal; and a pilot tone detector digital signal processor (DSP) for: detecting an amplitude of a pilot tone in each of four channels of the pilot tone detector digital signal by applying a fast Fourier transform to the pilot tone detector digital signal; and using signal characteristics of the pilot tone to determine at least one power imbalance by calculating one or more amplitude ratios between the pilot tone detected in two or more of the four channels of the pilot tone detector digital signal. 18. The device of claim 17, wherein the at least one power imbalance includes a modulation component power imbalance between an in-phase modulation component and a quadrature modulation component of the optical signal. 19. The device of claim 17, wherein the at least one power imbalance includes a polarization direction power imbalance between a first polarization direction and a second polarization direction of the optical signal. 20. 
The device of claim 17, further comprising: a pilot tone generator configured to combine the pilot tone with a digital data signal, thereby generating a modified digital data signal; a digital-to-analog converter (DAC) for converting the modified electrical digital signal into an analog data signal; an amplifier for amplifying the analog data signal to generate an amplified analog data signal for driving an electro-optic modulator; an electro-optic modulator (EOM) configured to generate an optical signal based on the modified digital data signal; and a gain control unit, wherein: the pilot tone detector DSP is further configured to provide feedback to the gain control unit based on the at least one power imbalance; and the gain control unit is configured to: apply digital gain based on the feedback to the digital data signal; and apply analog gain based on the feedback to the amplified analog data signal.
Methods and devices for power imbalance compensation and calibration of a coherent transmitter or transceiver are described. A pilot tone is combined with a digital data signal such that relative amplitudes of the pilot tone in each of four transmitted optical data channels may be detected by a pilot tone detector and used to calculate any power imbalances between I/Q phase channels and/or X/Y polarized channels of the transmitter. The pilot tone detector applies gain to the data signal of the transmitter to compensate for any calculated power imbalances. Because the pilot tone is combined with the digital data signal, its amplitude in each received channel is proportional to the data signal power of that channel.1. A device comprising: a pilot tone generator configured to combine a pilot tone with a digital data signal, thereby generating a modified digital data signal; a digital to analog converter (DAC) for generating an analog data signal based on the digital data signal; an electro-optic modulator (EOM) configured to generate an optical signal based on the modified digital data signal, the optical signal including at least one in-phase modulation component and at least one quadrature modulation component; a gain control unit; and a pilot tone detector configured to: receive the optical signal; generate a pilot tone detector digital signal based on the optical signal; detect pilot tone signal characteristics based on the pilot tone detector digital signal; use the pilot tone signal characteristics to determine at least one modulation component power imbalance between at least one of the in-phase modulation components and at least one of the quadrature modulation components of the optical signal; and provide feedback to the gain control unit based on the at least one modulation component power imbalance, the gain control unit being configured to apply gain to the digital data signal or the analog data signal based on the feedback to compensate for the at least 
one modulation component power imbalance. 2. The device of claim 1, wherein: the at least one in-phase modulation component comprises a first in-phase modulation component at a first polarization direction and a second in-phase modulation component at a second polarization direction; the at least one quadrature modulation component comprises a first quadrature modulation component at a first polarization direction and a second quadrature modulation component at a second polarization direction; the pilot tone detector is further configured to use the pilot tone signal characteristics to determine at least one polarization direction power imbalance between at least one of the modulation components at the first polarization direction and at least one of the modulation components at the second polarization direction; and the feedback is also based on the polarization direction power imbalance. 3. The device of claim 2, wherein: the modified digital data signal is encoded in four orthogonal data channels: an XI channel encoding the first in-phase modulation component, an XQ channel encoding the first quadrature modulation component, a YI channel encoding the second in-phase modulation component, and a YQ channel encoding the second quadrature modulation component; and the electro-optic modulator comprises a dual-polarization quad-parallel Mach-Zehnder modulator having four channel paths, each channel path being modulated by one of the four orthogonal data channels. 4. The device of claim 3, wherein: the pilot tone is combined with the digital data signal using amplitude modulation; and detecting pilot tone signal characteristics based on the pilot tone detector digital signal comprises detecting an amplitude of the pilot tone in each of four channels of the pilot tone detector digital signal: a pilot tone detector XI channel, a pilot tone detector XQ channel, a pilot tone detector YI channel, and a pilot tone detector YQ channel. 5. 
The device of claim 4, wherein the pilot tone detector comprises: a low-speed photodetector for receiving the optical signal and generating a pilot tone detector analog signal based on the optical signal; an analog-to-digital converter (ADC) for generating the pilot tone detector digital signal based on the pilot tone detector analog signal; and a pilot tone detector digital signal processing (DSP) unit for: detecting the amplitude of the pilot tone in each of the four channels of the pilot tone detector digital signal by applying a fast Fourier transform to the pilot tone detector digital signal; and using the pilot tone signal characteristics to determine the at least one modulation component power imbalance and the at least one polarization direction power imbalance by calculating one or more amplitude ratios between the pilot tone detected in two or more of the four channels of the pilot tone detector digital signal. 6. The device of claim 4, wherein: the pilot tone comprises four pilot tone channels, each pilot tone channel having a different modulation frequency from the modulation frequency of each other pilot tone channel; and each pilot tone channel is combined with one of the four orthogonal data channels to generate the modified digital data signal. 7. The device of claim 4, wherein: combining the pilot tone with the digital data signal comprises: for each of four predetermined time periods, combining the pilot tone with a respective channel of the four orthogonal data channels using amplitude modulation; and detecting pilot tone signal characteristics comprises: identifying the four channels of the pilot tone detector digital signal; and for each of four sampling time periods, each sampling time period corresponding to one of the four predetermined time periods, detecting pilot tone signal characteristics of a respective channel of the pilot tone detector digital signal. 8. 
The device of claim 1, further comprising: an amplifier for amplifying the analog data signal to generate an amplified analog data signal, the amplified analog data signal driving the electro-optic modulator, the optical signal being based on the amplified analog data signal, wherein: the gain control unit applies gain adjustment based on the feedback to the amplifier to change a power level of the amplified analog data signal; and generating an analog data signal based on the digital data signal comprises converting the modified electrical digital signal into the analog data signal. 9. The device of claim 8, further comprising a digital signal processing (DSP) unit for setting a power level of the digital data signal, wherein the gain control unit further applies digital gain based on the feedback to the electrical digital signal using the DSP unit. 10. The device of claim 1, further comprising a digital signal processing (DSP) unit for setting a power level of the digital data signal, wherein the gain control unit applies digital gain based on the feedback to the electrical digital signal using the DSP unit. 11. 
A method comprising: using a pilot tone generator to combine a pilot tone with a digital data signal, thereby generating a modified digital data signal; using a digital to analog converter (DAC) to generate an analog data signal based on the digital data signal; using an electro-optic modulator (EOM) to generate an optical signal based on the modified digital data signal, the optical signal including at least one in-phase modulation component and at least one quadrature modulation component; using a pilot tone detector to: receive the optical signal; generate a pilot tone detector digital signal based on the optical signal; detect pilot tone signal characteristics based on the pilot tone detector digital signal; use the pilot tone signal characteristics to determine at least one modulation component power imbalance between at least one of the in-phase modulation components and at least one of the quadrature modulation components of the optical signal; and provide feedback to a gain control unit based on the at least one modulation component power imbalance; using the gain control unit to apply gain to the digital data signal or the analog data signal based on the feedback to compensate for the at least one modulation component power imbalance. 12. 
The method of claim 11, wherein: the at least one in-phase modulation component comprises a first in-phase modulation component at a first polarization direction and a second in-phase modulation component at a second polarization direction; the at least one quadrature modulation component comprises a first quadrature modulation component at a first polarization direction and a second quadrature modulation component at a second polarization direction; the pilot tone detector is further configured to use the pilot tone signal characteristics to determine at least one polarization direction power imbalance between at least one of the modulation components at the first polarization direction and at least one of the modulation components at the second polarization direction; and the feedback is also based on the polarization direction power imbalance. 13. The method of claim 12, wherein: the modified digital data signal is encoded in four orthogonal data channels: an XI channel encoding the first in-phase modulation component, an XQ channel encoding the first quadrature modulation component, a YI channel encoding the second in-phase modulation component, and a YQ channel encoding the second quadrature modulation component; and the electro-optic modulator comprises a dual-polarization quadrature-phase Mach-Zehnder modulator having four channel paths, each channel path being modulated by one of the four orthogonal data channels. 14. The method of claim 13, wherein: the pilot tone comprises four pilot tone channels, each pilot tone channel having a different modulation frequency from the modulation frequency of each other pilot tone channel; and each pilot tone channel is combined with one of the four orthogonal data channels to generate the modified electrical digital signal. 15. 
The method of claim 13, wherein using the pilot tone signal characteristics to determine the at least one modulation component power imbalance and the at least one polarization direction power imbalance comprises: detecting the amplitude of the pilot tone in each of the four channels of the pilot tone detector digital signal by applying a fast Fourier transform to the pilot tone detector digital signal; and calculating one or more amplitude ratios between the pilot tone detected in two or more of the four channels of the pilot tone detector digital signal. 16. The method of claim 13, wherein: combining the pilot tone with the digital data signal comprises: for each of four predetermined time periods, combining the pilot tone with a respective channel of the four orthogonal data channels using amplitude modulation; and detecting pilot tone signal characteristics comprises: identifying four channels of the pilot tone detector digital signal; and for each of four sampling time periods, each sampling time period corresponding to one of the four predetermined time periods, detecting pilot tone signal characteristics of a respective channel of the pilot tone detector digital signal. 17. 
A device comprising a pilot tone detector, the pilot tone detector comprising: a low-speed photodetector for receiving an optical signal and generating a pilot tone detector analog signal based on the optical signal; an analog-to-digital converter (ADC) for generating a pilot tone detector digital signal based on the pilot tone detector analog signal; and a pilot tone detector digital signal processor (DSP) for: detecting an amplitude of a pilot tone in each of four channels of the pilot tone detector digital signal by applying a fast Fourier transform to the pilot tone detector digital signal; and using signal characteristics of the pilot tone to determine at least one power imbalance by calculating one or more amplitude ratios between the pilot tone detected in two or more of the four channels of the pilot tone detector digital signal. 18. The device of claim 17, wherein the at least one power imbalance includes a modulation component power imbalance between an in-phase modulation component and a quadrature modulation component of the optical signal. 19. The device of claim 17, wherein the at least one power imbalance includes a polarization direction power imbalance between a first polarization direction and a second polarization direction of the optical signal. 20. 
The device of claim 17, further comprising: a pilot tone generator configured to combine the pilot tone with a digital data signal, thereby generating a modified digital data signal; a digital-to-analog converter (DAC) for converting the modified electrical digital signal into an analog data signal; an amplifier for amplifying the analog data signal to generate an amplified analog data signal for driving an electro-optic modulator; an electro-optic modulator (EOM) configured to generate an optical signal based on the modified digital data signal; and a gain control unit, wherein: the pilot tone detector DSP is further configured to provide feedback to the gain control unit based on the at least one power imbalance; and the gain control unit is configured to: apply digital gain based on the feedback to the digital data signal; and apply analog gain based on the feedback to the amplified analog data signal.
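The pilot-tone detector claims above boil down to one DSP step: FFT the low-speed detector signal, read the amplitude of each channel's pilot tone, and form amplitude ratios as the power-imbalance measure. The sketch below illustrates that step under stated assumptions; the sample rate, tone frequencies, and channel names are invented for the demo and are not values from the patent.

```python
import numpy as np

# Assumed detector parameters (not from the patent): the FFT length is chosen
# so each pilot tone lands on an exact bin (fs / N = 100 Hz resolution).
FS = 1_000_000
N = 10_000
PILOT_HZ = {"XI": 10_000, "XQ": 12_000, "YI": 14_000, "YQ": 16_000}

def pilot_amplitudes(samples):
    """FFT the detector signal and read off each pilot tone's amplitude."""
    spectrum = 2 * np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1 / FS)
    return {ch: spectrum[np.argmin(np.abs(freqs - f))]
            for ch, f in PILOT_HZ.items()}

def imbalances(amps):
    """Amplitude ratios: I/Q imbalance per polarization, plus X/Y imbalance."""
    return {"IQ_X": amps["XI"] / amps["XQ"],
            "IQ_Y": amps["YI"] / amps["YQ"],
            "XY": (amps["XI"] + amps["XQ"]) / (amps["YI"] + amps["YQ"])}

# Demo: the XQ channel runs 1 dB low; the detected IQ_X ratio exposes it,
# and the gain control unit would apply the inverse gain to compensate.
t = np.arange(N) / FS
gain = {"XI": 1.0, "XQ": 10 ** (-1 / 20), "YI": 1.0, "YQ": 1.0}
sig = sum(g * np.sin(2 * np.pi * PILOT_HZ[ch] * t) for ch, g in gain.items())
ratios = imbalances(pilot_amplitudes(sig))
```

Because the pilot tone rides on the data signal itself, each detected tone amplitude scales with that channel's data power, which is what makes these ratios a usable imbalance estimate.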
2,600
10,923
10,923
16,447,559
2,628
A method for controlling a computer display cursor in an interaction region includes projecting an image of the computer display to create the interaction region. A distance is established between a first point and a second point. The first point has a predetermined relation to the projection device, and the second point has a predetermined relation to the interaction region. At least one of an orientation and a position of a pointing line is measured. The pointing line has a predetermined relation to a pointing device. The established distance and the at least one of measured position and orientation are used to control the cursor position on the interaction region.
1. A non-transitory computer-readable medium or media storing computer-executable instructions for directing a computer to perform a method for controlling the contents of a computer-generated image in conjunction with an enclosure comprising a plurality of sensors and configured to generate a pattern of light representing the image, the plurality of sensors comprising: an accelerometer; a gyro; an image capturing device; a magnetometer; and a sensing device that is sensitive to a position of the enclosure and orientation of the enclosure; the method comprising the steps of: receiving data that is dependent on an output of at least one of the plurality of sensors; and controlling the contents of the computer-generated image based on the data. 2. The non-transitory computer-readable medium or media according to claim 1, wherein the plurality of sensors further comprises a user input device configured to allow the user to manually provide two dimensional input, the user input device being insensitive to position and orientation of the enclosure. 3. The non-transitory computer-readable medium or media according to claim 2, wherein the plurality of sensors further comprises a pressure sensor. 4. The non-transitory computer-readable medium or media according to claim 2, wherein the pattern of light is generated by a display for displaying said image, the display being part of the enclosure. 5. The non-transitory computer-readable medium or media according to claim 2, wherein the enclosure further comprises a laser configured to project a light spot in a direction that lies at a non-zero angle with the optical axis of the image capturing device. 6. The non-transitory computer-readable medium or media according to claim 5, wherein the laser is configured to project infrared light, and the enclosure contains a computer-readable identification code that distinguishes the enclosure among a plurality of like enclosures. 7. 
The non-transitory computer-readable medium or media according to claim 1, wherein the sensing device comprises the accelerometer, the gyro, the image capturing device and the magnetometer. 8. An apparatus for generating first data and second data for use in controlling a computer-generated image, the first data being dependent on a position of a first device relative to a second device, the second data being dependent on a position of the second device, the first device configured to be handheld and wielded in mid-air by a user, the second device configured to be taken along by the user, the apparatus comprising: a first sensing device for generating the first data, the first sensing device being sensitive to a position of the first device relative to the second device; and a second sensing device for generating the second data, the second sensing device being sensitive to a position of the second device; wherein the first device contains at least one element of the first sensing device. 9. The apparatus according to claim 8, wherein the first data is dependent on an orientation of the first device, and the second data is dependent on an orientation of the second device. 10. The apparatus according to claim 9, wherein the first sensing device is sensitive to three independent positional coordinates of the first device relative to the second device and three independent orientational coordinates of the first device, and the second sensing device comprises a magnetometer and is sensitive to three independent positional coordinates of the second device and three independent orientational coordinates of the second device. 11. 
The apparatus according to claim 10, wherein the first sensing device comprises a first accelerometer and a first gyro, and the second sensing device comprises a second accelerometer and a second gyro and is sensitive to three independent positional coordinates of the second device relative to a point external to the first device and to the second device. 12. The apparatus according to claim 11, wherein the first sensing device comprises a digital camera. 13. A non-transitory computer-readable medium or media storing computer-executable instructions for directing a computer to perform a method for controlling the contents of a computer-generated image in conjunction with an enclosure that contains a plurality of sensors and that contains a projection device, the plurality of sensors comprising an accelerometer and a gyro and a magnetometer and an image capturing device and a sensing device that is sensitive to a position of the enclosure and orientation of the enclosure; the method comprising the steps of: causing the projection device to project a plurality of distinct points, causing the image capturing device to capture a first image, receiving data that is dependent on an output of at least one of the plurality of sensors, and using the data to control the contents of the computer-generated image when the data depends on at least a part of the first image and the first image contains a second image of at least one of the plurality of distinct points. 14. The non-transitory computer-readable medium or media according to claim 13, wherein each of the plurality of distinct points is projected substantially simultaneously with the plurality of distinct points. 15. The non-transitory computer-readable medium or media according to claim 13, wherein each of the plurality of distinct points is projected in a direction that has a non-zero angle relative to the optical axis of the image capturing device. 16. 
The non-transitory computer-readable medium or media according to claim 15, wherein the projection device comprises a laser. 17. The non-transitory computer-readable medium or media according to claim 16, wherein the laser is for projecting infrared light, and the enclosure contains a computer-readable identification code that distinguishes the enclosure among a plurality of like enclosures. 18. The non-transitory computer-readable medium or media according to claim 13, wherein the enclosure contains a computer-readable identification code that distinguishes the enclosure among a plurality of like enclosures, and the plurality of sensors comprises a pressure sensor. 19. The non-transitory computer-readable medium or media according to claim 13, wherein the sensing device is sensitive to two independent positional coordinates of the enclosure and three independent orientational coordinates of the enclosure. 20. The non-transitory computer-readable medium or media according to claim 13, wherein the sensing device is sensitive to three independent positional coordinates of the enclosure and three independent orientational coordinates of the enclosure.
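Claims 13-20 rely on the image capturing device finding the projected distinct points in a captured frame. A common way to do that, sketched below on a synthetic frame, is to threshold the image and take the centroid of each bright blob; the frame contents and threshold are invented for the demo, and a real detector would also have to reject ambient highlights.

```python
import numpy as np

def bright_spot_centroids(frame, threshold):
    """Return (row, col) centroids of 4-connected regions above `threshold`."""
    mask = frame > threshold
    labels = np.zeros(frame.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # pixel already assigned to a blob
        current += 1
        labels[seed] = current
        stack = [seed]
        while stack:                      # flood-fill one connected blob
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < frame.shape[0] and 0 <= nc < frame.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
    centroids = []
    for k in range(1, current + 1):
        rs, cs = np.nonzero(labels == k)
        centroids.append((float(rs.mean()), float(cs.mean())))
    return centroids

# Demo: two projected spots on a dark 20x20 frame.
frame = np.zeros((20, 20))
frame[4:6, 4:6] = 1.0                    # spot centered at (4.5, 4.5)
frame[14:16, 10:12] = 1.0                # spot centered at (14.5, 10.5)
spots = sorted(bright_spot_centroids(frame, 0.5))
```

Projecting the points at a non-zero angle to the camera's optical axis (claim 15) makes the spots' image positions shift with distance, which is what lets their detected locations feed back into position sensing.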
2,600
10,924
10,924
16,048,590
2,689
Novel tools and techniques for low-power wireless access control are provided. A system includes an access control server, network device, and a low-power wireless device. The low-power wireless device may include a low-power wireless transceiver configured to communicate with a mobile device, a processor, and non-transitory computer readable media comprising instructions executable by the processor to establish a low-power wireless connection with the mobile device, obtain authorization information from the mobile device, transmit the authorization information to the access control server, receive an access determination from the access control server, and perform a secure function based on the access determination.
1. A system comprising: an access control server; a network device in communication with the access control server; a low-power wireless device in communication with the network device, the low-power wireless device comprising: a low-power wireless transceiver configured to communicate with a mobile device; a processor; non-transitory computer readable media comprising instructions executable by the processor to: establish, via the low-power wireless transceiver, a low-power wireless connection with the mobile device; obtain, via the low-power wireless connection to the mobile device, authorization information associated with a user of the mobile device; transmit, via the network device, the authorization information to the access control server; receive, via the network device, an access determination from the access control server; and perform a secure function based on the access determination, wherein the access determination is indicative of whether the user of the mobile device is authorized to access the secure function; wherein the mobile device is configured to interface with the low-power wireless device, and to transmit authorization information associated with the user of the mobile device. 2. The system of claim 1, wherein the access control server is configured to receive, from the low-power wireless device, the authorization information; determine whether the user of the mobile device is an authorized user; and transmit the access authorization to the low-power wireless device. 3. The system of claim 1, wherein the network device is a router, switch, or modem coupled to the low-power wireless device via a communication network. 4. The system of claim 3, wherein the communication network is a low-power wireless area network. 5. The system of claim 3, wherein the communication network is a powerline communication network. 6. 
The system of claim 3, wherein the access control server is remotely accessible by the network device via an external network separate from the communication network through which the network device is coupled to the low-power wireless device. 7. The system of claim 1, wherein the mobile device is further configured to receive, via the access control server, a secondary authorization request, and transmit a secondary authorization confirmation to the access control server responsive to the secondary authorization request, wherein the access authorization indicates whether the user is authorized to access the secure feature based, at least in part, on receipt, by the access control server, of the secondary authorization confirmation. 8. The system of claim 1, wherein the mobile device is further configured to obtain authorization information based on authentication information provided by the user. 9. The system of claim 8, wherein the mobile device is communicatively coupled to the access control server, wherein the mobile device is configured to transmit the authentication information to the access control server, and receive authorization information from the access control server, based on the authentication information. 10. 
An apparatus comprising: a low-power wireless transceiver configured to communicate with a mobile device; a processor; non-transitory computer readable media comprising instructions executable by the processor to: establish, via the low-power wireless transceiver, a low-power wireless connection with the mobile device; obtain, via the low-power wireless connection to the mobile device, authorization information associated with a user of the mobile device; transmit, via a network device, the authorization information to an access control server; receive, via the network device, an access determination from the access control server; and perform a secure function based on the access determination, wherein the access determination is indicative of whether the user of the mobile device is authorized to access the secure function. 11. The apparatus of claim 10, wherein the instructions are further executable by the processor to: receive, via the mobile device, authentication information associated with the user of the mobile device; and obtain authorization information associated with the user based on the authentication information. 12. The apparatus of claim 10, wherein the instructions are further executable by the processor to: establish, via a first communication network, a second connection to the network device, wherein the network device is coupled to the access control server via a second communication network. 13. The apparatus of claim 12, wherein the first communication network is a powerline communication network. 14. The apparatus of claim 12, wherein the first communication network is a low-power wireless area network. 15. 
The apparatus of claim 10, wherein the instructions are further executable by the processor to: transmit, via the low-power wireless transceiver, a secondary authorization request to the mobile device; and determine whether a secondary authorization confirmation responsive to the secondary authorization request has been sent, by the mobile device, to the access control server. 16. The apparatus of claim 10, wherein the low-power wireless transceiver includes at least one of a Bluetooth LE transceiver, LPWAN transceiver, low-power Wi-Fi transceiver, or Zigbee transceiver. 17. A method comprising: establishing, via a low-power wireless device, a low-power wireless connection with the mobile device; obtaining, via the low-power wireless device, authorization information associated with a user of the mobile device from the mobile device over the low-power wireless connection; transmitting, via the low-power wireless device, the authorization information to a network device; transmitting, via the network device, the authorization information to an access control server; receiving, via the network device, an access determination from the access control server; transmitting, via the network device, the access determination to the low-power wireless device; and performing, via the low-power wireless device, a secure function based on the access determination, wherein the access determination is indicative of whether the user of the mobile device is authorized to access the secure function. 18. The method of claim 17 further comprising: receiving, via the access control server and from the low-power wireless device, the authorization information; determining, via the access control server, whether the user of the mobile device is an authorized user; and transmitting, via the access control server, the access authorization to the low-power wireless device. 19. 
The method of claim 17 further comprising: establishing, with the low-power wireless device, a second connection to the network device via a first communication network, wherein the network device is coupled to the access control server via a second communication network different from the first. 20. The method of claim 17 further comprising: transmitting, via the access control server, a secondary authorization request to the mobile device; receiving, via the access control server, a secondary authorization confirmation responsive to the secondary authorization request; and generating, via the access control server, the access determination, wherein the access determination is based, at least in part, on whether the secondary authorization confirmation was received from the mobile device.
Novel tools and techniques for low-power wireless access control are provided. A system includes an access control server, network device, and a low-power wireless device. The low-power wireless device may include a low-power wireless transceiver configured to communicate with a mobile device, a processor, and non-transitory computer readable media comprising instructions executable by the processor to establish a low-power wireless connection with the mobile device, obtain authorization information from the mobile device, transmit the authorization information to the access control server, receive an access determination from the access control server, and perform a secure function based on the access determination.1. A system comprising: an access control server; a network device in communication with the access control server; a low-power wireless device in communication with the network device, the low-power wireless device comprising: a low-power wireless transceiver configured to communicate with a mobile device; a processor; non-transitory computer readable media comprising instructions executable by the processor to: establish, via the low-power wireless transceiver, a low-power wireless connection with the mobile device; obtain, via the low-power wireless connection to the mobile device, authorization information associated with a user of the mobile device; transmit, via the network device, the authorization information to the access control server; receive, via the network device, an access determination from the access control server; and perform a secure function based on the access determination, wherein the access determination is indicative of whether the user of the mobile device is authorized to access the secure function; wherein the mobile device is configured to interface with the low-power wireless device, and to transmit authorization information associated with the user of the mobile device. 2. 
The system of claim 1, wherein the access control server is configured to receive, from the low-power wireless device, the authorization information; determine whether the user of the mobile device is an authorized user; and transmit the access determination to the low-power wireless device. 3. The system of claim 1, wherein the network device is a router, switch, or modem coupled to the low-power wireless device via a communication network. 4. The system of claim 3, wherein the communication network is a low-power wireless area network. 5. The system of claim 3, wherein the communication network is a powerline communication network. 6. The system of claim 3, wherein the access control server is remotely accessible by the network device via an external network separate from the communication network through which the network device is coupled to the low-power wireless device. 7. The system of claim 1, wherein the mobile device is further configured to receive, via the access control server, a secondary authorization request, and transmit a secondary authorization confirmation to the access control server responsive to the secondary authorization request, wherein the access determination indicates whether the user is authorized to access the secure function based, at least in part, on receipt, by the access control server, of the secondary authorization confirmation. 8. The system of claim 1, wherein the mobile device is further configured to obtain authorization information based on authentication information provided by the user. 9. The system of claim 8, wherein the mobile device is communicatively coupled to the access control server, wherein the mobile device is configured to transmit the authentication information to the access control server, and receive authorization information from the access control server, based on the authentication information. 10. 
An apparatus comprising: a low-power wireless transceiver configured to communicate with a mobile device; a processor; non-transitory computer readable media comprising instructions executable by the processor to: establish, via the low-power wireless transceiver, a low-power wireless connection with the mobile device; obtain, via the low-power wireless connection to the mobile device, authorization information associated with a user of the mobile device; transmit, via a network device, the authorization information to an access control server; receive, via the network device, an access determination from the access control server; and perform a secure function based on the access determination, wherein the access determination is indicative of whether the user of the mobile device is authorized to access the secure function. 11. The apparatus of claim 10, wherein the instructions are further executable by the processor to: receive, via the mobile device, authentication information associated with the user of the mobile device; and obtain authorization information associated with the user based on the authentication information. 12. The apparatus of claim 10, wherein the instructions are further executable by the processor to: establish, via a first communication network, a second connection to the network device, wherein the network device is coupled to the access control server via a second communication network. 13. The apparatus of claim 12, wherein the first communication network is a powerline communication network. 14. The apparatus of claim 12, wherein the first communication network is a low-power wireless area network. 15. 
The apparatus of claim 10, wherein the instructions are further executable by the processor to: transmit, via the low-power wireless transceiver, a secondary authorization request to the mobile device; and determine whether a secondary authorization confirmation responsive to the secondary authorization request has been sent, by the mobile device, to the access control server. 16. The apparatus of claim 10, wherein the low-power wireless transceiver includes at least one of a Bluetooth LE transceiver, LPWAN transceiver, low-power Wi-Fi transceiver, or Zigbee transceiver. 17. A method comprising: establishing, via a low-power wireless device, a low-power wireless connection with a mobile device; obtaining, via the low-power wireless device, authorization information associated with a user of the mobile device from the mobile device over the low-power wireless connection; transmitting, via the low-power wireless device, the authorization information to a network device; transmitting, via the network device, the authorization information to an access control server; receiving, via the network device, an access determination from the access control server; transmitting, via the network device, the access determination to the low-power wireless device; and performing, via the low-power wireless device, a secure function based on the access determination, wherein the access determination is indicative of whether the user of the mobile device is authorized to access the secure function. 18. The method of claim 17 further comprising: receiving, via the access control server and from the low-power wireless device, the authorization information; determining, via the access control server, whether the user of the mobile device is an authorized user; and transmitting, via the access control server, the access determination to the low-power wireless device. 19. 
The method of claim 17 further comprising: establishing, with the low-power wireless device, a second connection to the network device via a first communication network, wherein the network device is coupled to the access control server via a second communication network different from the first. 20. The method of claim 17 further comprising: transmitting, via the access control server, a secondary authorization request to the mobile device; receiving, via the access control server, a secondary authorization confirmation responsive to the secondary authorization request; and generating, via the access control server, the access determination, wherein the access determination is based, at least in part, on whether the secondary authorization confirmation was received from the mobile device.
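The access-control flow recited in the claims above (mobile device to low-power wireless device, relay to the access control server, receive an access determination, then gate a secure function) can be sketched in a few lines. All class and token names below are illustrative assumptions for this sketch, not part of the patent or any real library, and the network-device hop is collapsed into a direct reference.

```python
# Illustrative sketch of the claimed flow: a low-power wireless device
# obtains authorization information from a mobile device, relays it to an
# access control server, and performs a secure function only on a positive
# access determination. All names here are hypothetical.

class AccessControlServer:
    """Produces the 'access determination' for a given user's token."""

    def __init__(self, authorized_tokens):
        self._authorized = set(authorized_tokens)

    def determine_access(self, authorization_info):
        # True means the user is authorized to access the secure function.
        return authorization_info in self._authorized


class LowPowerWirelessDevice:
    """Stands in for the lock/actuator holding the low-power transceiver."""

    def __init__(self, server):
        # In the claims the server is reached through a network device;
        # that intermediate hop is collapsed here for brevity.
        self._server = server

    def handle_mobile_connection(self, authorization_info):
        determination = self._server.determine_access(authorization_info)
        # The secure function (e.g. unlocking) is gated by the determination.
        return "unlocked" if determination else "denied"


server = AccessControlServer(authorized_tokens={"user-token-123"})
device = LowPowerWirelessDevice(server)
print(device.handle_mobile_connection("user-token-123"))  # unlocked
print(device.handle_mobile_connection("stolen-token"))    # denied
```

The secondary-authorization claims would add one more round trip (server pushes a confirmation request to the mobile device before producing the determination), which this sketch omits.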
2,600
10,925
10,925
16,516,281
2,631
A method for determining a noise shaped quantized parameter contributing to generation of an output signal comprises estimating an error within the output signal using a quantization of the parameter and a quantization of a further parameter contributing to generation of the output signal. The quantization of the parameter is used as the noise shaped quantized parameter according to a selection criterion.
1. (canceled) 2. A method for determining a noise shaped quantized parameter contributing to generation of an output signal, comprising: calculating a first estimated error within the output signal using a first quantization of the parameter and a first quantization of the further parameter; calculating a second estimated error within the output signal using a second quantization of the parameter and the first quantization of the further parameter; and using the quantization of the parameter resulting in the lower estimated error as the noise shaped quantized parameter. 3. The method of claim 2, further comprising: calculating a third estimated error within the output signal using the first quantization of the parameter and a second quantization of the further parameter; calculating a fourth estimated error within the output signal using the second quantization of the parameter and the second quantization of the further parameter; and using the quantization of the parameter and the quantization of the further parameter resulting in the lowest estimated error as the noise shaped quantized parameter and as a noise shaped quantized further parameter. 4. The method of claim 2, wherein the first quantization corresponds to the smallest possible distance between the first quantization of the parameter and the parameter; and wherein the second quantization corresponds to the second smallest possible distance between the second quantization of the parameter and the parameter for the quantization scheme used. 5. The method of claim 2, wherein calculating the first estimated error and the second estimated error comprises: using information on a notch frequency at which the error within a spectrum of the output signal shall be small. 6. The method of claim 2, wherein the parameter corresponds to a phase value of an output signal and wherein the further parameter corresponds to a radius value of the output signal. 7. 
The method of claim 6, wherein the calculation of the first estimated error Qnew1 and of the second estimated error Qnew2 is based on the following expressions: Qnew1 = Qold + radclosest(t)*2*(Treal − Tclosest)*[cos(w0t) − I*sin(w0t)]; Qnew2 = Qold + radclosest(t)*2*(Treal − TSecondClosest)*[cos(w0t) − I*sin(w0t)]; wherein radclosest corresponds to the radius value, Treal corresponds to the parameter, Tclosest corresponds to the first quantization of the parameter Treal, TSecondClosest corresponds to the second quantization of the parameter Treal, w0 corresponds to the notch frequency and Qold corresponds to a remaining error contribution of an antecedent quantization. 8. A method for generating a radio frequency signal based on a phase value as a parameter and a radius value as a further parameter, comprising: calculating a first estimated error within the radio frequency signal using a first quantization of the phase value and a quantization of the radius value; calculating a second estimated error within the radio frequency signal using a second quantization of the phase value and the quantization of the radius value; and using the quantization of the phase value resulting in the lower estimated error as a noise shaped quantized phase value. 9. The method of claim 7, wherein the calculation of the first estimated error and the calculation of the second estimated error use a remaining error contribution of an antecedent quantization. 10. The method of claim 8, further comprising: using the noise shaped quantized phase value to determine a first analog signal having a phase characteristic depending on the quantized phase value; and using the quantization of the radius value to determine a second analog signal having an amplitude depending on the quantized radius value. 11. The method of claim 10, further comprising: combining the first analog signal and the second analog signal to generate the radio frequency signal. 12. 
A noise shaper for determining a noise shaped quantized parameter contributing to generation of an output signal, comprising: an error estimation component configured to receive a parameter and a further parameter contributing to the generation of the output signal, the error estimation component comprising: quantization circuitry configured to calculate a first quantization of the parameter, a second quantization of the parameter, and a quantization of the further parameter; error calculation circuitry configured to calculate a first estimated error within the output signal using the first quantization of the parameter and the quantization of the further parameter and to calculate a second estimated error within the output signal using the second quantization of the parameter and the quantization of the further parameter; and a decision making component configured to compare the first estimated error and the second estimated error to decide on the use of the quantization of the parameter resulting in the lower estimated error. 13. The noise shaper of claim 12, wherein the quantization circuitry is further configured to calculate a first quantization of the further parameter and a second quantization of the further parameter; the error calculation circuitry is further configured to calculate a third estimated error within the output signal using the first quantization of the parameter and the second quantization of the further parameter and a fourth estimated error within the output signal using the second quantization of the parameter and the second quantization of the further parameter; and the decision making component is further configured to decide on the use of the quantization of the parameter and the quantization of the further parameter resulting in the lowest estimated error. 14. 
A radio frequency generator for generating a radio frequency signal based on a parameter and on a further parameter, comprising: an error estimation component configured to receive a parameter and a further parameter contributing to the generation of the output signal, the error estimation component comprising: quantization circuitry configured to calculate a first quantization of the parameter, a second quantization of the parameter, and a quantization of the further parameter; error calculation circuitry configured to calculate a first estimated error within the output signal using the first quantization of the parameter and the quantization of the further parameter and to calculate a second estimated error within the output signal using the second quantization of the parameter and the quantization of the further parameter; and a decision making component configured to compare the first estimated error and the second estimated error to decide on the use of the quantization of the parameter resulting in the lower estimated error as a noise shaped quantized parameter; and a signal combiner configured to combine a first signal depending on the noise shaped quantized parameter and a second signal depending on the quantization of the further parameter to provide the output signal. 15. The radio frequency generator of claim 14, wherein the signal combiner comprises: a first digital to analog converter configured to provide an analog representation of the noise shaped quantized parameter; a second digital to analog converter configured to provide an analog representation of the quantization of the further parameter; and a signal generator configured to combine the analog representation of the noise shaped quantized parameter and the analog representation of the quantized further parameter. 16. 
The radio frequency generator of claim 14, wherein the parameter indicates a phase component of the radio frequency signal and the further parameter indicates a radius component of the radio frequency signal.
A method for determining a noise shaped quantized parameter contributing to generation of an output signal comprises estimating an error within the output signal using a quantization of the parameter and a quantization of a further parameter contributing to generation of the output signal. The quantization of the parameter is used as the noise shaped quantized parameter according to a selection criterion.1. (canceled) 2. A method for determining a noise shaped quantized parameter contributing to generation of an output signal, comprising: calculating a first estimated error within the output signal using a first quantization of the parameter and a first quantization of the further parameter; calculating a second estimated error within the output signal using a second quantization of the parameter and the first quantization of the further parameter; and using the quantization of the parameter resulting in the lower estimated error as the noise shaped quantized parameter. 3. The method of claim 2, further comprising: calculating a third estimated error within the output signal using the first quantization of the parameter and a second quantization of the further parameter; calculating a fourth estimated error within the output signal using the second quantization of the parameter and the second quantization of the further parameter; and using the quantization of the parameter and the quantization of the further parameter resulting in the lowest estimated error as the noise shaped quantized parameter and as a noise shaped quantized further parameter. 4. The method of claim 2, wherein the first quantization corresponds to the smallest possible distance between the first quantization of the parameter and the parameter; and wherein the second quantization corresponds to the second smallest possible distance between the second quantization of the parameter and the parameter for the quantization scheme used. 5. 
The method of claim 2, wherein calculating the first estimated error and the second estimated error comprises: using information on a notch frequency at which the error within a spectrum of the output signal shall be small. 6. The method of claim 2, wherein the parameter corresponds to a phase value of an output signal and wherein the further parameter corresponds to a radius value of the output signal. 7. The method of claim 6, wherein the calculation of the first estimated error Qnew1 and of the second estimated error Qnew2 is based on the following expressions: Qnew1 = Qold + radclosest(t)*2*(Treal − Tclosest)*[cos(w0t) − I*sin(w0t)]; Qnew2 = Qold + radclosest(t)*2*(Treal − TSecondClosest)*[cos(w0t) − I*sin(w0t)]; wherein radclosest corresponds to the radius value, Treal corresponds to the parameter, Tclosest corresponds to the first quantization of the parameter Treal, TSecondClosest corresponds to the second quantization of the parameter Treal, w0 corresponds to the notch frequency and Qold corresponds to a remaining error contribution of an antecedent quantization. 8. A method for generating a radio frequency signal based on a phase value as a parameter and a radius value as a further parameter, comprising: calculating a first estimated error within the radio frequency signal using a first quantization of the phase value and a quantization of the radius value; calculating a second estimated error within the radio frequency signal using a second quantization of the phase value and the quantization of the radius value; and using the quantization of the phase value resulting in the lower estimated error as a noise shaped quantized phase value. 9. The method of claim 7, wherein the calculation of the first estimated error and the calculation of the second estimated error use a remaining error contribution of an antecedent quantization. 10. 
The method of claim 8, further comprising: using the noise shaped quantized phase value to determine a first analog signal having a phase characteristic depending on the quantized phase value; and using the quantization of the radius value to determine a second analog signal having an amplitude depending on the quantized radius value. 11. The method of claim 10, further comprising: combining the first analog signal and the second analog signal to generate the radio frequency signal. 12. A noise shaper for determining a noise shaped quantized parameter contributing to generation of an output signal, comprising: an error estimation component configured to receive a parameter and a further parameter contributing to the generation of the output signal, the error estimation component comprising: quantization circuitry configured to calculate a first quantization of the parameter, a second quantization of the parameter, and a quantization of the further parameter; error calculation circuitry configured to calculate a first estimated error within the output signal using the first quantization of the parameter and the quantization of the further parameter and to calculate a second estimated error within the output signal using the second quantization of the parameter and the quantization of the further parameter; and a decision making component configured to compare the first estimated error and the second estimated error to decide on the use of the quantization of the parameter resulting in the lower estimated error. 13. 
The noise shaper of claim 12, wherein the quantization circuitry is further configured to calculate a first quantization of the further parameter and a second quantization of the further parameter; the error calculation circuitry is further configured to calculate a third estimated error within the output signal using the first quantization of the parameter and the second quantization of the further parameter and a fourth estimated error within the output signal using the second quantization of the parameter and the second quantization of the further parameter; and the decision making component is further configured to decide on the use of the quantization of the parameter and the quantization of the further parameter resulting in the lowest estimated error. 14. A radio frequency generator for generating a radio frequency signal based on a parameter and on a further parameter, comprising: an error estimation component configured to receive a parameter and a further parameter contributing to the generation of the output signal, the error estimation component comprising: quantization circuitry configured to calculate a first quantization of the parameter, a second quantization of the parameter, and a quantization of the further parameter; error calculation circuitry configured to calculate a first estimated error within the output signal using the first quantization of the parameter and the quantization of the further parameter and to calculate a second estimated error within the output signal using the second quantization of the parameter and the quantization of the further parameter; and a decision making component configured to compare the first estimated error and the second estimated error to decide on the use of the quantization of the parameter resulting in the lower estimated error as a noise shaped quantized parameter; and a signal combiner configured to combine a first signal depending on the noise shaped quantized parameter and a second signal depending on 
the quantization of the further parameter to provide the output signal. 15. The radio frequency generator of claim 14, wherein the signal combiner comprises: a first digital to analog converter configured to provide an analog representation of the noise shaped quantized parameter; a second digital to analog converter configured to provide an analog representation of the quantization of the further parameter; and a signal generator configured to combine the analog representation of the noise shaped quantized parameter and the analog representation of the quantized further parameter. 16. The radio frequency generator of claim 14, wherein the parameter indicates a phase component of the radio frequency signal and the further parameter indicates a radius component of the radio frequency signal.
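The candidate-selection step recited in claims 2 and 7 above (estimate the error for the closest and second-closest quantizations, then keep the one with the lower estimated error) can be sketched numerically. The uniform grid step, the single-sample rotation term, and all function and variable names are assumptions made for this illustration only.

```python
import cmath

def noise_shaped_phase(t_real, rad_closest, w0, t, q_old, step=1.0):
    """Pick between the closest and second-closest quantization of t_real.

    Follows the structure of the claimed error estimates:
        Q_new = Q_old + rad*2*(T_real - T_q)*[cos(w0*t) - i*sin(w0*t)]
    Returns (chosen quantization, its accumulated error estimate).
    """
    t_closest = round(t_real / step) * step
    # The second-closest grid point lies on the other side of t_real.
    t_second = t_closest + step if t_real >= t_closest else t_closest - step

    rot = cmath.exp(-1j * w0 * t)  # cos(w0*t) - i*sin(w0*t)
    q_new1 = q_old + rad_closest * 2 * (t_real - t_closest) * rot
    q_new2 = q_old + rad_closest * 2 * (t_real - t_second) * rot

    # Keep the candidate with the lower estimated error magnitude.
    if abs(q_new1) <= abs(q_new2):
        return t_closest, q_new1
    return t_second, q_new2


# With no accumulated error the closest grid point wins:
print(noise_shaped_phase(0.3, 1.0, 1.0, 0.0, 0 + 0j)[0])  # 0.0
```

The noise-shaping effect comes from carrying q_old forward between samples: a large accumulated error of opposite sign can make the second-closest quantization the better choice, pushing quantization noise away from the notch frequency w0.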
2,600
10,926
10,926
15,694,884
2,689
A portable emergency alert device, comprising: a sensor for receiving an indication of an emergency situation of a user; a first transmitter for transmitting the alert to an emergency center via a stationary relay device; a second transmitter for transmitting the alert to the emergency center via a portable relay device; and a processor for processing said indication to instruct transmission of the alert by at least one of the first transmitter and the second transmitter.
1. A portable emergency alert device, comprising: a sensor for receiving an indication of an emergency situation of a user; a first transmitter for transmitting said alert to an emergency center via a stationary relay device; a second transmitter for transmitting said alert to said emergency center via a portable relay device; and a processor for processing said indication to instruct transmission of said alert by at least one of said first transmitter and said second transmitter. 2. The portable emergency alert device of claim 1, wherein said second transmitter is a Bluetooth® transmitter; said portable relay device is a mobile phone; said first transmitter is a radio transmitter; and said stationary relay device is a radio hub device. 3. The portable emergency alert device of claim 1, wherein said first transmitter and said stationary relay device use a different communication protocol than said second transmitter and said portable relay device. 4. The portable emergency alert device of claim 1, wherein said first transmitter and said stationary relay device use a different communication method than said second transmitter and said portable relay device. 5. The portable emergency alert device of claim 1, wherein said portable emergency alert device is a wearable device. 6. The portable emergency alert device of claim 5, wherein said portable emergency alert device is one of a bracelet and a pendant. 7. The portable emergency alert device of claim 1, wherein said sensor includes at least one of a physical button; a touch screen; and a voice sensor and analysis component, for receiving said indication from said user. 8. The portable emergency alert device of claim 1, wherein said sensor includes at least one sensor element for monitoring health parameters of said user. 9. The emergency alert device of claim 1, further comprising a battery which provides energy for said first transmitter and said second transmitter for at least one year. 10. 
A method for alerting emergency situations via dual-communication, comprising: receiving an indication of an emergency situation of a user of a portable emergency alert device, said portable emergency alert device includes at least two transmitters each configured to communicate with one of at least two relay devices; processing said indication and instructing transmission of an alert of said emergency situation by at least one of said at least two transmitters; and transmitting said alert from one of said at least two transmitters to a corresponding one of said at least two relay devices which is configured to transmit said alert to an emergency center. 11. The method of claim 10, wherein said processing includes selecting one of said at least two transmitters to transmit said alert. 12. The method of claim 10, further comprising: transmitting said alert from another one of said at least two transmitters to a corresponding another one of said at least two relay devices which is configured to transmit said alert to said emergency center. 13. A system for alerting emergency situations via dual-communication, comprising: a portable emergency alert device, comprising: a sensor for receiving an indication of an emergency situation of a user; a first transmitter for transmitting said alert; a second transmitter for transmitting said alert; and a processor for processing said indication to instruct transmission of said alert by at least one of said first transmitter and said second transmitter; a stationary relay device for receiving said alert from said first transmitter, and transmitting said alert to an emergency center; and a portable relay device for receiving said alert from said second transmitter, and transmitting said alert to said emergency center.
A portable emergency alert device, comprising: a sensor for receiving an indication of an emergency situation of a user; a first transmitter for transmitting the alert to an emergency center via a stationary relay device; a second transmitter for transmitting the alert to the emergency center via a portable relay device; and a processor for processing said indication to instruct transmission of the alert by at least one of the first transmitter and the second transmitter. 1. A portable emergency alert device, comprising: a sensor for receiving an indication of an emergency situation of a user; a first transmitter for transmitting said alert to an emergency center via a stationary relay device; a second transmitter for transmitting said alert to said emergency center via a portable relay device; and a processor for processing said indication to instruct transmission of said alert by at least one of said first transmitter and said second transmitter. 2. The portable emergency alert device of claim 1, wherein said second transmitter is a Bluetooth® transmitter; said portable relay device is a mobile phone; said first transmitter is a radio transmitter; and said stationary relay device is a radio hub device. 3. The portable emergency alert device of claim 1, wherein said first transmitter and said stationary relay device use a different communication protocol than said second transmitter and said portable relay device. 4. The portable emergency alert device of claim 1, wherein said first transmitter and said stationary relay device use a different communication method than said second transmitter and said portable relay device. 5. The portable emergency alert device of claim 1, wherein said portable emergency alert device is a wearable device. 6. The portable emergency alert device of claim 5, wherein said portable emergency alert device is one of a bracelet and a pendant. 7. 
The portable emergency alert device of claim 1, wherein said sensor includes at least one of a physical button; a touch screen; and a voice sensor and analysis component, for receiving said indication from said user. 8. The portable emergency alert device of claim 1, wherein said sensor includes at least one sensor element for monitoring health parameters of said user. 9. The emergency alert device of claim 1, further comprising a battery which provides energy for said first transmitter and said second transmitter for at least one year. 10. A method for alerting emergency situations via dual-communication, comprising: receiving an indication of an emergency situation of a user of a portable emergency alert device, said portable emergency alert device includes at least two transmitters each configured to communicate with one of at least two relay devices; processing said indication and instructing transmission of an alert of said emergency situation by at least one of said at least two transmitters; and transmitting said alert from one of said at least two transmitters to a corresponding one of said at least two relay devices which is configured to transmit said alert to an emergency center. 11. The method of claim 10, wherein said processing includes selecting one of said at least two transmitters to transmit said alert. 12. The method of claim 10, further comprising: transmitting said alert from another one of said at least two transmitters to a corresponding another one of said at least two relay devices which is configured to transmit said alert to said emergency center. 13. 
A system for alerting emergency situations via dual-communication, comprising: a portable emergency alert device, comprising: a sensor for receiving an indication of an emergency situation of a user; a first transmitter for transmitting said alert; a second transmitter for transmitting said alert; and a processor for processing said indication to instruct transmission of said alert by at least one of said first transmitter and said second transmitter; a stationary relay device for receiving said alert from said first transmitter, and transmitting said alert to an emergency center; and a portable relay device for receiving said alert from said second transmitter, and transmitting said alert to said emergency center.
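The processor's role in the claims above is to instruct transmission by at least one of the two transmitters. A minimal sketch of that selection logic, with a preference for the stationary-relay path and fallback to the portable-relay path, might look like the following; the `Transmitter` class and `dispatch_alert` helper are hypothetical names, not part of the claimed device.

```python
from dataclasses import dataclass

@dataclass
class Transmitter:
    """Hypothetical stand-in for one of the device's two radios."""
    name: str
    available: bool

    def send(self, alert: str) -> bool:
        # A real device would key the radio or Bluetooth link here and
        # report whether the relay device acknowledged the alert.
        return self.available

def dispatch_alert(alert: str, first: Transmitter, second: Transmitter) -> list[str]:
    """Instruct transmission by at least one of the two transmitters,
    preferring the first (stationary-relay) path and falling back to
    the second (portable-relay) path."""
    used = []
    if first.send(alert):
        used.append(first.name)
    elif second.send(alert):
        used.append(second.name)
    return used
```

Claim 12 also contemplates transmitting on both paths; extending the sketch to always try the second transmitter as well would cover that variant.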
2,600
10,927
10,927
16,445,853
2,689
A vehicle accessory device operating system includes a plurality of accessory devices ( 28, 30, 32 ) housed or to be housed in a vehicle, a communication unit ( 36 ), an accessory device data transmission system ( 38 ) and at least one mobile operating unit ( 40 ). The accessory device data transmission system ( 38 ) connects the plurality of accessory devices ( 28, 30, 32 ) for the exchange of data with the communication unit ( 36 ). The communication unit ( 36 ) is configured for the exchange of data with the at least one mobile operating unit ( 40 ) via a first wireless data transmission system ( 42 ). The communication unit ( 36 ) is configured for the exchange of data with a remote access system ( 46 ) via a second wireless data transmission system ( 44 ).
1. A vehicle accessory device operating system comprising: a plurality of accessory devices housed in a vehicle or to be housed in a vehicle; a communication unit; an accessory device data transmission system connecting the plurality of accessory devices to the communication unit for an exchange of data between the plurality of accessory devices and the communication unit; and at least one mobile operating unit, wherein the communication unit is configured to exchange data with the at least one mobile operating unit via a first wireless data transmission system, and the communication unit is configured to exchange data with a remote access system via a second wireless data transmission system. 2. A vehicle accessory device operating system in accordance with claim 1, wherein the accessory device data transmission system comprises a third wireless data transmission system. 3. A vehicle accessory device operating system in accordance with claim 2, wherein the third wireless data transmission system comprises a Bluetooth data transmission system. 4. A vehicle accessory device operating system in accordance with claim 1, wherein the accessory device data transmission system comprises a wired data transmission system. 5. A vehicle accessory device operating system in accordance with claim 1, wherein the first wireless data transmission system comprises a Bluetooth data transmission system. 6. A vehicle accessory device operating system in accordance with claim 1, wherein the second wireless data transmission system comprises a mobile wireless data transmission system. 7. A vehicle accessory device operating system in accordance with claim 1, wherein the remote access system is configured for access over the Internet. 8. 
A vehicle comprising: a plurality of vehicle accessory devices; a plurality of vehicle device actuating units and/or a plurality of vehicle device operating units; a vehicle device data transmission system for the exchange of data between the plurality of vehicle devices and the plurality of vehicle device actuating units and/or the plurality of vehicle device operating units; and a vehicle accessory device operating system comprising: a plurality of accessory devices housed in the vehicle or connectable with the vehicle; a communication unit; an accessory device data transmission system connecting the plurality of accessory devices to the communication unit for an exchange of data between the plurality of accessory devices and the communication unit; and at least one mobile operating unit, wherein the communication unit is configured to exchange data with the at least one mobile operating unit via a first wireless data transmission system, and the communication unit is configured to exchange data with a remote access system via a second wireless data transmission system. 9. A vehicle in accordance with claim 8, wherein no exchange of data takes place between the accessory device data transmission system and the vehicle device data transmission system. 10. A vehicle in accordance with claim 8, wherein the vehicle device data transmission system comprises a wired data transmission system including a data bus. 11. 
A vehicle accessory device operating system comprising: a plurality of vehicle accessory devices; an accessory device data transmission system; at least one mobile operating unit; and a communication unit comprising a data transmission system transceiver arrangement, wherein the accessory device data transmission system connects the plurality of accessory devices to the communication unit via the data transmission system transceiver arrangement for an exchange of data between the plurality of accessory devices and the communication unit, a first wireless data transmission system connects the at least one mobile operating unit to the communication unit via the data transmission system transceiver arrangement for an exchange of data between the at least one mobile operating unit and the communication unit and a second wireless data transmission system connects a remote access system to the communication unit via the data transmission system transceiver arrangement for an exchange of data between the remote access system and the communication unit. 12. A vehicle accessory device operating system in accordance with claim 11, wherein: the data transmission system transceiver arrangement comprises a Bluetooth transceiver for data exchange with the at least one mobile operating unit via a Bluetooth wireless data transmission system as the first wireless data transmission system; and the data transmission system transceiver arrangement comprises a cellular network transceiver for data exchange with the remote access system via a cellular network as the second wireless data transmission system. 13. A vehicle accessory device operating system in accordance with claim 12, wherein the accessory device data transmission system comprises a third wireless data transmission system. 14. A vehicle accessory device operating system in accordance with claim 13, wherein the third wireless data transmission system comprises a Bluetooth data transmission system. 15. 
A vehicle accessory device operating system in accordance with claim 11, wherein the accessory device data transmission system comprises a wired data transmission system. 16. A vehicle accessory device operating system in accordance with claim 11, wherein the first wireless data transmission system comprises a Bluetooth data transmission system. 17. A vehicle accessory device operating system in accordance with claim 11, wherein the second wireless data transmission system comprises a mobile wireless data transmission system. 18. A vehicle accessory device operating system in accordance with claim 11, wherein the remote access system is configured for access over the Internet.
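The communication unit in these claims acts as a gateway: one transceiver arrangement serves the Bluetooth link to the mobile operating unit and the cellular link to the remote access system, with the accessory devices reachable through either. A minimal sketch of that routing, assuming a hypothetical `CommunicationUnit` class and a simple read/write command model not taken from the claims:

```python
class CommunicationUnit:
    """Hypothetical gateway sketch: accessory state is held behind one
    unit, and requests may arrive on either wireless system."""

    def __init__(self):
        self.accessories = {}  # accessory name -> current state

    def register(self, name, state):
        self.accessories[name] = state

    def handle(self, link, command, name, value=None):
        # 'link' records which wireless system carried the request:
        # "bluetooth" (first system, mobile operating unit) or
        # "cellular" (second system, remote access system).
        if command == "read":
            return {"link": link, "name": name, "state": self.accessories[name]}
        if command == "write":
            self.accessories[name] = value
            return {"link": link, "name": name, "state": value}
        raise ValueError(f"unknown command: {command}")
```

A write arriving over the cellular link is immediately visible to a read over the Bluetooth link, which is the point of routing both systems through one communication unit.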
2,600
10,928
10,928
16,354,679
2,667
A method of training a lifeguard to properly view an area of a swimming pool or body of water and recognize a swimmer/bather in distress. The method includes: positioning submersible devices or other objects on a bottom of the swimming pool or body of water according to an established grid or pattern; observing the submersible devices to make observations; analyzing the observations to evaluate the ability to see the submersible devices under varying environmental and density conditions. The observation trains the lifeguard to recognize the swimmer/bather in distress in the swimming pool or body of water to minimize the risk of the swimmer/bather drowning.
1. A method of facilitating a lifeguard to supervise swimmers/bathers in a swimming pool or body of water and search for a swimmer/bather in distress, the method comprising: mapping the shape of the swimming pool or body of water to establish a grid or pattern; positioning submersible devices on a bottom of the swimming pool or body of water according to the established grid or pattern, the submersible devices are positioned on the bottom of the swimming pool or body of water at the same time as the swimmers/bathers are in the swimming pool or body of water, the submersible devices simulate submerged swimmers/bathers; observing the submersible devices to make observations; analyzing the observations to evaluate the ability to see the submersible devices under varying environmental and density conditions; wherein the observation trains the lifeguard to effectively search and recognize the swimmer/bather in distress in the swimming pool or body of water to minimize the risk of the swimmer/bather suffering a fatal drowning. 2. The method of facilitating a lifeguard as recited in claim 1, wherein the submersible devices have base sections and movable members, the movable members have a buoyancy and are moved by water in the swimming pool or body of water to simulate the swimmer/bather in distress at the bottom of the swimming pool or body of water. 3. The method of facilitating a lifeguard as recited in claim 1, wherein the submersible devices are sized to approximate the size of a 2½ to 3 year old child in a fetal position. 4. The method of facilitating a lifeguard as recited in claim 1, wherein the submersible devices have multiple colors to preclude false positive or false negative results when training with the submersible devices. 5. The method of facilitating a lifeguard as recited in claim 1, further comprising including any blind spots that may be present in the swimming pool or body of water to establish the grid or pattern. 6. 
The method of facilitating a lifeguard as recited in claim 1, further comprising observing the submersible devices under different conditions consisting of surface turbulence created by swimmers/bathers, waves, wind, water spray features; turbidity caused by air entrainment from waves or bubble features; reflected images; glare from light; lighting levels, design and positioning of light or lighting sources; water depth; background color of the pool walls and bottoms; and sightline obstructions such as in-water fixtures, support pillars, on-deck play structures; or combinations thereof. 7. The method of facilitating a lifeguard as recited in claim 1, wherein the submersible devices have base sections with weighted portions and movable members extending from the base sections, the movable members have less weight than the base sections, the movable members have sufficient buoyancy to move and sway as current of water in the swimming pool or body of water changes. 8. The method of facilitating a lifeguard as recited in claim 7, wherein the movable members have a different color than the base sections to preclude false positive or false negative results when observed. 9. The method of facilitating a lifeguard as recited in claim 8, wherein two movable members extend from each base section of the base sections. 10. 
A method of training a lifeguard to properly supervise swimmers/bathers in a swimming pool or body of water and search for a swimmer/bather in distress, the method comprising: establishing a grid or pattern for the swimming pool or body of water, the grid or pattern indicating where submersible devices are to be positioned; positioning the submersible devices in the swimming pool or body of water according to the established grid or pattern, the submersible devices are positioned in the swimming pool or body of water at the same time as the swimmers/bathers are in the swimming pool or body of water, the submersible devices simulate submerged swimmers/bathers; observing the submersible devices from different locations and under varying environmental and density conditions; wherein the observations facilitate training of the lifeguard to search for the swimmer/bather in distress in the swimming pool or body of water to minimize the risk of the swimmer/bather in distress suffering a fatal drowning. 11. The method of training a lifeguard to properly supervise swimmers/bathers in a swimming pool or body of water and search for the swimmer/bather in distress as recited in claim 10, further comprising determining the locations for observing by establishing the grid or pattern based on the shape of the swimming pool or body of water, including any blind spots that may be present. 12. 
The method of training a lifeguard to properly supervise swimmers/bathers in a swimming pool or body of water and search for the swimmer/bather in distress as recited in claim 11, wherein the varying environmental and density conditions consist of surface turbulence created by swimmers/bathers, waves, wind, water spray features; turbidity caused by air entrainment from waves or bubble features; reflected images; glare from light; lighting levels, design and positioning of light or lighting sources; water depth; background color of the pool walls and bottoms; sightline obstructions such as in-water fixtures, support pillars, on-deck play structures; or a combination thereof. 13. The method of training a lifeguard to properly supervise swimmers/bathers in a swimming pool or body of water and search for the swimmer/bather in distress as recited in claim 10, wherein the submersible devices are sized to approximate the size of a 2½ to 3 year old child in a fetal position. 14. The method of training a lifeguard to properly supervise swimmers/bathers in a swimming pool or body of water and search for the swimmer/bather in distress as recited in claim 10, wherein the submersible devices have multiple colors to preclude false positive or false negative results when training with the submersible devices. 15. The method of training a lifeguard to properly supervise swimmers/bathers in a swimming pool or body of water and search for the swimmer/bather in distress as recited in claim 10, wherein the submersible devices have base sections and movable members, the movable members have a buoyancy and are moved by water in the swimming pool or body of water to simulate the swimmer/bather in distress at the bottom of the swimming pool or body of water. 16. 
The method of training a lifeguard to properly supervise swimmers/bathers in a swimming pool or body of water and search for the swimmer/bather in distress as recited in claim 15, wherein the base sections have weighted portions, the movable members extend from the base sections, the movable members have less weight than the base sections. 17. The method of training a lifeguard to properly supervise swimmers/bathers in a swimming pool or body of water and search for the swimmer/bather in distress as recited in claim 16, wherein the movable members have a different color than the base sections to preclude false positive or false negative results when observed. 18. The method of training a lifeguard to properly supervise swimmers/bathers in a swimming pool or body of water and search for the swimmer/bather in distress as recited in claim 17, wherein two movable members extend from each base section of the base sections. 19. A method of training a lifeguard to properly view an area of a swimming pool or body of water, the method comprising: mapping the shape of the swimming pool or body of water to establish a grid or pattern; positioning submersible devices on a bottom of the swimming pool or body of water according to the established grid or pattern, the submersible devices are positioned on the bottom of the swimming pool or body of water, the submersible devices have movable members which are moved by water in the swimming pool or body of water to simulate a swimmer/bather in distress at the bottom of the swimming pool or body of water; viewing all the submersible devices according to the grid or pattern; analyzing time required to view all of the submersible devices to evaluate the ability of the lifeguard to timely see the submersible devices; wherein the viewing trains the lifeguard to search for the swimmer/bather in distress in the swimming pool or body of water to minimize the risk of the swimmer/bather in distress suffering a fatal drowning. 20. 
The method of training a lifeguard to properly view an area of a swimming pool or body of water as recited in claim 19, comprising viewing the submersible devices under varying environmental conditions.
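The mapping-and-positioning step of these claims amounts to laying out device positions over the pool bottom. A minimal sketch for the simplest case of a rectangular pool, with a hypothetical `placement_grid` helper (the claims themselves work from the mapped pool shape, including blind spots):

```python
def placement_grid(length_m, width_m, spacing_m):
    """Hypothetical layout helper: evenly spaced submersible-device
    positions on a rectangular pool bottom, centred within the pool.
    A real plan would start from the mapped pool shape and fold in
    any blind spots identified from the lifeguard's station."""
    def axis(extent):
        count = max(1, int(extent // spacing_m))
        margin = (extent - (count - 1) * spacing_m) / 2
        return [margin + i * spacing_m for i in range(count)]
    return [(x, y) for x in axis(length_m) for y in axis(width_m)]
```

For a 10 m by 4 m pool at 2 m spacing this yields a centred 5 by 2 grid of ten positions, each at least 1 m from the nearest wall.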
2,600
10,929
10,929
16,418,607
2,685
A cooking assistance appliance can include a body, at least one camera, at least one sensor, and an imaging device for capturing an image of a cooking appliance. The cooking appliance can provide for monitoring use of the cooking appliance, as well as assisting the user in operating the cooking appliance. Additionally, the cooking assistance appliance can be integrated into a hood or a microwave oven and vent combination above a stovetop.
1-80. (canceled) 81. A cooking assistance appliance for use with a cooking appliance, the cooking assistance appliance comprising: a housing configured to be positioned adjacent to a cooking appliance within a line of sight of a cook surface, wherein the housing includes a head connected to a neck extending from a base; at least one image sensor provided in the head for generating image data of the cooking appliance; a controller including a processor configured to process the image data to determine a condition of a food item in a cooking vessel on the cooking appliance, said condition relating to a liquid volume or a liquid level measurement; and a communication interface for communicating the determined condition of the food item in human understandable form. 82. The cooking assistance appliance of claim 81 further comprising a temperature sensor in communication with the controller for generating thermal data of the food item or the cook surface, and wherein the controller is configured to process the thermal data to determine a condition of the food item in the cooking vessel on the cook surface. 83. The cooking assistance appliance of claim 81 wherein said communication interface comprises a projector. 84. The cooking assistance appliance of claim 81 wherein said communication interface comprises a speaker for audible communication with a user. 85. The cooking assistance appliance of claim 84 wherein the communication interface also comprises a microphone. 86. The cooking assistance appliance of claim 81 further comprising a proximity sensor for sensing a local position of a user. 87. The cooking assistance appliance of claim 81 further comprising a temperature probe in communication with the controller and an infrared sensor to generate temperature data for determining the condition of the food item. 88. 
The cooking assistance appliance of claim 87 wherein the temperature probe can generate internal temperature data for the food item and the infrared sensor can determine external temperature data for the food item, with both the temperature probe and the infrared sensor used to determine a doneness of the food item. 89. The cooking assistance appliance of claim 81 wherein the controller further includes a wireless communication module for communication between the cooking assistance appliance and the cooking appliance. 90. The cooking assistance appliance of claim 89 wherein the cooking assistance appliance can operate the cooking appliance via the wireless communication module. 91. The cooking assistance appliance of claim 81 wherein the at least one image sensor comprises at least two, 2D visible light image sensors in spaced relationship, each 2D visible light image sensor outputting 2D image data, and wherein the processor is configured to process the 2D image data from both 2D visible light image sensors into a 3D image. 92. The cooking assistance appliance of claim 81 wherein the cooking assistance appliance can map a cook surface based upon a setup algorithm. 93. The cooking assistance appliance of claim 92 wherein the mapped cook surface can include the cooking vessel. 94. The cooking assistance appliance of claim 81 wherein the condition is a liquid mess condition associated with the cooking vessel on the cook surface or a sous-vide status for a food item contained within a volume of liquid within the cooking vessel, and wherein the processor is configured with a setup algorithm that uses the image data to determine a boundary of the cooking vessel and a location of one or more heating zones. 95. The cooking assistance appliance of claim 81 wherein the cooking assistance appliance is a stand-alone appliance, or an appliance mountable to an environment above or near the cooking appliance by way of a movable support, a fixed support or a pivotable support. 96. 
The cooking assistance appliance of claim 95 wherein the cooking assistance appliance is the stand-alone appliance and includes a body positionable adjacent to the cooking appliance for measuring a physical state of the food item being cooked in the cooking vessel, a condition of the food item, or an attribute of the food item or the cooking vessel, with the at least one image sensor. 97. The cooking assistance appliance of claim 96 wherein the head and the base are rotatable relative to each other via the neck. 98. The cooking assistance appliance of claim 97 wherein the image sensor and an infrared sensor are provided in the head for a downward view of the cooking appliance. 99. The cooking assistance appliance of claim 98 further comprising a light array arranged on the neck. 100. The cooking assistance appliance of claim 99 wherein the light array is operable by the controller to communicate a status of operation of the cooking appliance to a user through visual patterning or colors formed by the light array. 101. 
A cooking assistance appliance for use with a cooking appliance, the cooking assistance appliance comprising: a housing sized to be positioned on a counter adjacent to the cooking appliance, positionable within a line of sight of a cook surface of the cooking appliance, with the housing including a head, a neck, and a base; at least one image sensor for generating image data of a food item on the cooking appliance provided in the head to view the cook surface via the line of sight; at least one temperature sensor for generating temperature data of the food item on the cooking appliance provided in the head to view the cook surface via the line of sight; a controller including a processor configured to process the image data and the temperature data to determine a condition of a food item, said condition relating to a cooking status of the food item; and a communication interface for communicating the determined condition of the food item in human understandable form. 102. A cooking assistance appliance for use with a cooking appliance, the cooking assistance appliance comprising: a housing including a head positionable within a line of sight of a cook surface of the cooking appliance; at least one image sensor for generating image data provided in the head; a controller including a processor configured to process the image data to determine a condition of a food item on the cook surface, said condition relating to a liquid volume or a liquid level measurement; and a communication interface for communicating the determined condition of the food item to a user.
2,600
10,930
10,930
16,493,020
2,675
In one example in accordance with the present disclosure, a method of identifying failing components using an ultrasonic microphone is described. According to the method, an ultrasonic audio signal generated during operation of a device is received at an ultrasonic microphone disposed within the device. The received ultrasonic audio signal is compared against a baseline ultrasonic audio signal for the device to detect deviations between the received ultrasonic audio signal and the baseline ultrasonic audio signal. Based on detected deviations between the received ultrasonic audio signal and the baseline ultrasonic audio signal being greater than a threshold amount, a failing component within the device is identified.
1. A method comprising: receiving, at an ultrasonic microphone disposed in a device, an ultrasonic audio signal generated during an operation of the device; comparing the received ultrasonic audio signal against a baseline ultrasonic audio signal for the device to detect deviations between the received ultrasonic audio signal and the baseline ultrasonic audio signal; and based on detected deviations between the received ultrasonic audio signal and the baseline ultrasonic audio signal being greater than a threshold amount, identifying a failing component within the device. 2. The method of claim 1: further comprising converting the received ultrasonic audio signal from a time domain representation into a frequency domain representation; and wherein comparing the received ultrasonic audio signal against the baseline ultrasonic audio signal comprises comparing frequencies and amplitudes found in the frequency domain representation of the received ultrasonic audio signal against frequencies and amplitudes found in the frequency domain representation of the baseline ultrasonic audio signal to detect deviations. 3. The method of claim 2, wherein the deviations comprise deviations in amplitude, deviations in frequency, or combinations thereof. 4. The method of claim 2, wherein the deviations comprise at least one of: an unexpected frequency having an amplitude greater than a predetermined amount as compared against expected frequencies in the baseline ultrasonic audio signal; and an unexpected amplitude of an expected frequency as found in the baseline ultrasonic audio signal. 5. The method of claim 1, further comprising collecting ultrasonic audio signals from a number of similar devices to form the baseline ultrasonic audio signal. 6. 
The method of claim 1, wherein identifying a failing component within the device comprises, upon detection of a deviation between the received ultrasonic audio signal and the baseline ultrasonic audio signal for the device being greater than a threshold amount, executing a localization operation to identify the failing component. 7. The method of claim 6, wherein the localization operation comprises at least one operation selected from the group consisting of: iteratively operating various components of the printing device to identify the failing component; and analyzing at least one of: characteristics of the deviation; and a timing of the deviation to identify the failing component. 8. A printing system comprising: a printing device to form printed marks on a medium by depositing a printing compound on the medium; at least one ultrasonic microphone, disposed within the printing device, to receive an ultrasonic audio signal generated during the operation of the printing device; a database comprising a number of baseline ultrasonic audio signals for the printing device; and a controller to: compare the received ultrasonic audio signal against a baseline ultrasonic audio signal for the printing device; and based on detected deviations between the received ultrasonic audio signal and the baseline ultrasonic audio signal for the printing device, identify a failing component within the printing device. 9. The printing system of claim 8, wherein the printing system comprises multiple ultrasonic microphones. 10. The printing system of claim 9, wherein the multiple ultrasonic microphones are positioned at different locations within the printing device. 11. The printing system of claim 9, wherein the multiple ultrasonic microphones are tuned to different frequencies within the ultrasonic spectrum. 12. The printing system of claim 8, wherein the database is indexed based on at least one of: an age of the printing device; and a period of operation of the printing device. 13. 
The printing system of claim 8, wherein the database identifies deviations based on at least one of: a timing of the detected deviation; and characteristics of the deviation. 14. A computing system comprising: a processor; a machine-readable storage medium coupled to the processor; and an instruction set, the instruction set being stored in the machine-readable storage medium to be executed by the processor, wherein the instruction set comprises: instructions to receive, at an ultrasonic microphone disposed in a printing device, an ultrasonic audio signal generated during the operation of the printing device; instructions to convert the received ultrasonic audio signal from a time domain into a frequency domain; instructions to compare a frequency domain representation of the received ultrasonic audio signal against a frequency domain representation of a baseline ultrasonic audio signal for the printing device; and instructions to identify deviations between the frequency domain representation of the received ultrasonic audio signal and the frequency domain representation of the baseline ultrasonic audio signal, wherein the deviations comprise deviations in amplitude, frequency, or combinations thereof; instructions to, based on characteristics of the identified deviations between the received ultrasonic audio signal and the baseline ultrasonic audio signal for the printing device being greater than a threshold amount, identify a failing component within the printing device; and instructions to provide a notification of the failing component within the printing device. 15. The system of claim 14, wherein the instruction set further comprises instructions to update the baseline ultrasonic audio signal in the database.
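The comparison step these claims recite (a received spectrum checked bin-by-bin against a baseline, with unexpected frequencies or amplitudes flagged when they exceed a threshold, per claims 1, 2 and 4) can be sketched in a few lines. This is a minimal illustration, not the application's implementation; the function name, the dict-based spectrum format, and the sample values are all assumptions.

```python
def find_deviations(received, baseline, threshold):
    """Return frequency bins whose amplitude deviates beyond threshold.

    received, baseline: dict mapping frequency (Hz) -> amplitude.
    A frequency present in only one spectrum is treated as amplitude 0
    in the other, so a strong unexpected tone is flagged as a deviation
    (the "unexpected frequency" case of dependent claim 4).
    """
    deviations = {}
    for freq in set(received) | set(baseline):
        delta = abs(received.get(freq, 0.0) - baseline.get(freq, 0.0))
        if delta > threshold:
            deviations[freq] = delta
    return deviations

# Illustrative spectra: the 30 kHz bin's amplitude has jumped and an
# unexpected 40 kHz tone has appeared, so both bins are flagged.
baseline = {20000: 1.0, 25000: 0.5, 30000: 0.2}
received = {20000: 1.05, 25000: 0.5, 30000: 0.9, 40000: 0.6}
flagged = find_deviations(received, baseline, threshold=0.3)
```

A real system would first convert the time-domain microphone signal to a frequency-domain representation (claim 2) and then run a localization operation over the flagged deviations (claims 6-7); only the threshold comparison itself is shown here.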
2,600
10,931
10,931
16,312,262
2,642
Systems and methods of determining a reporting configuration associated with a coverage level of a wireless device are provided. In one exemplary embodiment, a method performed by a wireless device (105, 200, 300, 400, 1000) in a wireless communication system (100) includes obtaining (503) information indicating a coverage level (113a-d) of the wireless device. Further, the method includes determining (507), from amongst different reporting configurations (115a-d) respectively associated with different coverage levels of the wireless device, the reporting configuration associated with the coverage level indicated by the obtained information. Also, the method includes reporting (511) a measurement result using the determined reporting configuration.
1-46. (canceled) 47. A method performed by a wireless device in a wireless communication system, the method comprising: obtaining information indicating a coverage level of the wireless device; determining, from amongst different power headroom report mappings respectively associated with different coverage levels of the wireless device and differing from each other with respect to reporting resolutions and/or reporting ranges of power headroom information, the power headroom report mapping associated with the coverage level indicated by the obtained information; and reporting the power headroom information using the determined power headroom report mapping. 48. The method of claim 47, wherein the reporting includes: generating an indication of the power headroom information using the determined reporting configuration; and transmitting, to a network node in the wireless communication system, the indication of the power headroom information. 49. The method of claim 47, further comprising transmitting, to a network node in the wireless communication system, an indication of the coverage level of the wireless device. 50. The method of claim 47, wherein the obtaining includes determining the coverage level of the wireless device based on the information. 51. The method of claim 47, further comprising receiving, from a network node in the wireless communication system, the information indicating the coverage level of the wireless device. 52. The method of claim 47: wherein the obtaining includes performing a measurement of a signal transmitted or received by the wireless device; and wherein the information includes the signal measurement. 53. The method of claim 47: wherein the obtaining includes determining a number of repetitions used for random access transmissions by the wireless device based on a random access configuration of the wireless device; and wherein the information includes the number of repetitions used for the random access transmissions. 54. 
The method of claim 47, wherein the determining the reporting configuration includes receiving, from a network node in the wireless communication system, an indication of the different power headroom report mappings. 55. The method of claim 47, wherein the information includes an indication that a network node serving the wireless device is using or supports the coverage level. 56. The method of claim 47, wherein the information includes an indication that a network node serving the wireless device supports the different coverage levels. 57. The method of claim 47, wherein the information includes a measurement of a signal transmitted or received by the wireless device. 58. The method of claim 47, wherein the information includes a random access configuration associated with the wireless device performing random access transmissions to a network node. 59. The method of claim 47, wherein the information includes a capability of the wireless device to support the different coverage levels. 60. The method of claim 47, wherein the information includes an indication of the different coverage levels of the wireless device. 61. The method of claim 47, wherein the determining the power headroom report mapping is based on predefined time periods associated with a measurement of a signal received by the wireless device from a network node. 62. The method of claim 47, wherein the determining the power headroom report mapping is based on one or more resources associated with the different power headroom report mappings being available for use by the wireless device. 63. The method of claim 47, wherein the determining the power headroom report mapping is based on data provided by a network node to assist the wireless device in the determining the power headroom report mapping. 64. The method of claim 47, wherein the different coverage levels include one or more normal coverage levels and one or more enhanced coverage levels. 65. 
The method of claim 47, wherein: the wireless device is capable of operating as a Long Term Evolution (LTE) Category Narrowband 1 (LTE Cat NB1) device; and the determined power headroom report mapping includes a power headroom report mapping for the LTE Cat NB1 device. 66. A wireless device in a wireless communication system, the wireless device comprising processing circuitry; memory containing instructions executable by the processing circuitry whereby the wireless device is operative to: obtain information indicating a coverage level of the wireless device; determine, from amongst different power headroom report mappings respectively associated with different coverage levels of the wireless device and differing from each other with respect to reporting resolutions and/or reporting ranges of power headroom information, the power headroom report mapping associated with the coverage level indicated by the obtained information; and report power headroom information using the determined power headroom report mapping. 67. A method performed by a network node in a wireless communication system, comprising: obtaining information indicating a coverage level of a wireless device in the wireless communication system; determining, from amongst different power headroom report mappings respectively associated with different coverage levels of the wireless device and differing from each other with respect to reporting resolutions and/or reporting ranges of power headroom information, the power headroom report mapping associated with the coverage level indicated by the obtained information. 68. The method of claim 67, further comprising transmitting, to the wireless device, the determined power headroom report mapping. 69. The method of claim 67, further comprising receiving, from the wireless device, power headroom information using the determined reporting configuration. 70. 
The method of claim 67, further comprising: receiving, from the wireless device, an indication of one or more coverage levels supported by the wireless device, wherein the information includes the one or more coverage levels supported by the wireless device; and wherein the obtaining includes determining the coverage level from the one or more coverage levels supported by the wireless device. 71. The method of claim 67: wherein the wireless device is capable of operating as a Long Term Evolution (LTE) Category Narrowband 1 (LTE Cat NB1) device; and wherein the determined power headroom report mapping includes a power headroom report mapping for the LTE Cat NB1 device. 72. A network node in a wireless communication system, the network node comprising: processing circuitry; memory containing instructions executable by the processing circuitry whereby the network node is operative to: obtain information indicating a coverage level of a wireless device in the wireless communication system; and determine, from amongst different power headroom report mappings respectively associated with different coverage levels of the wireless device and differing from each other with respect to reporting resolutions and/or reporting ranges of power headroom information, the power headroom report mapping associated with the coverage level indicated by the obtained information.
Systems and methods of determining a reporting configuration associated with a coverage level of a wireless device are provided. In one exemplary embodiment, a method performed by a wireless device (105, 200, 300, 400, 1000) in a wireless communication system (100) includes obtaining (503) information indicating a coverage level (113a-d) of the wireless device. Further, the method includes determining (507), from amongst different reporting configurations (115a-d) respectively associated with different coverage levels of the wireless device, the reporting configuration associated with the coverage level indicated by the obtained information. Also, the method includes reporting (511) a measurement result using the determined reporting configuration. 1-46. (canceled) 47. A method performed by a wireless device in a wireless communication system, the method comprising: obtaining information indicating a coverage level of the wireless device; determining, from amongst different power headroom report mappings respectively associated with different coverage levels of the wireless device and differing from each other with respect to reporting resolutions and/or reporting ranges of power headroom information, the power headroom report mapping associated with the coverage level indicated by the obtained information; and reporting the power headroom information using the determined power headroom report mapping. 48. The method of claim 47, wherein the reporting includes: generating an indication of the power headroom information using the determined power headroom report mapping; and transmitting, to a network node in the wireless communication system, the indication of the power headroom information. 49. The method of claim 47, further comprising transmitting, to a network node in the wireless communication system, an indication of the coverage level of the wireless device. 50. 
The method of claim 47, wherein the obtaining includes determining the coverage level of the wireless device based on the information. 51. The method of claim 47, further comprising receiving, from a network node in the wireless communication system, the information indicating the coverage level of the wireless device. 52. The method of claim 47: wherein the obtaining includes performing a measurement of a signal transmitted or received by the wireless device; and wherein the information includes the signal measurement. 53. The method of claim 47: wherein the obtaining includes determining a number of repetitions used for random access transmissions by the wireless device based on a random access configuration of the wireless device; and wherein the information includes the number of repetitions used for the random access transmissions. 54. The method of claim 47, wherein the determining the power headroom report mapping includes receiving, from a network node in the wireless communication system, an indication of the different power headroom report mappings. 55. The method of claim 47, wherein the information includes an indication that a network node serving the wireless device is using or supports the coverage level. 56. The method of claim 47, wherein the information includes an indication that a network node serving the wireless device supports the different coverage levels. 57. The method of claim 47, wherein the information includes a measurement of a signal transmitted or received by the wireless device. 58. The method of claim 47, wherein the information includes a random access configuration associated with the wireless device performing random access transmissions to a network node. 59. The method of claim 47, wherein the information includes a capability of the wireless device to support the different coverage levels. 60. The method of claim 47, wherein the information includes an indication of the different coverage levels of the wireless device. 61. 
The method of claim 47, wherein the determining the power headroom report mapping is based on predefined time periods associated with a measurement of a signal received by the wireless device from a network node. 62. The method of claim 47, wherein the determining the power headroom report mapping is based on one or more resources associated with the different power headroom report mappings being available for use by the wireless device. 63. The method of claim 47, wherein the determining the power headroom report mapping is based on data provided by a network node to assist the wireless device in the determining the power headroom report mapping. 64. The method of claim 47, wherein the different coverage levels include one or more normal coverage levels and one or more enhanced coverage levels. 65. The method of claim 47, wherein: the wireless device is capable of operating as a Long Term Evolution (LTE) Category Narrowband 1 (LTE Cat NB1) device; and the determined power headroom report mapping includes a power headroom report mapping for the LTE Cat NB1 device. 66. A wireless device in a wireless communication system, the wireless device comprising processing circuitry; memory containing instructions executable by the processing circuitry whereby the wireless device is operative to: obtain information indicating a coverage level of the wireless device; determine, from amongst different power headroom report mappings respectively associated with different coverage levels of the wireless device and differing from each other with respect to reporting resolutions and/or reporting ranges of power headroom information, the power headroom report mapping associated with the coverage level indicated by the obtained information; and report power headroom information using the determined power headroom report mapping. 67. 
A method performed by a network node in a wireless communication system, comprising: obtaining information indicating a coverage level of a wireless device in the wireless communication system; determining, from amongst different power headroom report mappings respectively associated with different coverage levels of the wireless device and differing from each other with respect to reporting resolutions and/or reporting ranges of power headroom information, the power headroom report mapping associated with the coverage level indicated by the obtained information. 68. The method of claim 67, further comprising transmitting, to the wireless device, the determined power headroom report mapping. 69. The method of claim 67, further comprising receiving, from the wireless device, power headroom information using the determined power headroom report mapping. 70. The method of claim 67, further comprising: receiving, from the wireless device, an indication of one or more coverage levels supported by the wireless device, wherein the information includes the one or more coverage levels supported by the wireless device; and wherein the obtaining includes determining the coverage level from the one or more coverage levels supported by the wireless device. 71. The method of claim 67: wherein the wireless device is capable of operating as a Long Term Evolution (LTE) Category Narrowband 1 (LTE Cat NB1) device; and wherein the determined power headroom report mapping includes a power headroom report mapping for the LTE Cat NB1 device. 72. 
A network node in a wireless communication system, the network node comprising: processing circuitry; memory containing instructions executable by the processing circuitry whereby the network node is operative to: obtain information indicating a coverage level of a wireless device in the wireless communication system; and determine, from amongst different power headroom report mappings respectively associated with different coverage levels of the wireless device and differing from each other with respect to reporting resolutions and/or reporting ranges of power headroom information, the power headroom report mapping associated with the coverage level indicated by the obtained information.
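The claims above describe selecting, from among several power headroom report mappings that differ in reporting resolution and/or reporting range, the one associated with the device's indicated coverage level, and then reporting headroom under that mapping. A minimal sketch in Python of one way such a selection and quantization could work; the coverage-level names, resolutions, and ranges here are illustrative assumptions, not values taken from the claims or from any 3GPP specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PhrMapping:
    """One power headroom report mapping (hypothetical parameters)."""
    resolution_db: float          # dB represented by one report step
    range_db: tuple               # (min_db, max_db) reportable range

# Hypothetical per-coverage-level table: coarser resolution at deeper
# coverage levels, consistent with mappings "differing with respect to
# reporting resolutions and/or reporting ranges".
MAPPINGS = {
    "normal":    PhrMapping(resolution_db=1.0, range_db=(-23, 40)),
    "enhanced1": PhrMapping(resolution_db=2.0, range_db=(-23, 40)),
    "enhanced2": PhrMapping(resolution_db=4.0, range_db=(-23, 40)),
}

def select_mapping(coverage_level: str) -> PhrMapping:
    """Pick the report mapping associated with the indicated coverage level."""
    return MAPPINGS[coverage_level]

def encode_headroom(headroom_db: float, m: PhrMapping) -> int:
    """Quantize a headroom value into a report index under mapping m."""
    lo, hi = m.range_db
    clamped = min(max(headroom_db, lo), hi)   # clamp to the reporting range
    return round((clamped - lo) / m.resolution_db)

# A device in deep coverage reports the same headroom with fewer steps:
idx = encode_headroom(10.0, select_mapping("enhanced2"))
```

Both the device side (claims 47, 66) and the network side (claims 67, 72) would hold the same table, so a received index can be mapped back to a dB value.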
2,600
10,932
10,932
16,290,871
2,613
A method and a digital device for evaluating a generalized rational function are provided in which the evaluation of the generalized rational function includes computing all discontinuities of the generalized rational function, determining, for each discontinuity, whether or not the discontinuity is a removable discontinuity, wherein the determining includes determining whether or not the generalized rational function approaches a point near the discontinuity, and displaying each removable discontinuity. A method for evaluating a generalized rational function on a handheld graphing calculator is provided that includes determining whether or not the generalized rational function has at least one asymptote and displaying the at least one asymptote on a display screen when the generalized rational function has the at least one asymptote.
1. A method for evaluating a generalized rational function on a digital device, the method comprising: computing, by at least one processor of the digital device, all discontinuities of the generalized rational function, wherein the computing comprises using a discontinuity determination process to determine the discontinuities, the discontinuity determination process comprising: determining that there is no discontinuity in an input to the discontinuity determination process when the input is a constant or a variable; determining a top level operator of the input when the input is not a constant or a variable; when the top level operator is negation, recursively applying the discontinuity determination process to an operand of the negation; when the top level operator is addition, multiplication, or subtraction, recursively applying the discontinuity determination process to a left operand of the top level operator, and recursively applying the discontinuity determination process to a right operand of the top level operator; and when the top level operator is division, recursively applying the discontinuity determination process to a numerator of the top level operator, and determining zeroes of a denominator of the top level operator, wherein the generalized rational function is an initial input to the discontinuity determination process; determining, for each discontinuity determined by the discontinuity determination process, whether or not the discontinuity is a removable discontinuity; and displaying each removable discontinuity on a display screen by the at least one processor. 2. The method of claim 1, wherein displaying further comprises displaying coordinates of each removable discontinuity. 3. The method of claim 1, wherein displaying further comprises displaying each removable discontinuity on a graph of the generalized rational function. 4. The method of claim 1, wherein the digital device is a handheld graphing calculator. 5. 
The method of claim 1, further comprising disabling functionality to determine removable discontinuities on the digital device, wherein responsive to the disabling, the computing, determining, and displaying are not performed. 6. The method of claim 1, wherein the discontinuity determination process further comprises: when the top level operator is exponentiation and an exponent is a positive integer, recursively applying the discontinuity determination process to a base of the exponentiation; and when the top level operator is exponentiation, the exponent is a negative integer, and the base is a polynomial, determining zeroes of the base. 7. The method of claim 1, further comprising: determining that a discontinuity computed by the discontinuity determination process is a vertical asymptote; and displaying the vertical asymptote on the display screen. 8. (canceled) 9. A method for evaluating a generalized rational function on a handheld graphing calculator, the method comprising: determining, by a processor of the handheld graphing calculator, whether or not the generalized rational function has at least one asymptote; and displaying, by the processor, the at least one asymptote on a display screen when the generalized rational function has the at least one asymptote, the displaying includes a textual representation of the at least one asymptote. 10. The method of claim 9, wherein displaying further comprises displaying a textual representation of the at least one asymptote. 11. The method of claim 9, wherein displaying further comprises displaying the at least one asymptote on a graph of the generalized rational function. 12. 
The method of claim 9, wherein determining further comprises: reducing, by the processor, the generalized rational function to a simplified rational function; computing, by the processor, all discontinuities of the simplified rational function, wherein the computing comprises using a discontinuity determination process to determine the discontinuities, the discontinuity determination process comprising: determining that there is no discontinuity in an input to the process when the input is a constant or a variable; determining a top level operator of the input when the input is not a constant or a variable; when the top level operator is negation, recursively applying the discontinuity determination process to an operand of the negation; when the top level operator is addition, multiplication, or subtraction, recursively applying the discontinuity determination process to a left operand of the top level operator, and recursively applying the discontinuity determination process to a right operand of the top level operator; and when the top level operator is division, recursively applying the discontinuity determination process to a numerator of the top level operator, and determining zeroes of a denominator of the top level operator, wherein the simplified rational function is an initial input to the discontinuity determination process; and determining, for each discontinuity computed by the discontinuity determination process, whether or not the discontinuity is a vertical asymptote. 13. (canceled) 14. 
The method of claim 9, wherein determining further comprises: computing, by the processor, a slope of a graph of the generalized rational function as values of an input of the generalized rational function approach ∞ or −∞; computing, by the processor, a y-intercept of a line with the slope; determining, by the processor, that there is a horizontal asymptote at the y-intercept when the slope is zero; and determining, by the processor, that there is an oblique asymptote when the slope is not zero. 15. A digital device comprising: a non-transitory computer-readable medium storing software instructions for evaluating a generalized rational function, wherein the software instructions comprise software instructions to: compute all discontinuities of the generalized rational function using a discontinuity determination process to determine the discontinuities, the discontinuity determination process comprising software instructions to: determine that there is no discontinuity in an input to the process when the input is a constant or a variable; determine a top level operator of the input when the input is not a constant or a variable; when the top level operator is negation, recursively apply the discontinuity determination process to an operand of the negation; when the top level operator is addition, multiplication, or subtraction, recursively apply the discontinuity determination process to a left operand of the top level operator, and recursively apply the discontinuity determination process to a right operand of the top level operator; and when the top level operator is division, recursively apply the discontinuity determination process to a numerator of the top level operator, and determine zeroes of a denominator of the top level operator, wherein the generalized rational function is an initial input to the discontinuity determination process; determine, for each discontinuity determined by the discontinuity determination process, whether or not the 
discontinuity is a removable discontinuity and display each removable discontinuity on a display screen; and at least one processor coupled to the non-transitory computer-readable medium to execute the software instructions. 16. The digital device of claim 15, wherein the software instructions to display further comprise software instructions to display coordinates of each removable discontinuity. 17. The digital device of claim 15, wherein the software instructions to display further comprise software instructions to display each removable discontinuity on a graph of the generalized rational function. 18. The digital device of claim 15, wherein the digital device is a handheld graphing calculator. 19. The digital device of claim 15, wherein the software instructions for evaluating a generalized rational function further comprise software instructions to disable functionality to determine removable discontinuities on the digital device, wherein responsive to the disabling, the software instructions to compute, determine, and display are not performed. 20. The digital device of claim 15, wherein the software instructions of the discontinuity determination process further comprise software instructions to: when the top level operator is exponentiation and an exponent is a positive integer, recursively apply the discontinuity determination process to a base of the exponentiation; and when the top level operator is exponentiation, the exponent is a negative integer, and the base is a polynomial, determine zeroes of the base. 21. The digital device of claim 15, wherein the software instructions for evaluating a generalized rational function further comprise software instructions to: determine that a discontinuity computed by the discontinuity determination process is a vertical asymptote; and display the vertical asymptote on the display screen. 22. (canceled)
A method and a digital device for evaluating a generalized rational function are provided in which the evaluation of the generalized rational function includes computing all discontinuities of the generalized rational function, determining, for each discontinuity, whether or not the discontinuity is a removable discontinuity, wherein the determining includes determining whether or not the generalized rational function approaches a point near the discontinuity, and displaying each removable discontinuity. A method for evaluating a generalized rational function on a handheld graphing calculator is provided that includes determining whether or not the generalized rational function has at least one asymptote and displaying the at least one asymptote on a display screen when the generalized rational function has the at least one asymptote. 1. A method for evaluating a generalized rational function on a digital device, the method comprising: computing, by at least one processor of the digital device, all discontinuities of the generalized rational function, wherein the computing comprises using a discontinuity determination process to determine the discontinuities, the discontinuity determination process comprising: determining that there is no discontinuity in an input to the discontinuity determination process when the input is a constant or a variable; determining a top level operator of the input when the input is not a constant or a variable; when the top level operator is negation, recursively applying the discontinuity determination process to an operand of the negation; when the top level operator is addition, multiplication, or subtraction, recursively applying the discontinuity determination process to a left operand of the top level operator, and recursively applying the discontinuity determination process to a right operand of the top level operator; and when the top level operator is division, recursively applying the discontinuity determination process to a 
numerator of the top level operator, and determining zeroes of a denominator of the top level operator, wherein the generalized rational function is an initial input to the discontinuity determination process; determining, for each discontinuity determined by the discontinuity determination process, whether or not the discontinuity is a removable discontinuity; and displaying each removable discontinuity on a display screen by the at least one processor. 2. The method of claim 1, wherein displaying further comprises displaying coordinates of each removable discontinuity. 3. The method of claim 1, wherein displaying further comprises displaying each removable discontinuity on a graph of the generalized rational function. 4. The method of claim 1, wherein the digital device is a handheld graphing calculator. 5. The method of claim 1, further comprising disabling functionality to determine removable discontinuities on the digital device, wherein responsive to the disabling, the computing, determining, and displaying are not performed. 6. The method of claim 1, wherein the discontinuity determination process further comprises: when the top level operator is exponentiation and an exponent is a positive integer, recursively applying the discontinuity determination process to a base of the exponentiation; and when the top level operator is exponentiation, the exponent is a negative integer, and the base is a polynomial, determining zeroes of the base. 7. The method of claim 1, further comprising: determining that a discontinuity computed by the discontinuity determination process is a vertical asymptote; and displaying the vertical asymptote on the display screen. 8. (canceled) 9. 
A method for evaluating a generalized rational function on a handheld graphing calculator, the method comprising: determining, by a processor of the handheld graphing calculator, whether or not the generalized rational function has at least one asymptote; and displaying, by the processor, the at least one asymptote on a display screen when the generalized rational function has the at least one asymptote, the displaying includes a textual representation of the at least one asymptote. 10. The method of claim 9, wherein displaying further comprises displaying a textual representation of the at least one asymptote. 11. The method of claim 9, wherein displaying further comprises displaying the at least one asymptote on a graph of the generalized rational function. 12. The method of claim 9, wherein determining further comprises: reducing, by the processor, the generalized rational function to a simplified rational function; computing, by the processor, all discontinuities of the simplified rational function, wherein the computing comprises using a discontinuity determination process to determine the discontinuities, the discontinuity determination process comprising: determining that there is no discontinuity in an input to the process when the input is a constant or a variable; determining a top level operator of the input when the input is not a constant or a variable; when the top level operator is negation, recursively applying the discontinuity determination process to an operand of the negation; when the top level operator is addition, multiplication, or subtraction, recursively applying the discontinuity determination process to a left operand of the top level operator, and recursively applying the discontinuity determination process to a right operand of the top level operator; and when the top level operator is division, recursively applying the discontinuity determination process to a numerator of the top level operator, and determining zeroes of a denominator 
of the top level operator, wherein the simplified rational function is an initial input to the discontinuity determination process; and determining, for each discontinuity computed by the discontinuity determination process, whether or not the discontinuity is a vertical asymptote. 13. (canceled) 14. The method of claim 9, wherein determining further comprises: computing, by the processor, a slope of a graph of the generalized rational function as values of an input of the generalized rational function approach ∞ or −∞; computing, by the processor, a y-intercept of a line with the slope; determining, by the processor, that there is a horizontal asymptote at the y-intercept when the slope is zero; and determining, by the processor, that there is an oblique asymptote when the slope is not zero. 15. A digital device comprising: a non-transitory computer-readable medium storing software instructions for evaluating a generalized rational function, wherein the software instructions comprise software instructions to: compute all discontinuities of the generalized rational function using a discontinuity determination process to determine the discontinuities, the discontinuity determination process comprising software instructions to: determine that there is no discontinuity in an input to the process when the input is a constant or a variable; determine a top level operator of the input when the input is not a constant or a variable; when the top level operator is negation, recursively apply the discontinuity determination process to an operand of the negation; when the top level operator is addition, multiplication, or subtraction, recursively apply the discontinuity determination process to a left operand of the top level operator, and recursively apply the discontinuity determination process to a right operand of the top level operator; and when the top level operator is division, recursively apply the discontinuity determination process to a numerator of the top 
level operator, and determine zeroes of a denominator of the top level operator, wherein the generalized rational function is an initial input to the discontinuity determination process; determine, for each discontinuity determined by the discontinuity determination process, whether or not the discontinuity is a removable discontinuity and display each removable discontinuity on a display screen; and at least one processor coupled to the non-transitory computer-readable medium to execute the software instructions. 16. The digital device of claim 15, wherein the software instructions to display further comprise software instructions to display coordinates of each removable discontinuity. 17. The digital device of claim 15, wherein the software instructions to display further comprise software instructions to display each removable discontinuity on a graph of the generalized rational function. 18. The digital device of claim 15, wherein the digital device is a handheld graphing calculator. 19. The digital device of claim 15, wherein the software instructions for evaluating a generalized rational function further comprise software instructions to disable functionality to determine removable discontinuities on the digital device, wherein responsive to the disabling, the software instructions to compute, determine, and display are not performed. 20. The digital device of claim 15, wherein the software instructions of the discontinuity determination process further comprise software instructions to: when the top level operator is exponentiation and an exponent is a positive integer, recursively apply the discontinuity determination process to a base of the exponentiation; and when the top level operator is exponentiation, the exponent is a negative integer, and the base is a polynomial, determine zeroes of the base. 21. 
The digital device of claim 15, wherein the software instructions for evaluating a generalized rational function further comprise software instructions to: determine that a discontinuity computed by the discontinuity determination process is a vertical asymptote; and display the vertical asymptote on the display screen. 22. (canceled)
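The discontinuity determination process in claim 1 is a recursive walk over the expression tree of the function: constants and variables contribute nothing, unary and binary operators recurse into their operands, and division additionally contributes the zeros of its denominator. The sketch below mirrors that recursion over a toy tuple-based tree; as a simplifying assumption, denominators are limited to linear polynomials a*x + b so their zeros have a closed form. A real implementation would use a general root finder and would go on to classify each discontinuity as removable or a vertical asymptote, which this sketch does not do.

```python
# Toy expression nodes: ('const', c), ('var',), ('poly', a, b) for a*x + b,
# ('neg', e), ('add'|'sub'|'mul', left, right), ('div', numerator, denominator).

def linear_zeros(a, b):
    """Zeros of a*x + b (empty if a == 0); stands in for a general root finder."""
    return [] if a == 0 else [-b / a]

def discontinuities(expr):
    kind = expr[0]
    if kind in ("const", "var", "poly"):
        return []                                    # no discontinuity
    if kind == "neg":
        return discontinuities(expr[1])              # recurse into the operand
    if kind in ("add", "sub", "mul"):
        # recurse into both the left and right operands
        return discontinuities(expr[1]) + discontinuities(expr[2])
    if kind == "div":
        num, den = expr[1], expr[2]
        zeros = linear_zeros(den[1], den[2]) if den[0] == "poly" else []
        # recurse into the numerator, plus the zeros of the denominator
        return discontinuities(num) + zeros
    raise ValueError(f"unknown operator {kind}")

# f(x) = (x - 1) / (x - 1) has a discontinuity at x = 1 (a removable one,
# since f approaches 1 near that point).
f = ("div", ("poly", 1, -1), ("poly", 1, -1))
print(discontinuities(f))   # [1.0]
```

Claim 6's extension for exponentiation would add two more branches of the same shape: recurse into the base for positive integer exponents, and take zeros of a polynomial base for negative integer exponents.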
2,600
10,933
10,933
14,591,354
2,628
An electronic device may have a variable refresh rate display. Static content may be displayed on the display at a lower refresh rate than moving content to conserve power. The display may include an array of pixels. Display driver circuitry in the display may load image data into rows of the pixels. The display driver circuitry may have digital-to-analog converter circuitry that supplies data signals to the array. The display driver circuitry may respond to a variable refresh rate control signal that is asserted and deasserted depending on whether static or moving image content is to be displayed. The display driver circuitry may use the digital-to-analog converter circuitry to apply a time-varying scaling factor to the image data. The magnitude of the scaling factor may be adjusted during transitions between refresh rates to help suppress luminance variations that might otherwise result in flickering on the display.
1. A variable refresh rate display, comprising: display driver circuitry that receives image data; and an array of pixels for displaying images at multiple refresh rates, wherein the display driver circuitry comprises digital-to-analog circuitry that applies a time-varying scaling factor to the image data while loading the image data into the array of pixels to prevent visual artifacts due to refresh rate transitions. 2. The variable refresh rate display defined in claim 1 wherein the multiple refresh rates include a first refresh rate and a second refresh rate that is lower than the first refresh rate and wherein the display driver circuitry is configured to apply a first scaling factor to the image data during use of the first refresh rate to refresh the image data on the array of pixels and is configured to apply a second scaling factor to the image data during use of the second refresh rate to refresh the image data on the array of pixels. 3. The variable refresh rate display defined in claim 2 wherein the second scaling factor is smaller than the first scaling factor. 4. The variable refresh rate display defined in claim 3 wherein the display driver circuitry is configured to apply at least a third scaling factor to the image data when transitioning between the second refresh rate and the first refresh rate. 5. The variable refresh rate display defined in claim 4 wherein the third scaling factor is greater than the first scaling factor. 6. The variable refresh rate display defined in claim 5 wherein the display driver circuitry is configured to apply the first scaling factor to the image data after applying the third scaling factor to the image data when transitioning between the second refresh rate and the first refresh rate. 7. 
A method of operating an electronic device with a variable refresh rate display, comprising: receiving image data with display driver circuitry in the variable refresh rate display; displaying images at multiple refresh rates on an array of pixels in the variable refresh rate display using the display driver circuitry; and applying a time-varying scaling factor to the image data using digital-to-analog circuitry in the display driver circuitry while loading the image data into the array of pixels to prevent visual artifacts due to refresh rate transitions. 8. The method defined in claim 7 wherein the multiple refresh rates include a first refresh rate and a second refresh rate that is lower than the first refresh rate and wherein applying the time-varying scaling factor comprises: applying a first scaling factor to the image data during use of the first refresh rate to refresh the image data on the array of pixels; and applying a second scaling factor to the image data during use of the second refresh rate to refresh the image data on the array of pixels. 9. The method defined in claim 8 wherein the second scaling factor is smaller than the first scaling factor and wherein applying the second scaling factor comprises applying the second scaling factor that is smaller than the first scaling factor. 10. The method defined in claim 9 further comprising: analyzing the image data with control circuitry in the electronic device to determine how to adjust the refresh rate. 11. The method defined in claim 10 further comprising providing the image data from the control circuitry to the display driver circuitry, wherein analyzing the image data comprises processing the image data in a frame buffer in the control circuitry. 12. The method defined in claim 11 further comprising: asserting a variable refresh rate control signal with the control circuitry in response to detection of static image content when processing the image data in the frame buffer. 13. 
The method defined in claim 12 wherein the second refresh rate is smaller than the first refresh rate, the method further comprising: adjusting the refresh rate from the first refresh rate to the second refresh rate with the display driver circuitry in response to receipt of the asserted variable refresh rate control signal with the display driver circuitry. 14. The method defined in claim 13 further comprising: adjusting the refresh rate from the second refresh rate to the first refresh rate with the display driver circuitry in response to deassertion of the variable refresh rate control signal. 15. The method defined in claim 14 further comprising: applying at least a third scaling factor to the image data when transitioning between the second refresh rate and the first refresh rate. 16. The method defined in claim 15 wherein the third scaling factor is greater than the first scaling factor, the method further comprising: when transitioning between the second refresh rate and the first refresh rate, applying the first scaling factor to the image data with the display driver circuitry after applying the third scaling factor to the image data. 17. The method defined in claim 16 further comprising: deasserting the variable refresh rate control signal in response to identifying moving image content in the frame buffer with the control circuitry. 18. 
An electronic device, comprising: control circuitry that analyzes image data to identify static image content and moving image content in the image data, wherein the control circuitry asserts a variable refresh rate control signal in response to identifying the static image content and deasserts the variable refresh rate control signal in response to identifying the moving image content; and a variable refresh rate display having display driver circuitry and having an array of pixels on which the display driver circuitry displays images corresponding to the image data, wherein the display driver circuitry includes a digital-to-analog converter that scales the image data by different amounts at different times to minimize visible display artifacts when transitioning between multiple refresh rates for the variable refresh rate display. 19. The electronic device defined in claim 18 wherein the control circuitry includes a frame buffer and wherein the control circuitry analyzes the image data by processing image data in the frame buffer. 20. The electronic device defined in claim 18 wherein the digital-to-analog converter is configured to scale the image data by applying a time-varying scaling factor to data signals being loaded into the array of pixels and wherein the digital-to-analog converter adjusts the scaling factor in response to transitions between the refresh rates.
An electronic device may have a variable refresh rate display. Static content may be displayed on the display at a lower refresh rate than moving content to conserve power. The display may include an array of pixels. Display driver circuitry in the display may load image data into rows of the pixels. The display driver circuitry may have digital-to-analog converter circuitry that supplies data signals to the array. The display driver circuitry may respond to a variable refresh rate control signal that is asserted and deasserted depending on whether static or moving image content is to be displayed. The display driver circuitry may use the digital-to-analog converter circuitry to apply a time-varying scaling factor to the image data. The magnitude of the scaling factor may be adjusted during transitions between refresh rates to help suppress luminance variations that might otherwise result in flickering on the display.1. A variable refresh rate display, comprising: display driver circuitry that receives image data; and an array of pixels for displaying images at multiple refresh rates, wherein the display driver circuitry comprises digital-to-analog circuitry that applies a time-varying scaling factor to the image data while loading the image data into the array of pixels to prevent visual artifacts due to refresh rate transitions. 2. The variable refresh rate display defined in claim 1 wherein the multiple refresh rates include a first refresh rate and a second refresh rate that is lower than the first refresh rate and wherein the display driver circuitry is configured to apply a first scaling factor to the image data during use of the first refresh rate to refresh the image data on the array of pixels and is configured to apply a second scaling factor to the image data during use of the second refresh rate to refresh the image data on the array of pixels. 3. 
The variable refresh rate display defined in claim 2 wherein the second scaling factor is smaller than the first scaling factor. 4. The variable refresh rate display defined in claim 3 wherein the display driver circuitry is configured to apply at least a third scaling factor to the image data when transitioning between the second refresh rate and the first refresh rate. 5. The variable refresh rate display defined in claim 4 wherein the third scaling factor is greater than the first scaling factor. 6. The variable refresh rate display defined in claim 5 wherein the display driver circuitry is configured to apply the first scaling factor to the image data after applying the third scaling factor to the image data when transitioning between the second refresh rate and the first refresh rate. 7. A method of operating an electronic device with a variable refresh rate display, comprising: receiving image data with display driver circuitry in the variable refresh rate display; displaying images at multiple refresh rates on an array of pixels in the variable refresh rate display using the display driver circuitry; and applying a time-varying scaling factor to the image data using digital-to-analog circuitry in the display driver circuitry while loading the image data into the array of pixels to prevent visual artifacts due to refresh rate transitions. 8. The method defined in claim 7 wherein the multiple refresh rates include a first refresh rate and a second refresh rate that is lower than the first refresh rate and wherein applying the time-varying scaling factor comprises: applying a first scaling factor to the image data during use of the first refresh rate to refresh the image data on the array of pixels; and applying a second scaling factor to the image data during use of the second refresh rate to refresh the image data on the array of pixels. 9. 
The method defined in claim 8 wherein the second scaling factor is smaller than the first scaling factor and wherein applying the second scaling factor comprises applying the second scaling factor that is smaller than the first scaling factor. 10. The method defined in claim 9 further comprising: analyzing the image data with control circuitry in the electronic device to determine how to adjust the refresh rate. 11. The method defined in claim 10 further comprising providing the image data from the control circuitry to the display driver circuitry, wherein analyzing the image data comprises processing the image data in a frame buffer in the control circuitry. 12. The method defined in claim 11 further comprising: asserting a variable refresh rate control signal with the control circuitry in response to detection of static image content when processing the image data in the frame buffer. 13. The method defined in claim 12 wherein the second refresh rate is smaller than the first refresh rate, the method further comprising: adjusting the refresh rate from the first refresh rate to the second refresh rate with the display driver circuitry in response to receipt of the asserted variable refresh rate control signal with the display driver circuitry. 14. The method defined in claim 13 further comprising: adjusting the refresh rate from the second refresh rate to the first refresh rate with the display driver circuitry in response to deassertion of the variable refresh rate control signal. 15. The method defined in claim 14 further comprising: applying at least a third scaling factor to the image data when transitioning between the second refresh rate and the first refresh rate. 16. 
The method defined in claim 15 wherein the third scaling factor is greater than the first scaling factor, the method further comprising: when transitioning between the second refresh rate and the first refresh rate, applying the first scaling factor to the image data with the display driver circuitry after applying the third scaling factor to the image data. 17. The method defined in claim 16 further comprising: deasserting the variable refresh rate control signal in response to identifying moving image content in the frame buffer with the control circuitry. 18. An electronic device, comprising: control circuitry that analyzes image data to identify static image content and moving image content in the image data, wherein the control circuitry asserts a variable refresh rate control signal in response to identifying the static image content and deasserts the variable refresh rate control signal in response to identifying the moving image content; and a variable refresh rate display having display driver circuitry and having an array of pixels on which the display driver circuitry displays images corresponding to the image data, wherein the display driver circuitry includes a digital-to-analog converter that scales the image data by different amounts at different times to minimize visible display artifacts when transitioning between multiple refresh rates for the variable refresh rate display. 19. The electronic device defined in claim 18 wherein the control circuitry includes a frame buffer and wherein the control circuitry analyzes the image data by processing image data in the frame buffer. 20. The electronic device defined in claim 18 wherein the digital-to-analog converter is configured to scale the image data by applying a time-varying scaling factor to data signals being loaded into the array of pixels and wherein the digital-to-analog converter adjusts the scaling factor in response to transitions between the refresh rates.
2,600
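The scaling scheme described in the record above (a first factor while refreshing at the high rate, a smaller second factor at the low rate, and a larger third factor applied briefly when transitioning from the low rate back to the high rate, before returning to the first factor) can be sketched as follows. All numeric values, the boost duration, and the function names are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the time-varying scaling factor from the claims.
# Values are illustrative only: the patent specifies relationships between
# the factors (second < first < third), not concrete numbers.

HIGH_RATE_SCALE = 1.00   # first scaling factor (high refresh rate, e.g. 60 Hz)
LOW_RATE_SCALE = 0.95    # second scaling factor (low rate), smaller per claim 3
BOOST_SCALE = 1.05       # third scaling factor, larger per claim 5

def scaling_factor(refresh_hz, frames_since_transition, boost_frames=2):
    """Return the factor applied to image data for the current frame."""
    if refresh_hz < 60:                          # low-rate (static content) mode
        return LOW_RATE_SCALE
    if frames_since_transition < boost_frames:   # just switched low -> high:
        return BOOST_SCALE                       # boost to suppress flicker
    return HIGH_RATE_SCALE                       # steady-state high-rate mode

def scale_row(pixel_row, factor):
    """Apply the factor while 'loading' one row of image data, clamped to 8 bits."""
    return [min(255, round(p * factor)) for p in pixel_row]
```

Note the ordering required by claim 6: after the boost window expires, `scaling_factor` falls back to `HIGH_RATE_SCALE`, i.e. the first factor is applied after the third during a low-to-high transition.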
10,934
10,934
13,266,058
2,646
A control apparatus configured to allow a communication device supporting a first frequency setting to enter a system providing a communications facility based on a second frequency setting, wherein the first frequency setting provides only a partial support for communications in the system is disclosed. The control apparatus may be co-operative with a second control apparatus. The second control apparatus is configured to determine based on frequency setting information received from the system if it is possible to transmit to the system based on the first frequency setting supported by the communication device.
1.-40. (canceled) 41. An apparatus comprising: at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: allow a communication device supporting a first frequency setting to enter a system providing a communications facility based on a second frequency setting, wherein the first frequency setting provides only a partial support for communications in the system, and wherein the first frequency setting provides support for communication on at least one frequency band, and the second frequency setting defines communications on at least one different frequency band. 42. An apparatus in accordance with claim 41 further configured to send frequency setting information. 43. An apparatus in accordance with claim 41, wherein the second frequency setting defines communications on a main frequency band and at least one additional frequency band. 44. An apparatus in accordance with claim 41, wherein the first frequency setting and the second frequency setting comprise radio frequency requirements for relevant radio frequency bands. 45. An apparatus in accordance with claim 42, wherein the frequency setting information comprises an indication of at least one of a frequency band and at least one additional frequency band provided by the system. 46. An apparatus in accordance with claim 41, wherein the apparatus is further caused to: receive a transmission from the communication device, and determine the level of support provided by the communication device for the second frequency setting based on the received transmission and to control communications by the communication device based on the determination. 47. 
An apparatus in accordance with claim 46, wherein the apparatus is further caused to: receive the transmission on a random access channel and determine the frequency bands supported by the communication device based on the received transmission. 48. An apparatus in accordance with claim 46, wherein the apparatus is further caused to decide on limitations on the communication device after determining capabilities thereof based on the received transmission. 49. An apparatus in accordance with claim 46, wherein the apparatus is further caused to control transmission scheduling based on the determination. 50. An apparatus in accordance with claim 49, wherein the apparatus is further caused to: at least one of use a limited channel range, allocate resource blocks that are away from potential blocking signals, avoid use of band edges, and allocate frequency or frequencies that is/are away from critical emission area or areas. 51. An apparatus in accordance with claim 41, wherein the apparatus is further caused to use different sets of channel numbers for different communication devices. 52. An apparatus comprising: at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: determine based on frequency setting information received from a system providing a communications facility if it is possible to transmit to the system based on a first frequency setting supported by the communication device, wherein the system provides the communications facility based on a second frequency setting and the first frequency setting provides only a partial support for communications in the system, and wherein the first frequency setting provides support for communication on at least one frequency band, and the second frequency setting defines communications on at least one different frequency band. 53. 
An apparatus in accordance with claim 52, wherein the second frequency setting defines communications on a main frequency band and at least one additional frequency band. 54. An apparatus in accordance with claim 52, wherein the first frequency setting and the second frequency setting comprise radio frequency requirements for relevant radio frequency bands. 55. An apparatus in accordance with claim 52, wherein the frequency setting information comprises an indication of at least one of a frequency band provided by the system and at least one additional frequency band provided by the system. 56. An apparatus in accordance with claim 52, wherein the apparatus is further caused to: determine based on the frequency setting information that at least one frequency band provided by the system is supported by the communication device, and allow transmission on a random access channel to the system in response to such determination. 57. A method comprising: receiving from a system frequency setting information at a communications device supporting a first frequency setting, wherein the system provides a communications facility based on a second frequency setting and the first frequency setting provides only a partial support for communications in the system; and determining based on the frequency setting information if sending from the communication device to the system is possible, wherein the first frequency setting provides support for communication on at least one frequency band, and the second frequency setting defines communications on at least one different frequency band. 58. A method in accordance with claim 57, wherein the second frequency setting defines communications on a main frequency band and at least one additional frequency band. 59. 
A method in accordance with claim 57, wherein the frequency setting information comprises an indication of at least one of a frequency band provided by the system and at least one additional frequency band provided by the system. 60. A method in accordance with claim 57, comprising transmitting a transmission on a random access channel to the access system in response to determining based on the received information that at least one frequency band provided by the system is supported by the communication device.
A control apparatus configured to allow a communication device supporting a first frequency setting to enter a system providing a communications facility based on a second frequency setting, wherein the first frequency setting provides only a partial support for communications in the system is disclosed. The control apparatus may be co-operative with a second control apparatus. The second control apparatus is configured to determine based on frequency setting information received from the system if it is possible to transmit to the system based on the first frequency setting supported by the communication device.1.-40. (canceled) 41. An apparatus comprising: at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: allow a communication device supporting a first frequency setting to enter a system providing a communications facility based on a second frequency setting, wherein the first frequency setting provides only a partial support for communications in the system, and wherein the first frequency setting provides support for communication on at least one frequency band, and the second frequency setting defines communications on at least one different frequency band. 42. An apparatus in accordance with claim 41 further configured to send frequency setting information. 43. An apparatus in accordance with claim 41, wherein the second frequency setting defines communications on a main frequency band and at least one additional frequency band. 44. An apparatus in accordance with claim 41, wherein the first frequency setting and the second frequency setting comprise radio frequency requirements for relevant radio frequency bands. 45. 
An apparatus in accordance with claim 42, wherein the frequency setting information comprises an indication of at least one of a frequency band and at least one additional frequency band provided by the system. 46. An apparatus in accordance with claim 41, wherein the apparatus is further caused to: receive a transmission from the communication device, and determine the level of support provided by the communication device for the second frequency setting based on the received transmission and to control communications by the communication device based on the determination. 47. An apparatus in accordance with claim 46, wherein the apparatus is further caused to: receive the transmission on a random access channel and determine the frequency bands supported by the communication device based on the received transmission. 48. An apparatus in accordance with claim 46, wherein the apparatus is further caused to decide on limitations on the communication device after determining capabilities thereof based on the received transmission. 49. An apparatus in accordance with claim 46, wherein the apparatus is further caused to control transmission scheduling based on the determination. 50. An apparatus in accordance with claim 49, wherein the apparatus is further caused to: at least one of use a limited channel range, allocate resource blocks that are away from potential blocking signals, avoid use of band edges, and allocate frequency or frequencies that is/are away from critical emission area or areas. 51. An apparatus in accordance with claim 41, wherein the apparatus is further caused to use different sets of channel numbers for different communication devices. 52. 
An apparatus comprising: at least one processor and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: determine based on frequency setting information received from a system providing a communications facility if it is possible to transmit to the system based on a first frequency setting supported by the communication device, wherein the system provides the communications facility based on a second frequency setting and the first frequency setting provides only a partial support for communications in the system, and wherein the first frequency setting provides support for communication on at least one frequency band, and the second frequency setting defines communications on at least one different frequency band. 53. An apparatus in accordance with claim 52, wherein the second frequency setting defines communications on a main frequency band and at least one additional frequency band. 54. An apparatus in accordance with claim 52, wherein the first frequency setting and the second frequency setting comprise radio frequency requirements for relevant radio frequency bands. 55. An apparatus in accordance with claim 52, wherein the frequency setting information comprises an indication of at least one of a frequency band provided by the system and at least one additional frequency band provided by the system. 56. An apparatus in accordance with claim 52, wherein the apparatus is further caused to: determine based on the frequency setting information that at least one frequency band provided by the system is supported by the communication device, and allow transmission on a random access channel to the system in response to such determination. 57. 
A method comprising: receiving from a system frequency setting information at a communications device supporting a first frequency setting, wherein the system provides a communications facility based on a second frequency setting and the first frequency setting provides only a partial support for communications in the system; and determining based on the frequency setting information if sending from the communication device to the system is possible, wherein the first frequency setting provides support for communication on at least one frequency band, and the second frequency setting defines communications on at least one different frequency band. 58. A method in accordance with claim 57, wherein the second frequency setting defines communications on a main frequency band and at least one additional frequency band. 59. A method in accordance with claim 57, wherein the frequency setting information comprises an indication of at least one of a frequency band provided by the system and at least one additional frequency band provided by the system. 60. A method in accordance with claim 57, comprising transmitting a transmission on a random access channel to the access system in response to determining based on the received information that at least one frequency band provided by the system is supported by the communication device.
2,600
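The device-side behavior in the record above (claims 52-60) reduces to a set-intersection test: the device reads the broadcast frequency-setting information and attempts random access only if at least one system-provided band is among the bands it supports; the network can then schedule that partially-supporting device only on the shared bands. A minimal sketch of that check, using simple integer band identifiers as an illustrative assumption (not from the patent):

```python
# Hedged sketch of the band-capability check described in the claims.
# Band numbers here are illustrative identifiers, not values from the patent.

def can_attempt_random_access(device_bands, system_bands):
    """True if the device supports at least one band the system provides
    (the condition for transmitting on the random access channel)."""
    return bool(set(device_bands) & set(system_bands))

def schedulable_bands(device_bands, system_bands):
    """Bands the network could schedule this partially-supporting device on."""
    return sorted(set(device_bands) & set(system_bands))
```

For example, a device supporting bands {1, 3} may enter a system providing bands {3, 7, 20} because band 3 is common, while a device supporting only band 5 may not.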
10,935
10,935
14,678,121
2,644
A method for low-rate signal transmission on Optical Transport Networks is provided. In the method, a signal is mapped to a low-rate OPU of a low-rate ODU, wherein the low-rate ODU comprises an ODU overhead section and the low-rate OPU, the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps±20 ppm; OPU overhead bytes and ODU overhead bytes are added to the corresponding overhead sections; then, the low-rate ODU is multiplexed to an Optical channel Data Unit-k (ODUk) that has a bit rate higher than the bit rate of the low-rate ODU; finally, the ODUk is transmitted via the OTN.
1. A method for transmitting a signal in an Optical Transport Network (OTN), comprising: mapping the signal to a low-rate Optical channel Payload Unit (OPU) of a low-rate Optical channel Data Unit (ODU), wherein the low-rate ODU comprises an ODU overhead section and the low-rate OPU, the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps±20 ppm; generating OPU overhead bytes and filling the OPU overhead bytes into the OPU overhead section of the low-rate OPU; generating ODU overhead bytes and filling the ODU overhead bytes into the ODU overhead section of the low-rate ODU; multiplexing the low-rate ODU to an Optical channel Data Unit-k (ODUk), wherein the ODUk has a bit rate higher than the bit rate of the low-rate ODU; and transmitting the ODUk via the OTN. 2. The method according to claim 1, wherein the low-rate ODU has a size of 4×3,824 bytes, and the OPU payload section has a size of 4×3,808 bytes. 3. The method according to claim 1, wherein the step of multiplexing the low-rate ODU to the ODUk comprises: mapping the low-rate ODU into a payload section of an ODTU; mapping the payload section of the ODTU into the payload section of the OPUk of the ODUk, and adding justification overhead bytes to the overhead section of the OPUk of the ODUk; wherein the ODTU occupies one time slot of the OPUk. 4. The method according to claim 1, wherein the OPU overhead bytes comprise payload type bytes that are used to indicate the type of the signal. 5. The method according to claim 1, wherein the signal is one of the following signals: a Gigabit Ethernet signal, a Fiber Connection signal, a High Definition Television signal, and a Fast Ethernet signal. 6. The method according to claim 1, wherein the step of multiplexing the low-rate ODU to the ODUk is implemented by adopting an asynchronous multiplexing mode or a synchronous multiplexing mode. 
7. An apparatus for transmitting a signal in an Optical Transport Network (OTN), comprising: a processor and a computer readable medium having a plurality of computer executable instructions stored thereon which, when executed by the processor, cause the processor to implement: mapping the signal to a low-rate Optical channel Payload Unit (OPU) of a low-rate Optical channel Data Unit (ODU), wherein the low-rate ODU comprises an ODU overhead section and the low-rate OPU, the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps±20 ppm; generating OPU overhead bytes and filling the OPU overhead bytes into the OPU overhead section of the low-rate OPU; generating ODU overhead bytes and filling the ODU overhead bytes into the ODU overhead section of the low-rate ODU; multiplexing the low-rate ODU to an Optical channel Data Unit-k (ODUk), wherein the ODUk has a bit rate higher than the bit rate of the low-rate ODU; transmitting the ODUk via the OTN. 8. The apparatus according to claim 7, wherein the low-rate ODU has a size of 4×3,824 bytes, and the OPU payload section has a size of 4×3,808 bytes. 9. The apparatus according to claim 7, wherein the multiplexing the low-rate ODU to the ODUk comprises: mapping the low-rate ODU into a payload section of an ODTU; mapping the payload section of the ODTU into the payload section of the OPUk of the ODUk, and adding justification overhead bytes to the overhead section of the OPUk of the ODUk; wherein the ODTU occupies one time slot of the OPUk. 10. The apparatus according to claim 7, wherein the OPU overhead bytes comprise payload type bytes that are used to indicate the type of the signal. 11. The apparatus according to claim 7, wherein the signal is one of the following signals: a Gigabit Ethernet signal, a Fiber Connection signal, a High Definition Television signal, and a Fast Ethernet signal. 12. 
The apparatus according to claim 7, wherein the step of multiplexing the low-rate ODU to the ODUk is implemented by adopting an asynchronous multiplexing mode or a synchronous multiplexing mode. 13. A method for recovering a signal from an Optical channel Data Unit-k (ODUk) in an Optical Transport Network (OTN), comprising: de-multiplexing a low-rate Optical channel Data Unit (ODU) from the ODUk, wherein the ODUk has a bit rate higher than the bit rate of the low-rate ODU, the low-rate ODU comprises an ODU overhead section and the low-rate OPU, the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps±20 ppm; obtaining OPU overhead bytes in the OPU overhead section of the low-rate OPU; de-mapping to recover the signal from the OPU payload section of the OPU according to the OPU overhead bytes. 14. The method according to claim 13, wherein the low-rate ODU has a size of 4×3,824 bytes, and the OPU payload section has a size of 4×3,808 bytes. 15. The method according to claim 13, wherein the low-rate ODU occupies one time slot of the OPUk of the ODUk. 16. The method according to claim 13, wherein the signal is one of the following signals: a Gigabit Ethernet signal, a Fiber Connection signal, a High Definition Television signal, and a Fast Ethernet signal. 17. 
An apparatus for recovering a signal from an Optical channel Data Unit-k (ODUk) in an Optical Transport Network (OTN), comprising: a processor and a computer readable medium having a plurality of computer executable instructions stored thereon which, when executed by the processor, cause the processor to implement: de-multiplexing a low-rate Optical channel Data Unit (ODU) from the ODUk, wherein the ODUk has a bit rate higher than the bit rate of the low-rate ODU, the low-rate ODU comprises an ODU overhead section and the low-rate OPU, the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps±20 ppm; obtaining OPU overhead bytes in the OPU overhead section of the low-rate OPU; de-mapping to recover the signal from the OPU payload section of the OPU according to the OPU overhead bytes. 18. The apparatus according to claim 17, wherein the low-rate ODU has a size of 4×3,824 bytes, and the OPU payload section has a size of 4×3,808 bytes. 19. The apparatus according to claim 17, wherein the low-rate ODU occupies one time slot of the OPUk of the ODUk. 20. The apparatus according to claim 17, wherein the signal is one of the following signals: a Gigabit Ethernet signal, a Fiber Connection signal, a High Definition Television signal, and a Fast Ethernet signal. 21. 
A system for communication in an Optical Transport Network (OTN), comprising: a first apparatus, configured to map a client signal to a low-rate Optical channel Payload Unit (OPU) of a low-rate Optical channel Data Unit (ODU), wherein the low-rate ODU comprises an ODU overhead section and the low-rate OPU, the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps±20 ppm; generate OPU overhead bytes and fill the OPU overhead bytes into the OPU overhead section of the low-rate OPU; generate ODU overhead bytes and fill the ODU overhead bytes into the ODU overhead section of the low-rate ODU; multiplex the low-rate ODU to an Optical channel Data Unit-k (ODUk), wherein the ODUk has a bit rate higher than the bit rate of the low-rate ODU; transmit the ODUk via the OTN; a second apparatus, configured to receive the ODUk, de-multiplex the low-rate ODU from the ODUk; obtain OPU overhead bytes in the OPU overhead section of the low-rate OPU; de-map to recover the client signal from the OPU payload section of the OPU according to the OPU overhead bytes. 22. The system according to claim 21, wherein the low-rate ODU has a size of 4×3,824 bytes, and the OPU payload section has a size of 4×3,808 bytes. 23. The system according to claim 21, wherein the low-rate ODU occupies one time slot of the OPUk of the ODUk. 24. The system according to claim 21, wherein the client signal is one of the following signals: a Gigabit Ethernet signal, a Fiber Connection signal, a High Definition Television signal, and a Fast Ethernet signal.
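The two bit rates recited in these claims are internally consistent: scaling the 1,244,160 kbps low-rate ODU rate by the payload fraction of the 4×3,824-byte frame (4×3,808 payload bytes out of 4×3,824 total) yields the stated OPU payload rate. A quick arithmetic check (the variable names are ours, not from the patent):

```python
# Verify that OPU payload rate = ODU rate * (payload bytes / frame bytes),
# using exact rational arithmetic to avoid floating-point surprises.
from fractions import Fraction

ODU_RATE_KBPS = 1_244_160     # low-rate ODU bit rate (claim 1)
FRAME_BYTES = 4 * 3824        # full low-rate ODU frame size (claim 2)
PAYLOAD_BYTES = 4 * 3808      # OPU payload section size (claim 2)

payload_rate = Fraction(ODU_RATE_KBPS) * PAYLOAD_BYTES / FRAME_BYTES
print(round(float(payload_rate), 2))  # prints 1238954.31, matching claim 1
```

The ratio 3,808/3,824 reduces to 238/239, so the payload rate is 1,244,160 × 238/239 ≈ 1,238,954.31 kbps, exactly the figure in the claims.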
A method for low-rate signal transmission on Optical Transport Networks is provided. In the method, a signal is mapped to a low-rate OPU of a low-rate ODU, wherein the low-rate ODU comprises an ODU overhead section and the low-rate OPU, the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps ±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps ±20 ppm; OPU overhead bytes and ODU overhead bytes are added to the corresponding overhead sections; then, the low-rate ODU is multiplexed to an Optical channel Data Unit-k (ODUk) that has a bit rate higher than the bit rate of the low-rate ODU; finally, the ODUk is transmitted via the OTN.

1. A method for transmitting a signal in an Optical Transport Network (OTN), comprising: mapping the signal to a low-rate Optical channel Payload Unit (OPU) of a low-rate Optical channel Data Unit (ODU), wherein the low-rate ODU comprises an ODU overhead section and the low-rate OPU, the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps ±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps ±20 ppm; generating OPU overhead bytes and filling the OPU overhead bytes into the OPU overhead section of the low-rate OPU; generating ODU overhead bytes and filling the ODU overhead bytes into the ODU overhead section of the low-rate ODU; multiplexing the low-rate ODU to an Optical channel Data Unit-k (ODUk), wherein the ODUk has a bit rate higher than the bit rate of the low-rate ODU; and transmitting the ODUk via the OTN. 2. The method according to claim 1, wherein the low-rate ODU has a size of 4×3,824 bytes, and the OPU payload section has a size of 4×3,808 bytes. 3.
The method according to claim 1, wherein the step of multiplexing the low-rate ODU to the ODUk comprises: mapping the low-rate ODU into a payload section of an ODTU; mapping the payload section of the ODTU into the payload section of the OPUk of the ODUk, and adding justification overhead bytes to the overhead section of the OPUk of the ODUk; wherein the ODTU occupies one time slot of the OPUk. 4. The method according to claim 1, wherein the OPU overhead bytes comprise payload type bytes that are used to indicate the type of the signal. 5. The method according to claim 1, wherein the signal is one of the following signals: a Gigabit Ethernet signal, a Fiber Connection signal, a High Definition Television signal, and a Fast Ethernet signal. 6. The method according to claim 1, wherein the step of multiplexing the low-rate ODU to the ODUk is implemented by adopting an asynchronous multiplexing mode or a synchronous multiplexing mode. 7. An apparatus for transmitting a signal in an Optical Transport Network (OTN), comprising: a processor and a computer readable medium having a plurality of computer executable instructions stored thereon which, when executed by the processor, cause the processor to implement: mapping the signal to a low-rate Optical channel Payload Unit (OPU) of a low-rate Optical channel Data Unit (ODU), wherein the low-rate ODU comprises an ODU overhead section and the low-rate OPU, the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps ±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps ±20 ppm; generating OPU overhead bytes and filling the OPU overhead bytes into the OPU overhead section of the low-rate OPU; generating ODU overhead bytes and filling the ODU overhead bytes into the ODU overhead section of the low-rate ODU; multiplexing the low-rate ODU to an Optical channel Data Unit-k (ODUk), wherein the ODUk has a bit rate higher than the bit rate of the
low-rate ODU; and transmitting the ODUk via the OTN. 8. The apparatus according to claim 7, wherein the low-rate ODU has a size of 4×3,824 bytes, and the OPU payload section has a size of 4×3,808 bytes. 9. The apparatus according to claim 7, wherein the multiplexing of the low-rate ODU to the ODUk comprises: mapping the low-rate ODU into a payload section of an ODTU; mapping the payload section of the ODTU into the payload section of the OPUk of the ODUk, and adding justification overhead bytes to the overhead section of the OPUk of the ODUk; wherein the ODTU occupies one time slot of the OPUk. 10. The apparatus according to claim 7, wherein the OPU overhead bytes comprise payload type bytes that are used to indicate the type of the signal. 11. The apparatus according to claim 7, wherein the signal is one of the following signals: a Gigabit Ethernet signal, a Fiber Connection signal, a High Definition Television signal, and a Fast Ethernet signal. 12. The apparatus according to claim 7, wherein the multiplexing of the low-rate ODU to the ODUk is implemented by adopting an asynchronous multiplexing mode or a synchronous multiplexing mode. 13. A method for recovering a signal from an Optical channel Data Unit-k (ODUk) in an Optical Transport Network (OTN), comprising: de-multiplexing a low-rate Optical channel Data Unit (ODU) from the ODUk, wherein the ODUk has a bit rate higher than the bit rate of the low-rate ODU, the low-rate ODU comprises an ODU overhead section and a low-rate Optical channel Payload Unit (OPU), the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps ±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps ±20 ppm; obtaining OPU overhead bytes in the OPU overhead section of the low-rate OPU; and de-mapping to recover the signal from the OPU payload section of the OPU according to the OPU overhead bytes. 14.
The method according to claim 13, wherein the low-rate ODU has a size of 4×3,824 bytes, and the OPU payload section has a size of 4×3,808 bytes. 15. The method according to claim 13, wherein the low-rate ODU occupies one time slot of the OPUk of the ODUk. 16. The method according to claim 13, wherein the signal is one of the following signals: a Gigabit Ethernet signal, a Fiber Connection signal, a High Definition Television signal, and a Fast Ethernet signal. 17. An apparatus for recovering a signal from an Optical channel Data Unit-k (ODUk) in an Optical Transport Network (OTN), comprising: a processor and a computer readable medium having a plurality of computer executable instructions stored thereon which, when executed by the processor, cause the processor to implement: de-multiplexing a low-rate Optical channel Data Unit (ODU) from the ODUk, wherein the ODUk has a bit rate higher than the bit rate of the low-rate ODU, the low-rate ODU comprises an ODU overhead section and a low-rate Optical channel Payload Unit (OPU), the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps ±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps ±20 ppm; obtaining OPU overhead bytes in the OPU overhead section of the low-rate OPU; and de-mapping to recover the signal from the OPU payload section of the OPU according to the OPU overhead bytes. 18. The apparatus according to claim 17, wherein the low-rate ODU has a size of 4×3,824 bytes, and the OPU payload section has a size of 4×3,808 bytes. 19. The apparatus according to claim 17, wherein the low-rate ODU occupies one time slot of the OPUk of the ODUk. 20. The apparatus according to claim 17, wherein the signal is one of the following signals: a Gigabit Ethernet signal, a Fiber Connection signal, a High Definition Television signal, and a Fast Ethernet signal. 21.
A system for communication in an Optical Transport Network (OTN), comprising: a first apparatus, configured to: map a client signal to a low-rate Optical channel Payload Unit (OPU) of a low-rate Optical channel Data Unit (ODU), wherein the low-rate ODU comprises an ODU overhead section and the low-rate OPU, the low-rate OPU comprises an OPU overhead section and an OPU payload section, the low-rate ODU has a bit rate of 1,244,160 Kbps ±20 ppm, and the OPU payload section has a bit rate of 1,238,954.31 Kbps ±20 ppm; generate OPU overhead bytes and fill the OPU overhead bytes into the OPU overhead section of the low-rate OPU; generate ODU overhead bytes and fill the ODU overhead bytes into the ODU overhead section of the low-rate ODU; multiplex the low-rate ODU to an Optical channel Data Unit-k (ODUk), wherein the ODUk has a bit rate higher than the bit rate of the low-rate ODU; and transmit the ODUk via the OTN; and a second apparatus, configured to: receive the ODUk; de-multiplex the low-rate ODU from the ODUk; obtain OPU overhead bytes in the OPU overhead section of the low-rate OPU; and de-map to recover the client signal from the OPU payload section of the OPU according to the OPU overhead bytes. 22. The system according to claim 21, wherein the low-rate ODU has a size of 4×3,824 bytes, and the OPU payload section has a size of 4×3,808 bytes. 23. The system according to claim 21, wherein the low-rate ODU occupies one time slot of the OPUk of the ODUk. 24. The system according to claim 21, wherein the client signal is one of the following signals: a Gigabit Ethernet signal, a Fiber Connection signal, a High Definition Television signal, and a Fast Ethernet signal.
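The two bit rates recited in these claims are consistent with the frame geometry of claim 2: a 4×3,824-byte ODU frame whose OPU payload area is 4×3,808 bytes. A quick arithmetic check (the constants come from the claims; the scaling step assumes the payload bytes are a fixed fraction of every frame, which is how OTN frames are structured):

```python
# Sketch: verify that the claimed OPU payload rate follows from the
# ODU frame geometry recited in the claims (an illustrative check,
# not part of the patent text itself).

ODU_RATE_KBPS = 1_244_160      # low-rate ODU bit rate (±20 ppm)
ODU_COLUMNS = 3824             # bytes per row of the 4-row ODU frame
OPU_PAYLOAD_COLUMNS = 3808     # bytes per row of the OPU payload area

# The payload carries 3808 of every 3824 bytes, so its rate is the
# ODU rate scaled by that fraction.
payload_rate = ODU_RATE_KBPS * OPU_PAYLOAD_COLUMNS / ODU_COLUMNS
print(round(payload_rate, 2))  # 1238954.31, matching the claimed rate

# The ±20 ppm tolerance expressed in absolute terms
tolerance = ODU_RATE_KBPS * 20e-6
print(round(tolerance, 3))     # 24.883 Kbps
```

The ratio 3808/3824 applied to 1,244,160 Kbps reproduces the 1,238,954.31 Kbps payload rate exactly as recited, which is a useful sanity check when reading the two figures side by side.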
2,600
10,936
10,936
15,636,346
2,658
Methods for providing enhanced services to users participating in communication sessions (CS), via a virtual assistant, are disclosed. One method receives content that is exchanged by users participating in the CS. The content includes natural language expressions that encode a conversation carried out by users. The method determines content features based on natural language models. The content features indicate intended semantics of the natural language expressions. The method determines a relevance of the content and identifies portions of the content that are likely relevant to the user. Determining the relevance is based on the content features, a context of the CS, a user-interest model, and a content-relevance model of the natural language models. Identifying the likely relevant content is based on the determined relevance of the content and a relevance threshold. A summary of the CS is automatically generated from summarized versions of the likely relevant portions of the content.
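The filter-then-summarize flow the abstract describes can be sketched minimally as follows. The scores, threshold value, and the truncation-based summarizer are invented placeholders standing in for the patent's trained content-relevance and natural language models:

```python
# Sketch of the relevance filtering described above: score each content
# portion, keep the ones above a "likely relevant" threshold, and build
# a summary from the survivors. All values below are illustrative.

def filter_relevant(portions, scores, threshold=0.5):
    """Return the portions whose relevance score meets the threshold."""
    return [p for p, s in zip(portions, scores) if s >= threshold]

def summarize(portions):
    """Stand-in summarizer: truncate each portion to its first clause."""
    return " ".join(p.split(",")[0] + "." for p in portions)

portions = [
    "we agreed to ship the beta on Friday, pending QA signoff",
    "someone asked about lunch options",
    "Alice will own the migration script, starting Monday",
]
scores = [0.9, 0.1, 0.7]  # pretend output of a content-relevance model

relevant = filter_relevant(portions, scores)
print(summarize(relevant))
```

Only the two substantive portions survive the threshold, and the generated summary is built from their shortened versions, mirroring the "summarized versions of the likely relevant portions" step in the abstract.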
1. A computerized system comprising: one or more processors; and computer storage memory having computer-executable instructions stored thereon which, when executed by the one or more processors, implement a method comprising: receiving content that is exchanged within a communication session (CS), wherein the content includes one or more natural language expressions that encode a portion of a conversation carried out by a plurality of users participating in the CS; determining one or more content features based on the content and one or more natural language models, wherein the one or more content features indicate one or more intended semantics of the one or more natural language expressions; determining a relevance of the content based on the content features, a user-interest model for a first user of the plurality of users, and a content-relevance model for the first user; identifying one or more portions of the content based on the relevance of the content and one or more relevance thresholds, wherein the one or more identified portions of the content are likely relevant to the first user; and generating a summary of the CS based on the one or more likely relevant portions of the content. 2. The system of claim 1, wherein the method further comprises: monitoring user activity of the first user; identifying one or more user-activity patterns based on the monitored user activity; and generating the user-interest model based on the one or more user-activity patterns. 3. The system of claim 1, wherein the method further comprises: receiving metadata associated with the CS; determining one or more contextual features of the CS based on the received metadata and a CS context model, wherein the one or more contextual features indicate a context of the conversation for the first user; and determining the relevance of the content further based on the one or more contextual features of the CS. 4. 
The system of claim 1, wherein the method further comprises: identifying a sub-portion of the one or more portions of the content based on the relevance of the content and an additional relevance threshold, wherein the identified sub-portion of the one or more portions of the content is highly relevant to the first user; and providing a real-time notification of the identified highly relevant content to the first user. 5. The system of claim 1, wherein the method further comprises: determining one or more content-substance features based on the content and a content-substance model included in the one or more natural language models, wherein the one or more content-substance features indicate one or more topics discussed in the conversation; determining one or more content-style features based on the content and a content-style model included in the one or more natural language models, wherein the one or more content-style features indicate an emotion of at least one of the plurality of the users; and determining the relevance of the content further based on the one or more content-substance features and the one or more content-style features. 6. The system of claim 1, wherein the method further comprises: generating a summarized version of at least one of the one or more likely relevant portions of the content based on the one or more natural language models; generating the summary of the CS such that the summary includes the summarized version of the at least one of the one or more likely relevant portions of the content; and providing the summary of the CS to the first user. 7. The system of claim 1, wherein the method further comprises: receiving another summary of the CS from the first user; generating a comparison of the summary of the CS and the other summary of the CS; and updating the content-relevance model based on the comparison of the summary of the CS and the other summary of the CS. 8.
A method comprising: receiving content that is exchanged within a communication session (CS), wherein the content includes one or more natural language expressions that encode a conversation carried out by a plurality of users participating in the CS; determining one or more content features based on the content and one or more natural language models, wherein the one or more content features indicate one or more intended semantics of the one or more natural language expressions; determining a relevance of the content based on the content features, a user-interest model for a first user of the plurality of users, and a content-relevance model for the first user; identifying one or more portions of the content based on the relevance of the content and one or more relevance thresholds, wherein the one or more identified portions of the content are likely relevant to the first user; and generating a summary of the CS based on the one or more likely relevant portions of the content. 9. The method of claim 8, further comprising: monitoring user activity of the first user; identifying one or more user-activity patterns based on the monitored user activity; and generating the user-interest model based on the one or more user-activity patterns. 10. The method of claim 8, further comprising: receiving other content; determining one or more content features of the other content based on the one or more natural language models; querying a knowledge graph based on the one or more content features of the other content; and generating the content-relevance model based on a result of the query of the knowledge graph. 11. 
The method of claim 8, further comprising: identifying a sub-portion of the one or more portions of the content based on the relevance of the content and an additional relevance threshold, wherein the identified sub-portion of the one or more portions of the content is highly relevant to the first user; and providing a real-time notification of the identified highly relevant content to the first user. 12. The method of claim 8, further comprising: determining one or more content-substance features based on the content and a content-substance model included in the one or more natural language models, wherein the one or more content-substance features indicate one or more topics discussed in the conversation; determining one or more content-style features based on the content and a content-style model included in the one or more natural language models, wherein the one or more content-style features indicate an emotion of at least one of the plurality of the users; and determining the relevance of the content further based on the one or more content-substance features and the one or more content-style features. 13. The method of claim 8, further comprising: generating a summarized version of at least one of the one or more likely relevant portions of the content based on the one or more natural language models; generating the summary of the CS such that the summary includes the summarized version of the at least one of the one or more likely relevant portions of the content; and providing the summary of the CS to the first user. 14. The method of claim 8, further comprising: receiving another summary of the CS from the first user; generating a comparison of the summary of the CS and the other summary of the CS; and updating the content-relevance model based on the comparison of the summary of the CS and the other summary of the CS. 15. 
One or more computer-readable media having instructions stored thereon, wherein the instructions, when executed by a processor of a computing device, cause the computing device to perform actions including: receiving content that is exchanged within a communication session (CS), wherein the content includes one or more natural language expressions that encode a conversation carried out by a plurality of users participating in the CS; determining one or more content features based on the content and one or more natural language models, wherein the one or more content features indicate one or more intended semantics of the one or more natural language expressions; determining a relevance of the content based on the content features, a user-interest model for a first user of the plurality of users, and a content-relevance model for the first user; identifying one or more portions of the content based on the relevance of the content and one or more relevance thresholds, wherein the one or more identified portions of the content are likely relevant to the first user; and generating a summary of the CS based on the one or more likely relevant portions of the content. 16. The media of claim 15, the actions further comprising: monitoring user activity of the first user; identifying one or more user-activity patterns based on the monitored user activity; and generating the user-interest model based on the one or more user-activity patterns. 17. The media of claim 15, wherein the actions further comprise: receiving metadata associated with the CS; determining one or more contextual features of the CS based on the received metadata and a CS context model, wherein the one or more contextual features indicate a context of the conversation for the first user; and determining the relevance of the content further based on the one or more contextual features of the CS. 18. 
The media of claim 15, wherein the actions further comprise: identifying a sub-portion of the one or more portions of the content based on the relevance of the content and an additional relevance threshold, wherein the identified sub-portion of the one or more portions of the content is highly relevant to the first user; and providing a real-time notification of the identified highly relevant content to the first user. 19. The media of claim 15, wherein the actions further comprise: determining one or more content-substance features based on the content and a content-substance model included in the one or more natural language models, wherein the one or more content-substance features indicate one or more topics discussed in the conversation; determining one or more content-style features based on the content and a content-style model included in the one or more natural language models, wherein the one or more content-style features indicate an emotion of at least one of the plurality of the users; and determining the relevance of the content further based on the one or more content-substance features and the one or more content-style features. 20. The media of claim 15, wherein the actions further comprise: receiving other content; determining one or more content features of the other content based on the one or more natural language models; querying a concept map based on the one or more content features of the other content; and generating the content-relevance model based on a result of the query of the concept map.
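Claims 4, 11, and 18 layer a second, stricter threshold on top of the first: portions that clear it are "highly relevant" and trigger a real-time notification rather than merely appearing in the generated summary. A minimal sketch, with assumed threshold values and scores:

```python
# Sketch of the two-threshold scheme in claims 4, 11, and 18: a lower
# threshold selects "likely relevant" portions for the summary, while a
# higher one flags "highly relevant" portions for real-time notification.
# Thresholds and scores below are illustrative assumptions.

LIKELY_THRESHOLD = 0.5
HIGHLY_THRESHOLD = 0.85

def triage(scored_portions):
    """Split scored portions into summary candidates and notification triggers."""
    likely = [p for p, s in scored_portions if s >= LIKELY_THRESHOLD]
    highly = [p for p, s in scored_portions if s >= HIGHLY_THRESHOLD]
    return likely, highly

scored = [
    ("deadline moved to June 3", 0.92),
    ("weather small talk", 0.05),
    ("budget discussion", 0.6),
]

likely, highly = triage(scored)
print(likely)  # both substantive portions go into the summary
print(highly)  # only the deadline change warrants an immediate notification
```

Because the highly relevant set is by construction a subset of the likely relevant set, a reader can see why the claims describe it as a "sub-portion of the one or more portions of the content."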
2,600
10,937
10,937
16,401,349
2,659
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for classification using neural networks. One method includes receiving audio data corresponding to an utterance; obtaining a transcription of the utterance; generating a representation of the audio data; generating a representation of the transcription of the utterance; providing (i) the representation of the audio data and (ii) the representation of the transcription of the utterance to a classifier that, based on a given representation of the audio data and a given representation of the transcription of the utterance, is trained to output an indication of whether the utterance associated with the given representation is likely directed to an automated assistant or is likely not directed to an automated assistant; receiving, from the classifier, an indication of whether the utterance corresponding to the received audio data is likely directed to the automated assistant or is likely not directed to the automated assistant; and selectively instructing the automated assistant based at least on the indication of whether the utterance corresponding to the received audio data is likely directed to the automated assistant or is likely not directed to the automated assistant.
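The gating step the abstract describes can be sketched as follows: a classifier consumes a representation of the audio and a representation of the transcription, and the assistant responds only when the utterance is classified as directed at it. The feature fusion below (a lexical cue plus an acoustic-clarity cue) is a toy stand-in for the trained neural-network classifier; the field names and hotword list are assumptions for illustration.

```python
# Illustrative sketch: fuse an audio-derived feature with a transcription-
# derived feature to gate whether the assistant should respond at all.

def is_directed_at_assistant(audio_features, transcript,
                             hotwords=("assistant", "hey")):
    # Lexical cue from the transcription representation.
    lexical = any(w in transcript.lower().split() for w in hotwords)
    # Acoustic cue from the audio representation (assumed feature name).
    acoustic = audio_features.get("speech_clarity", 0.0) > 0.5
    return lexical and acoustic

def maybe_respond(audio_features, transcript):
    """Respond only to utterances classified as assistant-directed."""
    if is_directed_at_assistant(audio_features, transcript):
        return f"Responding to: {transcript}"
    return None  # side conversation: generate no response
```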
1. (canceled) 2. A computer-implemented method comprising: obtaining, by an automated assistant device, audio data that corresponds to an utterance directed at the automated assistant device and an utterance that was not directed at the automated assistant device; and generating a response to the utterance that was directed at the automated assistant device and not generating a response to the utterance that was not directed at the automated assistant device. 3. The method of claim 2, wherein the response to the utterance that was directed at the automated assistant device is generated based on a transcription of the utterance that was directed at the automated assistant device. 4. The method of claim 2, wherein the response to the utterance that was directed at the automated assistant device is generated based on a determination that the utterance was likely directed at the automated assistant device. 5. The method of claim 4, wherein the determination that the utterance was likely directed at the automated assistant device is based on the audio data. 6. The method of claim 2, wherein the audio data represents audio captured by a microphone of the automated assistant device. 7. The method of claim 2, wherein the utterance directed at the automated assistant device was completely spoken before the utterance that was not directed at the automated assistant device started being spoken. 8. The method of claim 2, wherein the automated assistant device comprises a neural network that is trained to determine how likely audio data corresponds to an utterance directed at the automated assistant device. 9. 
A system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: obtaining, by an automated assistant device, audio data that corresponds to an utterance directed at the automated assistant device and an utterance that was not directed at the automated assistant device; and generating a response to the utterance that was directed at the automated assistant device and not generating a response to the utterance that was not directed at the automated assistant device. 10. The system of claim 9, wherein the response to the utterance that was directed at the automated assistant device is generated based on a transcription of the utterance that was directed at the automated assistant device. 11. The system of claim 9, wherein the response to the utterance that was directed at the automated assistant device is generated based on a determination that the utterance was likely directed at the automated assistant device. 12. The system of claim 11, wherein the determination that the utterance was likely directed at the automated assistant device is based on the audio data. 13. The system of claim 9, wherein the audio data represents audio captured by a microphone of the automated assistant device. 14. The system of claim 9, wherein the utterance directed at the automated assistant device was completely spoken before the utterance that was not directed at the automated assistant device started being spoken. 15. The system of claim 9, wherein the automated assistant device comprises a neural network that is trained to determine how likely audio data corresponds to an utterance directed at the automated assistant device. 16. 
A non-transitory computer-readable medium storing software comprising instructions executable by one or more computers which, upon such execution, cause the one or more computers to perform operations comprising: obtaining, by an automated assistant device, audio data that corresponds to an utterance directed at the automated assistant device and an utterance that was not directed at the automated assistant device; and generating a response to the utterance that was directed at the automated assistant device and not generating a response to the utterance that was not directed at the automated assistant device. 17. The medium of claim 16, wherein the response to the utterance that was directed at the automated assistant device is generated based on a transcription of the utterance that was directed at the automated assistant device. 18. The medium of claim 16, wherein the response to the utterance that was directed at the automated assistant device is generated based on a determination that the utterance was likely directed at the automated assistant device. 19. The medium of claim 18, wherein the determination that the utterance was likely directed at the automated assistant device is based on the audio data. 20. The medium of claim 16, wherein the audio data represents audio captured by a microphone of the automated assistant device. 21. The medium of claim 16, wherein the utterance directed at the automated assistant device was completely spoken before the utterance that was not directed at the automated assistant device started being spoken.
2,600
10,938
10,938
15,465,044
2,642
An apparatus obtains results of measurements of a mobile device on radio signals transmitted by a radio transmitter at a site. The results include at least an identifier of the radio transmitter and an indication of a received signal strength of the radio signals. A plurality of radio transmitters are distributed at the site. The apparatus compares the received signal strength with a threshold. If the received signal strength exceeds the threshold, the apparatus causes a notification of a user of the mobile device, obtains an indication of a location based on a user input identifying a location on a map of the site, the map presented on a display of the mobile device, and causes storage of the identifier of the radio transmitter and the indication of the location as approximate location of the radio transmitter.
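The abstract's flow can be sketched directly: when a measured signal strength exceeds the threshold, notify the user, collect a map tap, and store the tapped position as the transmitter's approximate location. The field names, callback shape, and threshold value below are illustrative assumptions, not the claimed apparatus.

```python
# Minimal sketch of the claimed crowdsourced transmitter-location flow.
RSSI_THRESHOLD_DBM = -55  # assumed "device is very close to transmitter" level

def record_transmitter_location(measurement, prompt_user, database):
    """measurement: dict with 'tx_id' and 'rssi_dbm'.
    prompt_user: callback that notifies the user and returns the (x, y)
    position tapped on the site map, or None if the user declines.
    Returns True if an approximate location was stored."""
    if measurement["rssi_dbm"] <= RSSI_THRESHOLD_DBM:
        return False  # too weak: device is not close enough to localize the tx
    location = prompt_user(measurement["tx_id"])  # notify + map tap
    if location is None:
        return False
    # Store identifier, tapped location, and signal strength together.
    database[measurement["tx_id"]] = {"location": location,
                                      "rssi_dbm": measurement["rssi_dbm"]}
    return True
```

Storing the RSSI alongside the location (as in dependent claim 3) lets later entries with stronger readings refine the estimate.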
1. A method performed by at least one apparatus, the method comprising: obtaining results of measurements of a mobile device on radio signals transmitted by a radio transmitter at a site, the results comprising at least an identifier of the radio transmitter and an indication of a received signal strength of the radio signals, wherein a plurality of radio transmitters are distributed at the site; comparing the received signal strength with a threshold; in response to the received signal strength exceeding the threshold, causing a notification of a user of the mobile device; in response to the received signal strength exceeding the threshold, obtaining an indication of a location based on a user input identifying a location on a map of the site, the map presented on a display of the mobile device; and causing storage of the identifier of the radio transmitter and the indication of the location as approximate location of the radio transmitter. 2. The method according to claim 1, further comprising: causing storage of the identifier of the radio transmitter and the indication of the location in a database configured to store identifiers of a plurality of radio transmitters and a respectively associated indication of a location. 3. The method according to claim 1, further comprising: causing storage of the indication of the received signal strength along with the location and the identifier of the radio transmitter. 4. 
The method according to claim 1, wherein the notification of a user is caused only in case the received signal strength of radio signals of a single radio transmitter is determined to exceed a threshold at a current location of the mobile device; or only in case the received signal strength of radio signals of at least one radio transmitter of a single entity is determined to exceed a threshold at a current location of the mobile device; or in case the received signal strength of radio signals of at least one radio transmitter is determined to exceed a threshold at a current location of the mobile device. 5. The method according to claim 1, wherein the map of the site is caused to be presented to a user with a segmentation into sub-areas. 6. The method according to claim 5, wherein: sub-areas, for which an approximate location of at least one radio transmitter is still required, are pointed out on the display; and/or sub-areas, for which a predetermined number of approximate locations of radio transmitters has already been stored, are pointed out on the display. 7. The method according to claim 1, wherein the stored identifier of the radio transmitter and the stored indication of the location are made available for a tracking of mobile devices at the site; or for a calibration of a sensor-based tracking of mobile devices collecting fingerprints at the site. 8. The method according to claim 1, wherein the at least one apparatus comprises one of the mobile device; or a component of the mobile device; or a server receiving results of measurements from at least one mobile device; or a component of a server receiving results of measurements from at least one mobile device. 9. 
The method according to claim 1, wherein the identifier of the radio transmitter and the indication of the location are stored in a memory as a part of a plurality of identifiers of radio transmitters and associated indications of locations for the site, the method further comprising at a further mobile device: receiving the plurality of identifiers of radio transmitters and associated indications of locations stored for the site; calibrating a position of the further mobile device using an indication of a location associated with an identifier of a radio transmitter, if radio signals of the radio transmitter are received with a signal strength exceeding a threshold; and using the calibrated position for a sensor-based tracking of the further mobile device. 10. The method according to claim 9, further comprising at the further mobile device: assembling fingerprints, each fingerprint comprising results of measurements on radio signals at a respective location of measurement at the site and an indication of the respective location of measurement, wherein the respective location of measurement is based on the sensor-based tracking of the further mobile device; and transmitting assembled fingerprints to a server. 11. 
An apparatus comprising at least one processor and at least one memory, wherein the at least one memory includes computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause a device at least to: obtain results of measurements of a mobile device on radio signals transmitted by a radio transmitter at a site, the results comprising at least an identifier of the radio transmitter and an indication of a received signal strength of the radio signals, wherein a plurality of radio transmitters are distributed at the site; compare the received signal strength with a threshold; and in response to the received signal strength exceeding the threshold, cause a notification of a user of the mobile device; in response to the received signal strength exceeding the threshold, obtain an indication of a location based on a user input identifying a location on a map of the site, the map presented on a display of the mobile device; and cause storage of the identifier of the radio transmitter and the indication of the location as approximate location of the radio transmitter. 12. The apparatus according to claim 11, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the device to cause storage of the identifier of the radio transmitter and the indication of the location in a database configured to store identifiers of a plurality of radio transmitters and a respectively associated indication of a location. 13. The apparatus according to claim 11, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the device to cause storage of the indication of the received signal strength along with the location and the identifier of the radio transmitter. 14. 
The apparatus according to claim 11, wherein the notification of a user is caused only in case the received signal strength of radio signals of a single radio transmitter is determined to exceed a threshold at a current location of the mobile device; or only in case the received signal strength of radio signals of at least one radio transmitter of a single entity is determined to exceed a threshold at a current location of the mobile device; or in case the received signal strength of radio signals of at least one radio transmitter is determined to exceed a threshold at a current location of the mobile device. 15. The apparatus according to claim 11, wherein the map of the site is caused to be presented to a user with a segmentation into sub-areas. 16. The apparatus according to claim 15, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the device to: point out sub-areas, for which an approximate location of at least one radio transmitter is still required, on the display; and/or point out sub-areas, for which a predetermined number of approximate locations of radio transmitters has already been stored, on the display. 17. The apparatus according to claim 11, wherein the stored identifier of the radio transmitter and the stored indication of the location are made available for a tracking of mobile devices at the site; or for a calibration of a sensor-based tracking of mobile devices collecting fingerprints at the site. 18. The apparatus according to claim 11, wherein the apparatus is one of: a module for a mobile device; or a mobile device; or a module for a server; or a server. 19. 
A system comprising an apparatus, the apparatus comprising at least one processor and at least one memory, wherein the at least one memory includes computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause a device at least to: obtain results of measurements of a mobile device on radio signals transmitted by a radio transmitter at a site, the results comprising at least an identifier of the radio transmitter and an indication of a received signal strength of the radio signals, wherein a plurality of radio transmitters are distributed at the site; compare the received signal strength with a threshold; and in response to the received signal strength exceeding the threshold, cause a notification of a user of the mobile device; in response to the received signal strength exceeding the threshold, obtain an indication of a location based on a user input identifying a location on a map of the site, the map presented on a display of the mobile device; and cause storage of the identifier of the radio transmitter and the indication of the location as approximate location of the radio transmitter; wherein the apparatus is: a server, the system further comprising a memory that is configured to store identifiers of radio transmitters and associated indications of locations and that is accessible to the server; or a server, the system further comprising the mobile device providing results of measurements on radio signals at the site; or a server, the system further comprising a mobile device configured to calibrate its location for a sensor-based tracking using a plurality of identifiers of radio transmitters and associated indications of locations made available by the server for the site; or the mobile device, the system further comprising a server configured to cause storage of identifiers of radio transmitters and associated indications of locations provided by the mobile device; or the mobile device, 
the system further comprising a server configured to cause storage of identifiers of radio transmitters and associated indications of locations provided by the mobile device, and a mobile device configured to calibrate its location for a sensor-based tracking using a plurality of identifiers of radio transmitters and associated indications of locations made available by the server for the site. 20. A non-transitory tangible computer readable storage medium in which computer program code is stored, the computer program code causing a device to perform the following when executed by a processor: obtain results of measurements of a mobile device on radio signals transmitted by a radio transmitter at a site, the results comprising at least an identifier of the radio transmitter and an indication of a received signal strength of the radio signals, wherein a plurality of radio transmitters are distributed at the site; compare the received signal strength with a threshold; in response to the received signal strength exceeding the threshold, cause a notification of a user of the mobile device; in response to the received signal strength exceeding the threshold, obtain an indication of a location based on a user input identifying a location on a map of the site, the map presented on a display of the mobile device; and in response to the received signal strength exceeding the threshold, cause storage of the identifier of the radio transmitter and the indication of the location as approximate location of the radio transmitter.
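The calibration step in claims 9 and 10 can be sketched as well: a further mobile device downloads the stored transmitter-to-location table, dead-reckons its position from motion sensors, and snaps the estimate to a stored transmitter location whenever that transmitter is heard above the threshold. The additive dead-reckoning model and data shapes below are deliberately crude assumptions for illustration.

```python
# Sketch of sensor-based tracking calibrated by known transmitter locations.

def track_with_calibration(steps, observations, tx_locations,
                           threshold_dbm=-55):
    """steps: list of (dx, dy) sensor-derived displacements, one per step.
    observations: per-step dict {tx_id: rssi_dbm} (may be empty).
    tx_locations: stored {tx_id: (x, y)} approximate transmitter locations.
    Returns the list of estimated positions, one per step."""
    x, y = 0.0, 0.0
    track = []
    for (dx, dy), obs in zip(steps, observations):
        x, y = x + dx, y + dy  # sensor-based dead reckoning (drifts over time)
        for tx_id, rssi in obs.items():
            if rssi > threshold_dbm and tx_id in tx_locations:
                # Strong signal: device is near the transmitter, so calibrate.
                x, y = tx_locations[tx_id]
        track.append((x, y))
    return track
```

Fingerprints assembled along the corrected track (claim 10) then carry locations that are periodically re-anchored rather than purely dead-reckoned.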
An apparatus obtains results of measurements of a mobile device on radio signals transmitted by a radio transmitter at a site. The results include at least an identifier of the radio transmitter and an indication of a received signal strength of the radio signals. A plurality of radio transmitters are distributed at the site. The apparatus compares the received signal strength with a threshold. If the received signal strength exceeds the threshold, the apparatus causes a notification of a user of the mobile device, obtains an indication of a location based on a user input identifying a location on a map of the site, the map presented on a display of the mobile device, and causes storage of the identifier of the radio transmitter and the indication of the location as approximate location of the radio transmitter.1. A method performed by at least one apparatus, the method comprising: obtaining results of measurements of a mobile device on radio signals transmitted by a radio transmitter at a site, the results comprising at least an identifier of the radio transmitter and an indication of a received signal strength of the radio signals, wherein a plurality of radio transmitters are distributed at the site; comparing the received signal strength with a threshold; in response to the received signal strength exceeding the threshold, causing a notification of a user of the mobile device; in response to the received signal strength exceeding the threshold, obtaining an indication of a location based on a user input identifying a location on a map of the site, the map presented on a display of the mobile device; and causing storage of the identifier of the radio transmitter and the indication of the location as approximate location of the radio transmitter. 2. 
The method according to claim 1, further comprising: causing storage of the identifier of the radio transmitter and the indication of the location in a database configured to store identifiers of a plurality of radio transmitters and a respectively associated indication of a location. 3. The method according to claim 1, further comprising: causing storage of the indication of the received signal strength along with the location and the identifier of the radio transmitter. 4. The method according to claim 1, wherein the notification of a user is caused only in case the received signal strength of radio signals of a single radio transmitter is determined to exceed a threshold at a current location of the mobile device; or only in case the received signal strength of radio signals of at least one radio transmitter of a single entity is determined to exceed a threshold at a current location of the mobile device; or in case the received signal strength of radio signals of at least one radio transmitter is determined to exceed a threshold at a current location of the mobile device. 5. The method according to claim 1, wherein the map of the site is caused to be presented to a user with a segmentation into sub-areas. 6. The method according to claim 5, wherein: sub-areas, for which an approximate location of at least one radio transmitter is still required, are pointed out on the display; and/or sub-areas, for which a predetermined number of approximate locations of radio transmitters has already been stored, are pointed out on the display. 7. The method according to claim 1, wherein the stored identifier of the radio transmitter and the stored indication of the location are made available for a tracking of mobile devices at the site; or for a calibration of a sensor-based tracking of mobile devices collecting fingerprints at the site. 8. 
The method according to claim 1, wherein the at least one apparatus comprises one of the mobile device; or a component of the mobile device; or a server receiving results of measurements from at least one mobile device; or a component of a server receiving results of measurements from at least one mobile device. 9. The method according to claim 1, wherein the identifier of the radio transmitter and the indication of the location are stored in a memory as a part of a plurality of identifiers of radio transmitters and associated indications of locations for the site, the method further comprising at a further mobile device: receiving the plurality of identifiers of radio transmitters and associated indications of locations stored for the site; calibrating a position of the further mobile device using an indication of a location associated with an identifier of a radio transmitter, if radio signals of the radio transmitter are received with a signal strength exceeding a threshold; and using the calibrated position for a sensor-based tracking of the further mobile device. 10. The method according to claim 9, further comprising at the further mobile device: assembling fingerprints, each fingerprint comprising results of measurements on radio signals at a respective location of measurement at the site and an indication of the respective location of measurement, wherein the respective location of measurement is based on the sensor-based tracking of the further mobile device; and transmitting assembled fingerprints to a server. 11. 
An apparatus comprising at least one processor and at least one memory, wherein the at least one memory includes computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause a device at least to: obtain results of measurements of a mobile device on radio signals transmitted by a radio transmitter at a site, the results comprising at least an identifier of the radio transmitter and an indication of a received signal strength of the radio signals, wherein a plurality of radio transmitters are distributed at the site; compare the received signal strength with a threshold; and in response to the received signal strength exceeding the threshold, cause a notification of a user of the mobile device; in response to the received signal strength exceeding the threshold, obtain an indication of a location based on a user input identifying a location on a map of the site, the map presented on a display of the mobile device; and cause storage of the identifier of the radio transmitter and the indication of the location as approximate location of the radio transmitter. 12. The apparatus according to claim 11, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the device to cause storage of the identifier of the radio transmitter and the indication of the location in a database configured to store identifiers of a plurality of radio transmitters and a respectively associated indication of a location. 13. The apparatus according to claim 11, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the device to cause storage of the indication of the received signal strength along with the location and the identifier of the radio transmitter. 14. 
The apparatus according to claim 11, wherein the notification of a user is caused only in case the received signal strength of radio signals of a single radio transmitter is determined to exceed a threshold at a current location of the mobile device; or only in case the received signal strength of radio signals of at least one radio transmitter of a single entity is determined to exceed a threshold at a current location of the mobile device; or in case the received signal strength of radio signals of at least one radio transmitter is determined to exceed a threshold at a current location of the mobile device. 15. The apparatus according to claim 11, wherein the map of the site is caused to be presented to a user with a segmentation into sub-areas. 16. The apparatus according to claim 15, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the device to: point out sub-areas, for which an approximate location of at least one radio transmitter is still required, on the display; and/or point out sub-areas, for which a predetermined number of approximate locations of radio transmitters has already been stored, on the display. 17. The apparatus according to claim 11, wherein the stored identifier of the radio transmitter and the stored indication of the location are made available for a tracking of mobile devices at the site; or for a calibration of a sensor-based tracking of mobile devices collecting fingerprints at the site. 18. The apparatus according to claim 11, wherein the apparatus is one of: a module for a mobile device; or a mobile device; or a module for a server; or a server. 19. 
A system comprising an apparatus, the apparatus comprising at least one processor and at least one memory, wherein the at least one memory includes computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause a device at least to: obtain results of measurements of a mobile device on radio signals transmitted by a radio transmitter at a site, the results comprising at least an identifier of the radio transmitter and an indication of a received signal strength of the radio signals, wherein a plurality of radio transmitters are distributed at the site; compare the received signal strength with a threshold; and in response to the received signal strength exceeding the threshold, cause a notification of a user of the mobile device; in response to the received signal strength exceeding the threshold, obtain an indication of a location based on a user input identifying a location on a map of the site, the map presented on a display of the mobile device; and cause storage of the identifier of the radio transmitter and the indication of the location as approximate location of the radio transmitter; wherein the apparatus is: a server, the system further comprising a memory that is configured to store identifiers of radio transmitters and associated indications of locations and that is accessible to the server; or a server, the system further comprising the mobile device providing results of measurements on radio signals at the site; or a server, the system further comprising a mobile device configured to calibrate its location for a sensor-based tracking using a plurality of identifiers of radio transmitters and associated indications of locations made available by the server for the site; or the mobile device, the system further comprising a server configured to cause storage of identifiers of radio transmitters and associated indications of locations provided by the mobile device; or the mobile device, 
the system further comprising a server configured to cause storage of identifiers of radio transmitters and associated indications of locations provided by the mobile device, and a mobile device configured to calibrate its location for a sensor-based tracking using a plurality of identifiers of radio transmitters and associated indications of locations made available by the server for the site. 20. A non-transitory tangible computer readable storage medium in which computer program code is stored, the computer program code causing a device to perform the following when executed by a processor: obtain results of measurements of a mobile device on radio signals transmitted by a radio transmitter at a site, the results comprising at least an identifier of the radio transmitter and an indication of a received signal strength of the radio signals, wherein a plurality of radio transmitters are distributed at the site; compare the received signal strength with a threshold; in response to the received signal strength exceeding the threshold, cause a notification of a user of the mobile device; in response to the received signal strength exceeding the threshold, obtain an indication of a location based on a user input identifying a location on a map of the site, the map presented on a display of the mobile device; and in response to the received signal strength exceeding the threshold, cause storage of the identifier of the radio transmitter and the indication of the location as approximate location of the radio transmitter.
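The core flow recited in these claims (measure signal strength, compare with a threshold, notify the user, obtain a map location from user input, store it as the transmitter's approximate location) can be sketched in a few lines. This is a minimal illustrative sketch, not the claimed implementation: the names `Measurement`, `record_transmitter`, and the callback for user input are all hypothetical.

```python
# Hypothetical sketch of the claimed flow: compare a measured signal strength
# against a threshold and, only when it is exceeded, obtain a user-indicated
# map location and store it as the transmitter's approximate location.
from dataclasses import dataclass

@dataclass
class Measurement:
    transmitter_id: str   # identifier of the radio transmitter
    rssi_dbm: float       # indication of received signal strength

def record_transmitter(measurement, threshold_dbm, transmitter_db,
                       prompt_user_for_location):
    """If the received signal strength exceeds the threshold, notify the
    user (abstracted here as the prompt callback), obtain a location from
    user input on a site map, and store it keyed by transmitter id."""
    if measurement.rssi_dbm <= threshold_dbm:
        return None  # signal too weak: no notification, nothing stored
    location = prompt_user_for_location(measurement.transmitter_id)
    transmitter_db[measurement.transmitter_id] = location
    return location

# Usage: a strong beacon triggers storage, a weak one does not.
db = {}
strong = Measurement("beacon-42", rssi_dbm=-55.0)
weak = Measurement("beacon-43", rssi_dbm=-90.0)
record_transmitter(strong, -70.0, db, lambda tid: (12.5, 3.0))
record_transmitter(weak, -70.0, db, lambda tid: (0.0, 0.0))
# db now holds only the approximate location of "beacon-42"
```

The threshold gate is what makes the stored map point a usable approximation: a strong received signal implies the device is close to the transmitter, so the user's current map position is a reasonable proxy for the transmitter's location.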
2,600
10,939
10,939
15,864,232
2,666
A face verification method and apparatus is disclosed. The face verification method includes selecting a current verification mode, from among plural verification modes, to be implemented for the verifying of the face, determining one or more recognizers, from among plural recognizers, based on the selected current verification mode, extracting feature information from information of the face using at least one of the determined one or more recognizers, and indicating whether a verification is successful based on the extracted feature information.
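The pipeline summarized in this abstract (select a verification mode, determine the recognizers for that mode, extract features with them, and indicate success or failure) can be sketched as follows. The mode names, recognizer names, and similarity values are purely illustrative assumptions; the recognizers stand in for the trained neural network models described in the claims.

```python
# Minimal sketch, under assumed names, of: select mode -> determine
# recognizers -> extract feature information -> indicate verification result.
MODE_RECOGNIZERS = {
    "unlock": ["base"],                 # hypothetical mode/recognizer names
    "payment": ["base", "occlusion"],
}

# Each recognizer stands in for a trained model; it returns a
# (feature id, similarity-to-registered-features) pair.
RECOGNIZERS = {
    "base": lambda face: ("base-feat", 0.92),
    "occlusion": lambda face: ("occl-feat", 0.88),
}

def verify_face(face, mode, threshold=0.8):
    names = MODE_RECOGNIZERS[mode]                       # determine recognizers
    features = [RECOGNIZERS[n](face) for n in names]     # extract features
    return all(sim >= threshold for _, sim in features)  # indicate result
```

For example, `verify_face("img", "payment")` runs both recognizers and succeeds only if every similarity clears the threshold, while the "unlock" mode consults a single recognizer.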
1. A processor implemented method of verifying a face, the method comprising: selecting a current verification mode, from among plural verification modes, to be implemented for the verifying of the face; determining one or more recognizers, from among plural recognizers, based on the selected current verification mode; extracting feature information from information of the face using at least one of the determined one or more recognizers; and indicating whether a verification is successful based on the extracted feature information. 2. The method of claim 1, further comprising acquiring a recognizer based on the determination of the one or more recognizers, wherein the extracting of the feature information includes applying image information derived from a determined face region of an image to the acquired recognizer, to generate the extracted feature information. 3. The method of claim 2, further comprising performing a first normalizing of the determined face region of the image to generate the image information in a form predetermined suitable for application to the acquired recognizer, the acquired recognizer being a trained neural network or machine learning model. 4. The method of claim 1, further comprising performing respective normalizations of a determined face region of an image to generate respective image information in respective forms predetermined suitable for application to different acquired recognizers based on the current verification mode, the extracting of the feature information including extracting respective feature information from the respective image information using the different acquired recognizers, and the indicating of whether the verification is successful includes performing the indicating of whether the verification is successful based on at least one of the extracted respective feature information. 5. 
The method of claim 4, wherein at least one of the respective normalizations includes synthesizing image information with respect to an occlusion, or prediction of the occlusion, in the determined face region of the image. 6. The method of claim 4, further comprising performing the verification, including performing a first verification based on a first extracted feature information from among the extracted respective feature information, selectively performing a second verification based on a second extracted feature information from among the extracted respective feature information, and, dependent on whether the second verification is selectively performed, selectively performing a third verification based on the first extracted feature information and the second extracted feature information and determining whether the verification is successful based on a result of the third verification. 7. The method of claim 1, further comprising performing the extracting of the feature information including extracting respective feature information from respective normalizations of a determined face region of an image using different determined recognizers based on the current verification mode, wherein the performing of the extracting of the respective feature information is selectively partially performed, based on the current verification mode, to focus on one of usability and security in the verifying of the face. 8. The method of claim 1, wherein the determining of the one or more recognizers includes determining a plurality of different recognizers for the verification and performing the verification based on a select combination and/or arrangement of the different recognizers dependent on the current verification mode. 9. The method of claim 8, wherein the performing the verification includes selectively performing respective verification processes corresponding to the different recognizers based on the current verification mode. 10. 
The method of claim 9, wherein the respective verification processes each include comparing respective similarities, between differently extracted feature information and corresponding registration information, with corresponding thresholds. 11. The method of claim 9, wherein the plurality of different recognizers include a first neural network model recognizer and a second neural network model recognizer, the first neural network model recognizer and the second neural network model recognizer being characterized as having been respectively trained using one or more different pieces of training data, and wherein the first neural network model recognizer is trained to perform low light face feature extraction for low light condition face verification and the second neural network model recognizer is trained with training data associated with a face region including one or more occlusion regions. 12. The method of claim 8, wherein the extracting of the feature information includes extracting first feature information from image information derived from a determined face region of an image using a first recognizer, wherein the method further includes performing the verification, including: determining a first verification result based on the first feature information; and determining whether to extract second feature information using a second recognizer based on the first verification result and the current verification mode, and wherein the plural recognition modes include at least a first verification mode and a different second verification mode. 13. 
The method of claim 12, wherein the extracting of the feature information includes extracting the second feature information from the image information, or other image information derived from the determined face region of the image or a determined face region of another image, using the second recognizer in response to the determining of whether to extract the second feature information being a determination that the second feature information is to be extracted, and wherein the performing of the verification further includes: determining a second verification result based on the second feature information; and determining whether the verification is successful based on the second verification result. 14. The method of claim 13, wherein, in response to the current verification mode being selected to be the first verification mode, determining that a final verification result is failure in response to the second verification result being failure, and determining that the final verification result is success in response to the second verification result being success. 15. The method of claim 13, wherein, in response to the current verification mode being selected to be the second verification mode, determining that the second feature information is to be extracted in response to the first verification result being success, and determining, in response to the second verification result being success, that a final verification result is success. 16. The method of claim 12, wherein, in response to the current verification mode being selected to be the first verification mode, determining that the second feature information is to be extracted in response to the first verification result being failure, and determining, in response to the first verification result being success, that a final verification result is success and terminating a face verification process. 17. 
The method of claim 12, wherein, in response to the current verification mode being selected to be the second verification mode, determining, in response to the first verification result being failure, that a final verification result is failure and terminating a face verification process, and determining that the second feature information is to be extracted in response to the first verification result being success. 18. The method of claim 12, wherein the determining of the first verification result includes determining the first verification result based on a comparing of the first feature information to first registered feature information previously extracted using the first recognizer. 19. The method of claim 18, further comprising: selectively updating the first registered feature information based on the first feature information and in response to the first verification result being success. 20. The method of claim 12, further comprising determining a second verification result based on a comparing of the second feature information to second registered feature information previously extracted using the second recognizer. 21. The method of claim 20, further comprising: selectively updating the second registered feature information based on the second feature information and in response to the second verification result being success. 22. The method of claim 12, wherein the determining of the first verification result includes: calculating a similarity between the first feature information and first registered feature information previously extracted using the first recognizer; and determining the first verification result based on a result of a comparison between a first threshold and the similarity. 23. 
The method of claim 1, wherein the extracting of the feature information includes: detecting a face region from an input image; detecting facial landmarks from the detected face region; and normalizing the face region based on the detected facial landmarks to generate the information of the face. 24. A non-transitory computer-readable medium storing instructions, which when executed by a processor, cause the processor to implement the method of claim 1. 25. An apparatus for verifying a face, the apparatus comprising: one or more processors configured to: select a current verification mode, from among plural verification modes, to be implemented for the verifying of the face; determine one or more recognizers, from among plural recognizers, based on the selected current verification mode; extract feature information from information of the face using at least one of the determined one or more recognizers; and indicate whether a verification is successful based on the extracted feature information. 26. The apparatus of claim 25, the one or more processors are further configured to acquire a recognizer based on the determination of the one or more recognizers, wherein the one or more processors are configured to apply image information derived from a determined face region of an image to the acquired recognizer, for generating the extracted feature information. 27. The apparatus of claim 26, the one or more processors are further configured to perform a first normalizing of the determined face region of the image to generate the image information in a form predetermined suitable for application to the acquired recognizer, the acquired recognizer being a trained neural network or machine learning model. 28. 
The apparatus of claim 25, the one or more processors are further configured to: perform respective normalizations of a determined face region of an image to generate respective image information in respective forms predetermined suitable for application to different acquired recognizers based on the current verification mode, perform the extracting of the feature information by extracting respective feature information from the respective image information using the different acquired recognizers, and perform the indicating of whether the verification is successful based on at least one of the extracted respective feature information. 29. The apparatus of claim 28, wherein for at least one of the respective normalizations the one or more processors are further configured to synthesize image information with respect to an occlusion, or prediction of the occlusion, in the determined face region of the image. 30. The apparatus of claim 28, wherein the one or more processors perform a first verification based on a first extracted feature information from among the extracted respective feature information, selectively perform a second verification based on a second extracted feature information from among the extracted respective feature information, and, dependent on whether the second verification is selectively performed, selectively perform a third verification based on the first extracted feature information and the second extracted feature information and determine whether the verification is successful based on a result of the third verification. 31. 
The apparatus of claim 25, to perform the extracting of the feature information, the one or more processors are configured to extract respective feature information from respective normalizations of a determined face region of an image using different determined recognizers based on the current verification mode, wherein the performing of the extracting of the respective feature information is selectively partially performed by the one or more processors, based on the current verification mode, to focus on one of usability and security in the verifying of the face. 32. The apparatus of claim 25, wherein the one or more processors are configured to, based on the current verification mode, determine a plurality of different recognizers for the verification and perform the verification based on a select combination and/or arrangement of the different recognizers dependent on the current verification mode. 33. The apparatus of claim 32, wherein, to perform the verification, the one or more processors are configured to selectively perform respective verification processes corresponding to the different recognizers based on the current verification mode. 34. The apparatus of claim 33, wherein the respective verification processes each include comparing respective similarities, between differently extracted feature information and corresponding registration information, with corresponding thresholds by the one or more processors. 35. 
The apparatus of claim 32, wherein the one or more processors are further configured to: extract first feature information from image information derived from a determined face region of an image using a first recognizer, to perform the extracting of the feature information; determine a first verification result based on the first feature information, to perform the verification; and determine whether to extract second feature information using a second recognizer based on the first verification result and the current verification mode, wherein the plural recognition modes include at least a first verification mode and a different second verification mode. 36. The apparatus of claim 35, wherein the one or more processors are further configured to: extract the second feature information from the image information, or other image information derived from the determined face region of the image or a determined face region of another image, using the second recognizer in response to the determining of whether to extract the second feature information being a determination that the second feature information is to be extracted; determine a second verification result based on the second feature information, to further perform the verification; and determine whether the verification is successful based on the second verification result. 37. The apparatus of claim 36, wherein, in response to the current verification mode being selected to be the first verification mode, the one or more processors are configured to perform the verification to: determine that a final verification result is failure in response to the second verification result being failure; and determine that the final verification result is success in response to the second verification result being success. 38. 
The apparatus of claim 36, wherein, in response to the current verification mode being selected to be the second verification mode, the one or more processors are configured to perform the verification to: determine that the second feature information is to be extracted in response to the first verification result being success; and determine, in response to the second verification result being success, that a final verification result is success. 39. The apparatus of claim 35, wherein, in response to the current verification mode being selected to be the first verification mode, the one or more processors are configured to: determine that the second feature information is to be extracted in response to the first verification result being failure; and determine, in response to the first verification result being success, that a final verification result is success and terminate a face verification process. 40. The apparatus of claim 35, wherein, in response to the current verification mode being selected to be the second verification mode, the one or more processors are configured to: determine, in response to the first verification result being failure, that a final verification result is failure and terminate a face verification process; and determine that the second feature information is to be extracted in response to the first verification result being success. 41. The apparatus of claim 35, wherein the one or more processors are configured to determine the first verification result based on a comparing of the first feature information to first registered feature information previously extracted using the first recognizer. 42. The apparatus of claim 41, wherein the one or more processors are configured to selectively update the first registered feature information based on the first feature information and in response to the first verification result being success. 43. 
The apparatus of claim 35, wherein the one or more processors are configured to determine the second verification result based on a comparing of the second feature information to second registered feature information previously extracted using the second recognizer. 44. The apparatus of claim 43, wherein the one or more processors are configured to selectively update the second registered feature information based on the second feature information and in response to the second verification result being success. 45. The apparatus of claim 35, wherein the one or more processors are configured to: calculate a similarity between the first feature information and first registered feature information previously extracted using the first recognizer; and determine the first verification result based on a result of a comparison between a first threshold and the similarity. 46. The apparatus of claim 25, wherein the one or more processors are configured to: detect a face region from an input image; detect facial landmarks from the detected face region; and normalize the face region based on the detected facial landmarks to generate the information of the face. 47. 
A computing apparatus comprising: a first recognizer including a neural network model configured to extract first feature information from an input image including a face region of a user; a second recognizer including another neural network model configured to extract second feature information from the input image, the second feature information being different from the first feature information; and one or more processors configured to determine that a face recognition is successful in response to at least one of a first verification result and a second verification result being success, wherein the first verification result is obtained by comparing the first feature information to first registered feature information and the second verification result is obtained by comparing the second feature information to second registered feature information. 48. The computing apparatus of claim 47, wherein the neural network model of the first recognizer and the other neural network model of the second recognizer are each configured as having been respectively trained based on at least different pieces of training data. 49. The computing apparatus of claim 47, wherein the other neural network model of the second recognizer is configured as having been trained based on training data associated with a training face region including, or predicted to include, an occlusion region. 50. The computing apparatus of claim 47, wherein an image, obtained by substituting a region that is predicted to have an occlusion region in the face region with image information of a corresponding region in an average image, an average value image, or a single color image, is input to the other neural network model of the second recognizer for the extracting of the second feature information. 51. 
The computing apparatus of claim 50, wherein the region that is predicted to have the occlusion region is a region in which an occlusion is predicted to occur in the face region due to any one or any combination of any two or more of glasses, sunglasses, hat, and mask occlusions. 52. The computing apparatus of claim 47, wherein the one or more processors are configured to release a lock mode of the computing apparatus in response to the face recognition being successful, and in response to at least both the first verification result and the second verification result being failures, the one or more processors are configured to maintain the lock mode or to not release the lock mode. 53. A computing apparatus comprising: a first recognizer including a neural network model configured to extract first feature information from an input image including a face region of a user; a second recognizer including another neural network model configured to extract second feature information from the input image, the second feature information being different from the first feature information; and one or more processors configured to determine that a face recognition is failed in response to at least one of a first verification result and a second verification result being failure, wherein the first verification result is obtained by comparing the first feature information to first registered feature information and the second verification result is obtained by comparing the second feature information to second registered feature information. 54. The computing apparatus of claim 53, wherein the neural network model of the first recognizer and the other neural network model of the second recognizer are each configured as having been respectively trained based on at least different pieces of training data. 55. 
The computing apparatus of claim 53, wherein the other neural network model of the second recognizer is configured as having been trained based on training data associated with a training face region including, or predicted to include, an occlusion region. 56. The computing apparatus of claim 53, wherein an image, obtained by substituting a region that is predicted to have an occlusion region in the face region with image information of a corresponding region in an average image, an average value image, or a single color image, is input to the other neural network model of the second recognizer for the extracting of the second feature information. 57. The computing apparatus of claim 56, wherein the region that is predicted to have the occlusion region is a region in which an occlusion is predicted to occur in the face region due to any one or any combination of any two or more of glasses, sunglasses, hat, and mask occlusions. 58. The computing apparatus of claim 53, wherein, in response to the face verification being failed, the one or more processors are configured to determine that a verification result is failure in a payment service or a financial service, and in response to at least both the first verification result and the second verification result being successes, the one or more processors are configured to determine that the verification result is successful in the payment service or the financial service. 59. The computing apparatus of claim 58, wherein the computing apparatus further comprises a transceiver, and wherein, in response to the determination that the verification result is successful in the payment service or the financial service, the one or more processors are configured to control the transceiver to transmit payment information to an external terminal, configured to perform a financial transaction according to the financial service, and/or configured to provide a user interface with financial information according to the financial service. 
60. A computing apparatus comprising: a first recognizer including a neural network model configured to extract first feature information from an input image including a face region of a user; a second recognizer including another neural network model configured to extract second feature information from the input image, the second feature information being different from the first feature information; and one or more processors configured to determine a current verification mode, from among at least a first verification mode and a second verification mode, to be implemented to verify the user, the one or more processors being further configured, in response to the current verification mode being determined to be the first verification mode, to determine that a face recognition is successful in response to at least one of a first verification result and a second verification result being success, wherein the first verification result is obtained by comparing the first feature information to first registered feature information and the second verification result is obtained by comparing the second feature information to second registered feature information, and the one or more processors being further configured, in response to the current verification mode being determined to be the second verification mode, to determine that a face recognition is failed in response to at least one of the first verification result and the second verification result being failure. 61. The computing apparatus of claim 60, wherein the first verification mode is a predetermined unlock verification mode, determined to be implemented by the one or more processors when the user attempts to unlock a user interface of the computing apparatus, or controlled to be automatically implemented by the one or more processors when the computing apparatus is in a locked state. 62. 
The computing apparatus of claim 60, wherein the second verification mode is a predetermined payment verification mode, determined to be implemented by the one or more processors when the user accesses or selects a payment service of the computing apparatus.
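The mode-dependent decision rule described in claims 60-62 (an unlock mode succeeds when either recognizer's result is a success; a payment mode fails when either result is a failure) can be sketched in Python as follows. The function and mode names are illustrative assumptions, not language from the claims:

```python
# Hypothetical sketch of the combined decision rule of claims 60-62.
# Mode names are illustrative; the claims call them the "first" (unlock)
# and "second" (payment) verification modes.

UNLOCK_MODE = "unlock"    # first verification mode (usability-focused)
PAYMENT_MODE = "payment"  # second verification mode (security-focused)

def face_recognition_decision(mode: str, first_ok: bool, second_ok: bool) -> bool:
    """Combine the two recognizers' verification results per the current mode."""
    if mode == UNLOCK_MODE:
        # Success when at least one of the two results is a success.
        return first_ok or second_ok
    if mode == PAYMENT_MODE:
        # Failure when at least one result is a failure, i.e. success
        # only when both recognizers succeed.
        return first_ok and second_ok
    raise ValueError(f"unknown verification mode: {mode}")
```

The asymmetry is the point of the claims: the OR combination favors usability (unlocking a device), while the AND combination favors security (authorizing a payment).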
A face verification method and apparatus are disclosed. The face verification method includes selecting a current verification mode, from among plural verification modes, to be implemented for the verifying of the face, determining one or more recognizers, from among plural recognizers, based on the selected current verification mode, extracting feature information from information of the face using at least one of the determined one or more recognizers, and indicating whether a verification is successful based on the extracted feature information. 1. A processor implemented method of verifying a face, the method comprising: selecting a current verification mode, from among plural verification modes, to be implemented for the verifying of the face; determining one or more recognizers, from among plural recognizers, based on the selected current verification mode; extracting feature information from information of the face using at least one of the determined one or more recognizers; and indicating whether a verification is successful based on the extracted feature information. 2. The method of claim 1, further comprising acquiring a recognizer based on the determination of the one or more recognizers, wherein the extracting of the feature information includes applying image information derived from a determined face region of an image to the acquired recognizer, to generate the extracted feature information. 3. The method of claim 2, further comprising performing a first normalizing of the determined face region of the image to generate the image information in a form predetermined suitable for application to the acquired recognizer, the acquired recognizer being a trained neural network or machine learning model. 4. 
The method of claim 1, further comprising performing respective normalizations of a determined face region of an image to generate respective image information in respective forms predetermined suitable for application to different acquired recognizers based on the current verification mode, the extracting of the feature information including extracting respective feature information from the respective image information using the different acquired recognizers, and the indicating of whether the verification is successful includes performing the indicating of whether the verification is successful based on at least one of the extracted respective feature information. 5. The method of claim 4, wherein at least one of the respective normalizations includes synthesizing image information with respect to an occlusion, or prediction of the occlusion, in the determined face region of the image. 6. The method of claim 4, further comprising performing the verification, including performing a first verification based on a first extracted feature information from among the extracted respective feature information, selectively performing a second verification based on a second extracted feature information from among the extracted respective feature information, and, dependent on whether the second verification is selectively performed, selectively performing a third verification based on the first extracted feature information and the second extracted feature information and determining whether the verification is successful based on a result of the third verification. 7. 
The method of claim 1, further comprising performing the extracting of the feature information including extracting respective feature information from respective normalizations of a determined face region of an image using different determined recognizers based on the current verification mode, wherein the performing of the extracting of the respective feature information is selectively partially performed, based on the current verification mode, to focus on one of usability and security in the verifying of the face. 8. The method of claim 1, wherein the determining of the one or more recognizers includes determining a plurality of different recognizers for the verification and performing the verification based on a select combination and/or arrangement of the different recognizers dependent on the current verification mode. 9. The method of claim 8, wherein the performing the verification includes selectively performing respective verification processes corresponding to the different recognizers based on the current verification mode. 10. The method of claim 9, wherein the respective verification processes each include comparing respective similarities, between differently extracted feature information and corresponding registration information, with corresponding thresholds. 11. The method of claim 9, wherein the plurality of different recognizers include a first neural network model recognizer and a second neural network model recognizer, the first neural network model recognizer and the second neural network model recognizer being characterized as having been respectively trained using one or more different pieces of training data, and wherein the first neural network model recognizer is trained to perform low light face feature extraction for low light condition face verification and the second neural network model recognizer is trained with training data associated with a face region including one or more occlusion regions. 12. 
The method of claim 8, wherein the extracting of the feature information includes extracting first feature information from image information derived from a determined face region of an image using a first recognizer, wherein the method further includes performing the verification, including: determining a first verification result based on the first feature information; and determining whether to extract second feature information using a second recognizer based on the first verification result and the current verification mode, and wherein the plural verification modes include at least a first verification mode and a different second verification mode. 13. The method of claim 12, wherein the extracting of the feature information includes extracting the second feature information from the image information, or other image information derived from the determined face region of the image or a determined face region of another image, using the second recognizer in response to the determining of whether to extract the second feature information being a determination that the second feature information is to be extracted, and wherein the performing of the verification further includes: determining a second verification result based on the second feature information; and determining whether the verification is successful based on the second verification result. 14. The method of claim 13, wherein, in response to the current verification mode being selected to be the first verification mode, determining that a final verification result is failure in response to the second verification result being failure, and determining that the final verification result is success in response to the second verification result being success. 15. 
The method of claim 13, wherein, in response to the current verification mode being selected to be the second verification mode, determining that the second feature information is to be extracted in response to the first verification result being success, and determining, in response to the second verification result being success, that a final verification result is success. 16. The method of claim 12, wherein, in response to the current verification mode being selected to be the first verification mode, determining that the second feature information is to be extracted in response to the first verification result being failure, and determining, in response to the first verification result being success, that a final verification result is success and terminating a face verification process. 17. The method of claim 12, wherein, in response to the current verification mode being selected to be the second verification mode, determining, in response to the first verification result being failure, that a final verification result is failure and terminating a face verification process, and determining that the second feature information is to be extracted in response to the first verification result being success. 18. The method of claim 12, wherein the determining of the first verification result includes determining the first verification result based on a comparing of the first feature information to first registered feature information previously extracted using the first recognizer. 19. The method of claim 18, further comprising: selectively updating the first registered feature information based on the first feature information and in response to the first verification result being success. 20. The method of claim 12, further comprising determining a second verification result based on a comparing of the second feature information to second registered feature information previously extracted using the second recognizer. 21. 
The method of claim 20, further comprising: selectively updating the second registered feature information based on the second feature information and in response to the second verification result being success. 22. The method of claim 12, wherein the determining of the first verification result includes: calculating a similarity between the first feature information and first registered feature information previously extracted using the first recognizer; and determining the first verification result based on a result of a comparison between a first threshold and the similarity. 23. The method of claim 1, wherein the extracting of the feature information includes: detecting a face region from an input image; detecting facial landmarks from the detected face region; and normalizing the face region based on the detected facial landmarks to generate the information of the face. 24. A non-transitory computer-readable medium storing instructions, which when executed by a processor, cause the processor to implement the method of claim 1. 25. An apparatus for verifying a face, the apparatus comprising: one or more processors configured to: select a current verification mode, from among plural verification modes, to be implemented for the verifying of the face; determine one or more recognizers, from among plural recognizers, based on the selected current verification mode; extract feature information from information of the face using at least one of the determined one or more recognizers; and indicate whether a verification is successful based on the extracted feature information. 26. The apparatus of claim 25, wherein the one or more processors are further configured to acquire a recognizer based on the determination of the one or more recognizers, wherein the one or more processors are configured to apply image information derived from a determined face region of an image to the acquired recognizer, for generating the extracted feature information. 27. 
The apparatus of claim 26, wherein the one or more processors are further configured to perform a first normalizing of the determined face region of the image to generate the image information in a form predetermined suitable for application to the acquired recognizer, the acquired recognizer being a trained neural network or machine learning model. 28. The apparatus of claim 25, wherein the one or more processors are further configured to: perform respective normalizations of a determined face region of an image to generate respective image information in respective forms predetermined suitable for application to different acquired recognizers based on the current verification mode, perform the extracting of the feature information by extracting respective feature information from the respective image information using the different acquired recognizers, and perform the indicating of whether the verification is successful based on at least one of the extracted respective feature information. 29. The apparatus of claim 28, wherein for at least one of the respective normalizations the one or more processors are further configured to synthesize image information with respect to an occlusion, or prediction of the occlusion, in the determined face region of the image. 30. The apparatus of claim 28, wherein the one or more processors perform a first verification based on a first extracted feature information from among the extracted respective feature information, selectively perform a second verification based on a second extracted feature information from among the extracted respective feature information, and, dependent on whether the second verification is selectively performed, selectively perform a third verification based on the first extracted feature information and the second extracted feature information and determine whether the verification is successful based on a result of the third verification. 31. 
The apparatus of claim 25, wherein, to perform the extracting of the feature information, the one or more processors are configured to extract respective feature information from respective normalizations of a determined face region of an image using different determined recognizers based on the current verification mode, wherein the performing of the extracting of the respective feature information is selectively partially performed by the one or more processors, based on the current verification mode, to focus on one of usability and security in the verifying of the face. 32. The apparatus of claim 25, wherein the one or more processors are configured to, based on the current verification mode, determine a plurality of different recognizers for the verification and perform the verification based on a select combination and/or arrangement of the different recognizers dependent on the current verification mode. 33. The apparatus of claim 32, wherein, to perform the verification, the one or more processors are configured to selectively perform respective verification processes corresponding to the different recognizers based on the current verification mode. 34. The apparatus of claim 33, wherein the respective verification processes each include comparing respective similarities, between differently extracted feature information and corresponding registration information, with corresponding thresholds by the one or more processors. 35. 
The apparatus of claim 32, wherein the one or more processors are further configured to: extract first feature information from image information derived from a determined face region of an image using a first recognizer, to perform the extracting of the feature information; determine a first verification result based on the first feature information, to perform the verification; and determine whether to extract second feature information using a second recognizer based on the first verification result and the current verification mode, wherein the plural verification modes include at least a first verification mode and a different second verification mode. 36. The apparatus of claim 35, wherein the one or more processors are further configured to: extract the second feature information from the image information, or other image information derived from the determined face region of the image or a determined face region of another image, using the second recognizer in response to the determining of whether to extract the second feature information being a determination that the second feature information is to be extracted; determine a second verification result based on the second feature information, to further perform the verification; and determine whether the verification is successful based on the second verification result. 37. The apparatus of claim 36, wherein, in response to the current verification mode being selected to be the first verification mode, the one or more processors are configured to perform the verification to: determine that a final verification result is failure in response to the second verification result being failure; and determine that the final verification result is success in response to the second verification result being success. 38. 
The apparatus of claim 36, wherein, in response to the current verification mode being selected to be the second verification mode, the one or more processors are configured to perform the verification to: determine that the second feature information is to be extracted in response to the first verification result being success; and determine, in response to the second verification result being success, that a final verification result is success. 39. The apparatus of claim 35, wherein, in response to the current verification mode being selected to be the first verification mode, the one or more processors are configured to: determine that the second feature information is to be extracted in response to the first verification result being failure; and determine, in response to the first verification result being success, that a final verification result is success and terminate a face verification process. 40. The apparatus of claim 35, wherein, in response to the current verification mode being selected to be the second verification mode, the one or more processors are configured to: determine, in response to the first verification result being failure, that a final verification result is failure and terminate a face verification process; and determine that the second feature information is to be extracted in response to the first verification result being success. 41. The apparatus of claim 35, wherein the one or more processors are configured to determine the first verification result based on a comparing of the first feature information to first registered feature information previously extracted using the first recognizer. 42. The apparatus of claim 41, wherein the one or more processors are configured to selectively update the first registered feature information based on the first feature information and in response to the first verification result being success. 43. 
The apparatus of claim 35, wherein the one or more processors are configured to determine the second verification result based on a comparing of the second feature information to second registered feature information previously extracted using the second recognizer. 44. The apparatus of claim 43, wherein the one or more processors are configured to selectively update the second registered feature information based on the second feature information and in response to the second verification result being success. 45. The apparatus of claim 35, wherein the one or more processors are configured to: calculate a similarity between the first feature information and first registered feature information previously extracted using the first recognizer; and determine the first verification result based on a result of a comparison between a first threshold and the similarity. 46. The apparatus of claim 25, wherein the one or more processors are configured to: detect a face region from an input image; detect facial landmarks from the detected face region; and normalize the face region based on the detected facial landmarks to generate the information of the face. 47. 
A computing apparatus comprising: a first recognizer including a neural network model configured to extract first feature information from an input image including a face region of a user; a second recognizer including another neural network model configured to extract second feature information from the input image, the second feature information being different from the first feature information; and one or more processors configured to determine that a face recognition is successful in response to at least one of a first verification result and a second verification result being success, wherein the first verification result is obtained by comparing the first feature information to first registered feature information and the second verification result is obtained by comparing the second feature information to second registered feature information. 48. The computing apparatus of claim 47, wherein the neural network model of the first recognizer and the other neural network model of the second recognizer are each configured as having been respectively trained based on at least different pieces of training data. 49. The computing apparatus of claim 47, wherein the other neural network model of the second recognizer is configured as having been trained based on training data associated with a training face region including, or predicted to include, an occlusion region. 50. The computing apparatus of claim 47, wherein an image, obtained by substituting a region that is predicted to have an occlusion region in the face region with image information of a corresponding region in an average image, an average value image, or a single color image, is input to the other neural network model of the second recognizer for the extracting of the second feature information. 51. 
The computing apparatus of claim 50, wherein the region that is predicted to have the occlusion region is a region in which an occlusion is predicted to occur in the face region due to any one or any combination of any two or more of glasses, sunglasses, hat, and mask occlusions. 52. The computing apparatus of claim 47, wherein the one or more processors are configured to release a lock mode of the computing apparatus in response to the face recognition being successful, and in response to at least both the first verification result and the second verification result being failures, the one or more processors are configured to maintain the lock mode or to not release the lock mode. 53. A computing apparatus comprising: a first recognizer including a neural network model configured to extract first feature information from an input image including a face region of a user; a second recognizer including another neural network model configured to extract second feature information from the input image, the second feature information being different from the first feature information; and one or more processors configured to determine that a face recognition is failed in response to at least one of a first verification result and a second verification result being failure, wherein the first verification result is obtained by comparing the first feature information to first registered feature information and the second verification result is obtained by comparing the second feature information to second registered feature information. 54. The computing apparatus of claim 53, wherein the neural network model of the first recognizer and the other neural network model of the second recognizer are each configured as having been respectively trained based on at least different pieces of training data. 55. 
The computing apparatus of claim 53, wherein the other neural network model of the second recognizer is configured as having been trained based on training data associated with a training face region including, or predicted to include, an occlusion region. 56. The computing apparatus of claim 53, wherein an image, obtained by substituting a region that is predicted to have an occlusion region in the face region with image information of a corresponding region in an average image, an average value image, or a single color image, is input to the other neural network model of the second recognizer for the extracting of the second feature information. 57. The computing apparatus of claim 56, wherein the region that is predicted to have the occlusion region is a region in which an occlusion is predicted to occur in the face region due to any one or any combination of any two or more of glasses, sunglasses, hat, and mask occlusions. 58. The computing apparatus of claim 53, wherein, in response to the face verification being failed, the one or more processors are configured to determine that a verification result is failure in a payment service or a financial service, and in response to at least both the first verification result and the second verification result being successes, the one or more processors are configured to determine that the verification result is successful in the payment service or the financial service. 59. The computing apparatus of claim 58, wherein the computing apparatus further comprises a transceiver, and wherein, in response to the determination that the verification result is successful in the payment service or the financial service, the one or more processors are configured to control the transceiver to transmit payment information to an external terminal, configured to perform a financial transaction according to the financial service, and/or configured to provide a user interface with financial information according to the financial service. 
60. A computing apparatus comprising: a first recognizer including a neural network model configured to extract first feature information from an input image including a face region of a user; a second recognizer including another neural network model configured to extract second feature information from the input image, the second feature information being different from the first feature information; and one or more processors configured to determine a current verification mode, from among at least a first verification mode and a second verification mode, to be implemented to verify the user, the one or more processors being further configured, in response to the current verification mode being determined to be the first verification mode, to determine that a face recognition is successful in response to at least one of a first verification result and a second verification result being success, wherein the first verification result is obtained by comparing the first feature information to first registered feature information and the second verification result is obtained by comparing the second feature information to second registered feature information, and the one or more processors being further configured, in response to the current verification mode being determined to be the second verification mode, to determine that a face recognition is failed in response to at least one of the first verification result and the second verification result being failure. 61. The computing apparatus of claim 60, wherein the first verification mode is a predetermined unlock verification mode, determined to be implemented by the one or more processors when the user attempts to unlock a user interface of the computing apparatus, or controlled to be automatically implemented by the one or more processors when the computing apparatus is in a locked state. 62. 
The computing apparatus of claim 60, wherein the second verification mode is a predetermined payment verification mode, determined to be implemented by the one or more processors when the user accesses or selects a payment service of the computing apparatus.
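The sequential flow of claims 12-17 can be sketched in Python. The second feature extraction is modeled as a callable so that, as in the claims, it only runs when the first result and the current verification mode call for it; function names, mode labels, and thresholds are illustrative assumptions:

```python
# Hypothetical sketch of the conditional two-recognizer flow of claims
# 12-17. "first" mode (unlock-style) terminates early on a first-recognizer
# success; "second" mode (payment-style) terminates early on a
# first-recognizer failure. Thresholds follow claim 22's similarity compare.

def verify_sequential(mode, first_similarity, extract_second_similarity,
                      first_threshold=0.5, second_threshold=0.5):
    """Return True when the final verification result is success."""
    first_ok = first_similarity >= first_threshold
    if mode == "first":
        # Claim 16: success on the first result ends the process;
        # otherwise the second recognizer is consulted as a fallback.
        if first_ok:
            return True
        return extract_second_similarity() >= second_threshold
    if mode == "second":
        # Claim 17: failure on the first result ends the process;
        # otherwise the second recognizer must also succeed (claim 15).
        if not first_ok:
            return False
        return extract_second_similarity() >= second_threshold
    raise ValueError(f"unknown verification mode: {mode}")
```

Passing the second extraction as a callable mirrors the claims' "determining whether to extract second feature information": the potentially expensive second model is never invoked on an early-terminated path.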
2,600
10,940
10,940
16,754,427
2,688
A parking assistance device according to an embodiment includes: an image processing unit that generates a surrounding image representing a situation around a towing vehicle based on an image capturing result obtained by an image capturing unit provided in the towing vehicle; a path obtaining unit that obtains a guidance path for guiding a towed vehicle to a target parking position, the guidance path extending from the target parking position as a starting point in a case where the target parking position of the towed vehicle connected to the towing vehicle is set; and a display processing unit that displays the guidance path such that the guidance path is superimposed on the surrounding image when the guidance path is obtained.
1. A parking assistance device comprising: an image processing unit that generates a surrounding image representing a situation around a towing vehicle based on an image capturing result obtained by an image capturing unit provided in the towing vehicle; a path obtaining unit that obtains a guidance path for guiding a towed vehicle to a target parking position, the guidance path extending from the target parking position as a starting point, when the target parking position of the towed vehicle connected to the towing vehicle is set; and a display processing unit that displays the guidance path such that the guidance path is superimposed on the surrounding image when the guidance path is obtained. 2. The parking assistance device according to claim 1, wherein the target parking position includes a pair of target positions corresponding to a vehicle width of the towed vehicle, and the display processing unit displays the guidance path extending from at least one target position located inside of a turn of the towing vehicle among the pair of target positions such that the guidance path is superimposed on the surrounding image. 3. The parking assistance device according to claim 1, wherein when the towed vehicle approaches the target parking position, the display processing unit deletes the guidance path superimposed on the surrounding image. 4. The parking assistance device according to claim 1, wherein the display processing unit changes a display mode of the guidance path according to a positional relationship between the towed vehicle and the target parking position. 5. The parking assistance device according to claim 1, wherein the display processing unit changes a display mode of the guidance path according to brightness of the surrounding image. 6. 
The parking assistance device according to claim 1, wherein the display processing unit further superimposes a first indicator for highlighting the target parking position on the surrounding image in addition to the guidance path. 7. The parking assistance device according to claim 1, wherein the display processing unit further superimposes, on the surrounding image, a second indicator representing movement of the towed vehicle expected when the towed vehicle reaches the target parking position along the guidance path. 8. The parking assistance device according to claim 1, wherein when the towing vehicle is in a backward movement state, the path obtaining unit newly obtains the guidance path each time the towing vehicle is stopped, and the display processing unit updates the guidance path superimposed on the surrounding image each time the guidance path is newly obtained. 9. The parking assistance device according to claim 1, wherein the surrounding image includes an overhead image representing the situation around the towing vehicle viewed from above, and an inside image representing a situation inside of a turn of the towing vehicle, and the display processing unit displays at least one image of the overhead image and the inside image as the surrounding image, and superimposes the guidance path on the at least one image.
A parking assistance device according to an embodiment includes: an image processing unit that generates a surrounding image representing a situation around a towing vehicle based on an image capturing result obtained by an image capturing unit provided in the towing vehicle; a path obtaining unit that obtains a guidance path for guiding a towed vehicle to a target parking position, the guidance path extending from the target parking position as a starting point in a case where the target parking position of the towed vehicle connected to the towing vehicle is set; and a display processing unit that displays the guidance path such that the guidance path is superimposed on the surrounding image when the guidance path is obtained. 1. A parking assistance device comprising: an image processing unit that generates a surrounding image representing a situation around a towing vehicle based on an image capturing result obtained by an image capturing unit provided in the towing vehicle; a path obtaining unit that obtains a guidance path for guiding a towed vehicle to a target parking position, the guidance path extending from the target parking position as a starting point, when the target parking position of the towed vehicle connected to the towing vehicle is set; and a display processing unit that displays the guidance path such that the guidance path is superimposed on the surrounding image when the guidance path is obtained. 2. The parking assistance device according to claim 1, wherein the target parking position includes a pair of target positions corresponding to a vehicle width of the towed vehicle, and the display processing unit displays the guidance path extending from at least one target position located inside of a turn of the towing vehicle among the pair of target positions such that the guidance path is superimposed on the surrounding image. 3. 
The parking assistance device according to claim 1, wherein when the towed vehicle approaches the target parking position, the display processing unit deletes the guidance path superimposed on the surrounding image. 4. The parking assistance device according to claim 1, wherein the display processing unit changes a display mode of the guidance path according to a positional relationship between the towed vehicle and the target parking position. 5. The parking assistance device according to claim 1, wherein the display processing unit changes a display mode of the guidance path according to brightness of the surrounding image. 6. The parking assistance device according to claim 1, wherein the display processing unit further superimposes a first indicator for highlighting the target parking position on the surrounding image in addition to the guidance path. 7. The parking assistance device according to claim 1, wherein the display processing unit further superimposes, on the surrounding image, a second indicator representing movement of the towed vehicle expected when the towed vehicle reaches the target parking position along the guidance path. 8. The parking assistance device according to claim 1, wherein when the towing vehicle is in a backward movement state, the path obtaining unit newly obtains the guidance path each time the towing vehicle is stopped, and the display processing unit updates the guidance path superimposed on the surrounding image each time the guidance path is newly obtained. 9. The parking assistance device according to claim 1, wherein the surrounding image includes an overhead image representing the situation around the towing vehicle viewed from above, and an inside image representing a situation inside of a turn of the towing vehicle, and the display processing unit displays at least one image of the overhead image and the inside image as the surrounding image, and superimposes the guidance path on the at least one image.
2,600
10,941
10,941
15,785,173
2,643
Certain aspects of the present disclosure are generally directed to design of random access channel (RACH) procedures. For example, certain aspects of the present disclosure provide a method for wireless communication by a user equipment (UE). The method generally includes receiving an indication of a RACH procedure capability of a network entity, and selecting a first RACH procedure or a second RACH procedure, based on the indication. The UE may then communicate one or more messages with the network entity based on the selected first RACH procedure or the selected second RACH procedure.
1. A method for wireless communication by a user equipment (UE), comprising: receiving an indication of a random-access channel (RACH) procedure capability of a network entity; selecting a first RACH procedure or a second RACH procedure, based on the indication; and communicating one or more messages with the network entity based on the selected first RACH procedure or the selected second RACH procedure. 2. The method of claim 1, further comprising receiving system information from the network entity, wherein the system information comprises the indication of the RACH procedure capability. 3. The method of claim 2, wherein the system information comprises minimum system information (MSI) or other system information (OSI). 4. The method of claim 1, wherein the indication of the RACH procedure is received from another network entity in a handover (HO) command message for HO of the UE to the network entity. 5. The method of claim 1, wherein the indication of the RACH procedure indicates that the network entity supports both the first RACH procedure and the second RACH procedure. 6. The method of claim 5, further comprising: selecting a RACH preamble type based on the selection of the first RACH procedure or the second RACH procedure; and generating a RACH preamble based on the selection of the RACH preamble type, wherein the communicating comprises communicating the RACH preamble with the network entity. 7. The method of claim 6, wherein the selection of the RACH preamble type comprises selecting a sequence for the RACH preamble. 8. The method of claim 5, wherein: the communication comprises transmitting a RACH preamble and a RACH message to the network entity over a first subband if the first RACH procedure is selected; and the communication comprises transmitting the RACH preamble to the network entity over a second subband if the second RACH procedure is selected. 9. 
The method of claim 1, wherein the selection comprises selecting the first RACH procedure, the method further comprising: detecting an unsuccessful communication of the one or more messages; selecting the second RACH procedure based on the detection; and communicating one or more other messages with the network entity based on the selection of the second RACH procedure. 10. The method of claim 1, wherein: the first RACH procedure comprises a two-step RACH procedure; and the communication of the one or more messages comprises: transmitting a RACH preamble and a RACH message; and receiving a random access response (RAR) in response to the transmission of the RACH preamble and the RACH message. 11. The method of claim 1, wherein: the second RACH procedure comprises a four-step RACH procedure; and the communication of the one or more messages comprises: transmitting a RACH preamble; receiving a random access response (RAR) in response to the transmission of the RACH preamble; transmitting a random access connection request in response to the reception of the RAR; and receiving a contention resolution message in response to the random access connection request. 12. A method for wireless communication, comprising: determining a random-access channel (RACH) procedure capability of a network entity; and transmitting, to a user equipment (UE), an indication of the RACH procedure capability of the network entity. 13. The method of claim 12, wherein: the transmission comprises transmitting a handover (HO) command message for HO of the UE to the network entity, the HO command message having the indication of the RACH procedure of the network entity. 14. The method of claim 12, wherein the transmission comprises transmitting system information to the UE, wherein the system information comprises the indication of the RACH procedure capability. 15. The method of claim 14, wherein the system information comprises minimum system information (MSI) or other system information (OSI). 
16. The method of claim 12, further comprising communicating one or more messages with the UE based on the indicated RACH procedure capability. 17. The method of claim 16, wherein communicating the one or more messages comprises receiving a first message including a RACH preamble, the method further comprising: determining whether the first message comprises a RACH message based on the RACH preamble; and decoding the RACH message if the first message comprises the RACH message based on the determination. 18. The method of claim 17, wherein the determination of whether the first message comprises the RACH message is based on a sequence of the RACH preamble. 19. The method of claim 17, further comprising: monitoring a first subband and a second subband for a RACH preamble, wherein the determination of whether the first message comprises the RACH message is based on whether the RACH preamble is received on the first subband or the second subband. 20. The method of claim 17, wherein the indication of the RACH procedure capability indicates that the network entity supports both a first RACH procedure and a second RACH procedure. 21. The method of claim 16, wherein: the one or more messages are communicated based on a two-step RACH procedure; and the communication of the one or more messages comprises: receiving a RACH preamble and a RACH message; and transmitting a random access response in response to the reception of the RACH preamble and the RACH message. 22. The method of claim 16, wherein: the one or more messages are communicated based on a four-step RACH procedure; and the communication of the one or more messages comprises: receiving a RACH preamble; transmitting a random access response (RAR) in response to the reception of the RACH preamble; receiving a random access connection request in response to the transmission of the RAR; and transmitting a contention resolution message in response to the random access connection request. 23. 
An apparatus for wireless communication by a user equipment (UE), comprising: a transceiver configured to receive an indication of a random-access channel (RACH) procedure capability of a network entity; a processing system configured to select a first RACH procedure or a second RACH procedure, based on the indication, wherein the transceiver is further configured to communicate one or more messages with the network entity based on the selected first RACH procedure or the selected second RACH procedure. 24. The apparatus of claim 23, wherein the indication of the RACH procedure is received from another network entity in a handover (HO) command message for HO of the UE to the network entity. 25. An apparatus for wireless communication, comprising: a processing system configured to determine a random-access channel (RACH) procedure capability of a network entity; and a transceiver configured to transmit, to a user equipment (UE), an indication of the RACH procedure capability of the network entity. 26. The apparatus of claim 25, wherein the transmission comprises transmitting a handover (HO) command message for HO of the UE to the network entity, the HO command message having the indication of the RACH procedure of the network entity. 27. The apparatus of claim 25, wherein: the transceiver is further configured to communicate one or more messages with the UE based on the indicated RACH procedure capability.
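Claims 1, 5 and 9 together describe a simple selection-and-fallback policy: the UE picks the two-step procedure when the network entity advertises it, and falls back to the four-step procedure if the two-step attempt fails. A hypothetical Python sketch of that policy (procedure names and the success flag are illustrative; the patent defines no code):

```python
def select_rach(capability):
    """Pick a RACH procedure from the set the network entity advertised,
    e.g. {"2-step"}, {"4-step"} or {"2-step", "4-step"} (cf. claims 1, 5)."""
    if "2-step" in capability:
        return "2-step"
    if "4-step" in capability:
        return "4-step"
    raise ValueError("no supported RACH procedure advertised")

def access(capability, two_step_succeeds):
    """Attempt access, falling back from 2-step to 4-step on an
    unsuccessful communication (cf. claim 9)."""
    procedure = select_rach(capability)
    if procedure == "2-step" and not two_step_succeeds and "4-step" in capability:
        procedure = "4-step"  # 2-step attempt failed: retry with 4-step
    return procedure
```

The sketch deliberately models only the choice logic; preamble generation and message exchange (claims 6-8, 10-11) are out of scope.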
2,600
10,942
10,942
16,233,055
2,654
A system and method for hosting a silent roller skating event. The system includes a first station that includes roller skates and a first group of headphones for users to rent. The system includes a second station that includes a second group of headphones for users to rent, but does not include roller skates. The system includes a music station that includes first, second and third substations in operative association with each headphone. Each headphone has first, second, and third channels and a channel selector switch. The first substation is configured to transmit a first type of music to the first channel of each headphone. The second substation is configured to transmit a second type of music to the second channel of each headphone. The third substation is configured to transmit a third type of music to the third channel of each headphone. Each headphone outputs the first type of music from the first substation in response to the channel selector switch being in a first position. Each headphone outputs the second type of music from the second substation in response to the channel selector switch being placed in a second position. Each headphone outputs the third type of music from the third substation in response to the channel selector switch being placed in a third position.
1. A method of hosting an event at a roller skating rink comprising: a) admitting users into the roller skating rink at a roller skating admission station; b) providing roller skates and headphones to users who desire to rent roller skates and headphones at a first station; c) providing headphones to users at a second station that does not provide roller skates; d) transmitting music of a first type from a first substation of a music station to a first channel of each headphone; e) transmitting music of a second type from a second substation of the music station to a second channel of each headphone; f) transmitting music of a third type from a third substation of the music station to a third channel of each headphone; and g) enabling the headphone to allow selection of one of the first channel, second channel, and third channel to output music from the selected channel. 2. The method of claim 1, wherein each of the headphones comprises first, second and third lights, wherein the first light illuminates light of a first color when a channel selector switch is in a first position, wherein the second light illuminates light of a second color when the channel selector switch is in a second position, wherein the third light illuminates light of a third color when the channel selector switch is in a third position. 3. The method of claim 1, further including interrupting music outputted from the selected channel with audio inputted into a microphone and transmitted from the music station. 4. The method of claim 1, further including providing waiver forms at the first and second stations for users to sign before using the headphones. 5. The method of claim 4, further including storing the names of the users provided on the waiver form by the users in a computer database, and interrupting music outputted from the selected channel with audio inputted into a microphone and transmitted from the music station to the headphones. 6. 
The method of claim 5, further including storing the names of the users provided on the waiver form by the users in a computer database, wherein the audio includes an announcement to the users to return the headphones. 7. The method of claim 6, wherein the announcement includes telling a group of users whose names have a beginning letter that is between two letters of the alphabet. 8. The method of claim 7, further including after interrupting music outputted from the selected channel with audio inputted into a microphone and transmitted from the music station, wherein the audio includes an announcement to the users to return the headphones, providing another announcement through the microphone to tell another group of users whose names have a beginning letter that is between two other letters of the alphabet to return the headphones. 9. A roller skating rink system for hosting an event comprising: a roller skating rink; a first station, wherein the first station includes roller skates and a first group of headphones for users to rent; a second station, wherein the second station includes a second group of headphones for users to rent, wherein the second station does not include roller skates; a music station, wherein the music station includes first, second, and third substations in operative association with each headphone; wherein each headphone has first, second, and third channels, wherein each headphone has a channel selector switch, wherein the first substation is configured to transmit a first type of music to the first channel of each headphone, wherein the second substation is configured to transmit a second type of music to the second channel of each headphone, wherein the third substation is configured to transmit a third type of music to the third channel of each headphone; and wherein each headphone outputs the first type of music from the first substation in response to the channel selector switch being in a first position, wherein each headphone 
outputs the second type of music from the second substation in response to the channel selector switch being placed in a second position, wherein each headphone outputs the third type of music from the third substation in response to the channel selector switch being placed in a third position. 10. The roller skating rink system of claim 9, further comprising a third station, wherein the third station comprises a rink admission station for admitting the users to the rink. 11. The roller skating rink system of claim 9, wherein each of the first and second stations includes waiver forms for the users to sign before using the rental headphones. 12. The roller skating rink system of claim 9, wherein each of the headphones comprises first, second and third lights, wherein the first light illuminates light of a first color when the channel selector switch is in the first position, wherein the second light illuminates light of a second color when the channel selector switch is in the second position, wherein the third light illuminates light of a third color when the channel selector switch is in the third position. 13. The roller skating rink system of claim 9, further comprising a microphone operatively associated with the music station, wherein the roller skating rink system is configured to enable audio inputted into the microphone to be outputted from each headphone in response to the microphone being turned on. 14. The roller skating rink system of claim 9, wherein each of the headphones is wireless.
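Claims 9 and 12 amount to a three-way lookup: each selector position routes one substation's stream to the headphone and lights one indicator. A hypothetical sketch of that mapping (the substation names and colours are placeholders; the claims only say "first", "second" and "third"):

```python
# Selector position -> (music source, indicator colour); placeholder
# values standing in for the first/second/third substations and colours
# of claims 9 and 12.
CHANNELS = {
    1: ("substation_1", "colour_1"),
    2: ("substation_2", "colour_2"),
    3: ("substation_3", "colour_3"),
}

def headphone_output(position):
    """Return the (source, colour) pair selected by the switch position."""
    try:
        return CHANNELS[position]
    except KeyError:
        raise ValueError(f"selector has no position {position}") from None
```

A table-driven mapping like this keeps the three-position constraint explicit: any position outside 1-3 is rejected rather than silently ignored.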
2,600
10,943
10,943
15,891,142
2,637
Methods and apparatuses are provided to modify existing overlay system architectures in a cost effective manner to meet the growing demand for narrowcast services and to position the existing overlay systems for additional future modifications. The implementations of the improved overlay system of this disclosure re-digitize narrowcast analog signals after they have been QAM modulated and upconverted to RF frequencies and replace the analog narrowcast transmitters with digital narrowcast transmitters. In the fiber nodes, the received narrowcast signals are converted back to analog signals and combined with analog broadcast signals for transmission to the service groups.
1-2. (canceled) 3. A fiber node comprising: a receiver configured to receive and convert an optical analog-modulated signal comprising one or more up-converted analog RF signals to a first RF electrical analog signal; a converter configured to receive and convert an optical digitally-modulated signal comprising one or more digital signals to a second RF electrical analog signal; and a combiner configured to combine the first RF electrical analog signal and the second RF electrical analog signal. 4. The fiber node of claim 3 further comprising a demultiplexer configured to receive a multiplexed signal comprising the optical analog-modulated signal and the optical digitally-modulated signal wherein the demultiplexer is configured to demultiplex the optical analog-modulated signal and the optical digitally-modulated signal and deliver the demultiplexed signals to the receiver and the converter respectively. 5. The fiber node of claim 3 wherein the receiver is configured to convert the optical analog-modulated signal to the first RF electrical analog signal at a first predetermined signal level. 6. The fiber node of claim 5 wherein the receiver is configured to change the first predetermined signal level. 7. The fiber node of claim 3 wherein the converter is configured to convert the optical digitally-modulated signal to the second RF electrical analog signal at a second predetermined signal level. 8. The fiber node of claim 7 wherein the converter is configured to change the second predetermined signal level. 9. The fiber node of claim 3 wherein the optical digitally-modulated signal comprises an instruction to change the output signal level of the receiver and/or the converter. 10-11. (canceled) 12. 
A method comprising: receiving and converting an optical analog-modulated signal comprising one or more up-converted analog RF signals to a first RF electrical analog signal; receiving and converting an optical digitally-modulated signal comprising one or more digital signals to a second RF electrical analog signal; and combining the first RF electrical analog signal and the second RF electrical analog signal. 13. The method of claim 12 further comprising converting the optical analog-modulated signal to the first RF electrical analog signal at a first predetermined signal level. 14. The method of claim 13 further comprising changing the first predetermined signal level. 15. The method of claim 12 further comprising converting the optical digitally-modulated signal to the second RF electrical analog signal at a second predetermined signal level. 16. The method of claim 15 further comprising changing the second predetermined signal level. 17. The method of claim 12 wherein the optical digitally-modulated signal comprises an instruction to change the output signal level of the receiver and/or the converter. 18. A fiber node comprising: means for receiving and converting an optical analog-modulated signal comprising one or more up-converted analog RF signals to a first RF electrical analog signal; means for receiving and converting an optical digitally-modulated signal comprising one or more digital signals to a second RF electrical analog signal; and means for combining the first RF electrical analog signal and the second RF electrical analog signal. 19. The fiber node of claim 18 further comprising means for converting the optical analog-modulated signal to the first RF electrical analog signal at a first predetermined signal level. 20. The fiber node of claim 19 further comprising means for changing the first predetermined signal level. 21. The fiber node of claim 18 further comprising means for converting the optical digitally-modulated signal to the second RF electrical analog signal at a second predetermined signal level. 22. The fiber node of claim 21 further comprising means for changing the second predetermined signal level.
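The fiber node recited above (receiver, converter, combiner with predetermined, changeable signal levels) can be sketched in a few lines, modeling signals as lists of sample amplitudes. This is a minimal illustrative sketch; the class name, attribute names, and gain model are assumptions, not from the disclosure.

```python
# Minimal sketch of the claimed fiber node: two optical inputs are
# converted to RF electrical analog signals at configurable levels
# and summed by a combiner. Names and the gain model are illustrative.

class FiberNode:
    def __init__(self, analog_level=1.0, digital_level=1.0):
        self.analog_level = analog_level    # first predetermined signal level
        self.digital_level = digital_level  # second predetermined signal level

    def receive_analog(self, optical_samples):
        # receiver: optical analog-modulated -> first RF electrical analog
        return [s * self.analog_level for s in optical_samples]

    def convert_digital(self, optical_samples):
        # converter: optical digitally-modulated -> second RF electrical analog
        return [s * self.digital_level for s in optical_samples]

    def combine(self, broadcast, narrowcast):
        # combiner: sum the two RF electrical analog signals
        return [a + b for a, b in zip(broadcast, narrowcast)]
```

Changing `analog_level` or `digital_level` after construction models the "change the predetermined signal level" limitations.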
2,600
10,944
10,944
15,777,489
2,668
Systems and methods for iteratively computing an image registration or an image segmentation. The registration and segmentation computations are driven by an optimization function that includes a similarity measure component whose effect on the iterative computations is relatively mitigated based on a monitoring of volume changes of volume elements at image locations during the iterations. There is also a system and a related method to quantify a registration error. This includes applying a series of edge detectors to input images and combining related filter responses into a combined response. The series of filters are parameterized with a filter parameter. An extremal value of the combined response is then found and a filter parameter associated with said extremal value is then returned as output. This filter parameter relates to a registration error at a given image location.
1. An image processing system, comprising: an input port (IN) for receiving two or more input images for registration, including a source image and a target image; an iterative solver (SOLV-REG) configured to solve for a final registration transformation T(N) by iterating through at least one intermediate registration transformation T(N−j), wherein the iterative solver is driven by an optimization function comprising at least one first functional component based on a similarity measure that measures a similarity between the target image and a transformed image obtainable by applying the at least one intermediate registration transformation to the source image; a volume monitor (IVM) configured to monitor, over iteration cycles, for a change in volume of a predefined neighborhood around an image location in the source image when the intermediate registration transformation is being applied in a given iteration cycle; a volume evaluator (IVE) configured to evaluate said volume change against an acceptance condition; and a corrector (C) configured to mitigate or at least relatively mitigate, for a subsequent iteration cycle, an effect of the first functional component for said image location, when the change is found by the volume evaluator (IVE) to violate said acceptance condition. 2. The image processing system of claim 1, wherein said first functional component remains mitigated for said image location during a remainder of the iteration. 3. 
An image processing method, comprising: receiving two or more input images for registration, including a source image and a target image; iteratively solving for a final registration transformation T(N) by iterating through at least one intermediate registration transformation T(N−j), wherein the iterative solving is driven by an optimization function comprising at least one first functional component based on a similarity measure that measures a similarity between the target image and a transformed image obtainable by applying the at least one intermediate registration transformation to the source image; monitoring, over iteration cycles, for a change in volume of a predefined neighborhood around an image location in the source image when the respective intermediate registration transformation is being applied in a given iteration cycle; evaluating said volume change against an acceptance condition; and for a subsequent iteration cycle, mitigating or at least relatively mitigating an effect of the first functional component for said image location, when the change is found by the volume evaluator (IVE) to violate said acceptance condition. 4-14. (canceled) 15. 
A non-transitory computer readable medium comprising a computer program that, when executed by a processing unit (PU), causes the PU to: receive, via an input port (IN), two or more input images for registration, including a source image and a target image; solve, via an iterative solver (SOLV-REG), for a final registration transformation T(N) by iterating through at least one intermediate registration transformation T(N−j), wherein the iterative solver is driven by an optimization function comprising at least one first functional component based on a similarity measure that measures a similarity between the target image and a transformed image obtainable by applying the at least one intermediate registration transformation to the source image; monitor, via a volume monitor (IVM), over iteration cycles, for a change in volume of a predefined neighborhood around an image location in the source image when the intermediate registration transformation is being applied in a given iteration cycle; evaluate, via a volume evaluator (IVE), the volume change against an acceptance condition; and mitigate, via a corrector (C), for a subsequent iteration cycle, an effect of the first functional component for said image location, when the change is found by the volume evaluator (IVE) to violate said acceptance condition. 16. (canceled)
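The mitigation step the claims describe, checking each image location's volume change against an acceptance condition and down-weighting the similarity component where the condition is violated, can be sketched as a per-location weight update. The threshold value and zero-weighting scheme below are assumptions for illustration; the disclosure does not specify them.

```python
# Hedged sketch of the claimed per-cycle mitigation: locations whose
# neighborhood volume change violates the acceptance condition have the
# similarity term's weight set to zero for subsequent cycles. The
# threshold and weighting scheme are hypothetical.

def mitigation_weights(volume_changes, weights, max_change=0.5):
    """Update per-location weights on the similarity component.

    volume_changes: relative volume change per image location this cycle.
    weights: current per-location weights (1.0 = full effect).
    """
    new_weights = []
    for change, w in zip(volume_changes, weights):
        if abs(change) > max_change:   # acceptance condition violated
            new_weights.append(0.0)    # mitigate the similarity term here
        else:
            new_weights.append(w)      # keep prior weight
    return new_weights
```

Because an unviolated location keeps its prior weight, a location once zeroed stays zeroed, matching claim 2's "remains mitigated ... during a remainder of the iteration."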
2,600
10,945
10,945
15,898,869
2,649
A wireless waveguide system for a heating, ventilation, and air conditioning (HVAC) system. The wireless waveguide system includes a sensor that detects an environmental condition and directs a signal indicative of the environmental condition along an interior of a ductwork. A signal sensor detects a strength of the signal within the interior of the ductwork. A repeater that operates based on the strength of the signal detected by the signal sensor and repeats the signal along a communication path at least partially within the interior of the ductwork to a controller of an HVAC unit.
1. A wireless waveguide system for a heating, ventilation, and air conditioning (HVAC) system, comprising: a sensor configured to detect an environmental condition and to direct a signal indicative of the environmental condition along an interior of a ductwork; a signal sensor configured to detect a strength of the signal within the interior of the ductwork; and a repeater configured to operate based on the strength of the signal detected by the signal sensor and configured to repeat the signal along a communication path at least partially within the interior of the ductwork to a controller of an HVAC unit. 2. The wireless waveguide system of claim 1, wherein the sensor comprises a dynamic pressure sensor, a temperature sensor, a flow rate sensor, a motion sensor, a carbon dioxide sensor, a humidity level sensor, an air quality sensor, or a combination thereof. 3. The wireless waveguide system of claim 1, comprising the ductwork including a bend of the ductwork, wherein the repeater is positioned inside the ductwork substantially at the bend. 4. The wireless waveguide system of claim 1, wherein the sensor is positioned within the ductwork. 5. The wireless waveguide system of claim 1, wherein the sensor is positioned outside of the ductwork. 6. The wireless waveguide system of claim 1, wherein the HVAC unit is a rooftop unit. 7. The wireless waveguide system of claim 1, wherein the sensor comprises a wireless transmitter configured to emit the signal indicative of the environmental condition. 8. The wireless waveguide system of claim 1, wherein the sensor comprises a wireless receiver, wherein the wireless receiver is configured to receive feedback from the controller. 9. The wireless waveguide system of claim 1, wherein the signal sensor is configured to activate the repeater if the strength of the signal is less than a threshold strength. 10. 
The wireless waveguide system of claim 1, wherein the controller is configured to couple to a transmitter, wherein the transmitter is configured to transmit a second signal through the interior of the ductwork. 11. The wireless waveguide system of claim 10, comprising a building management system controller configured to receive the second signal. 12. The wireless waveguide system of claim 10, wherein the repeater is configured to receive and repeat the second signal. 13. The wireless waveguide system of claim 12, wherein the signal sensor is configured to detect the second signal and to control the repeater in response to a second signal strength of the second signal. 14. A wireless waveguide system for a heating, ventilation, and air conditioning (HVAC) system, comprising: a sensor configured to detect a characteristic of air in an enclosed space and to emit a signal indicative of the characteristic along a communication path through an interior of a ductwork to a controller of an HVAC unit; and a repeater configured to be placed within the ductwork along the communication path and positioned proximate a bend in the ductwork, wherein the repeater is configured to repeat the signal indicative of the characteristic further along the communication path and toward the controller. 15. The wireless waveguide system of claim 14, comprising a signal sensor configured to be positioned within the ductwork, configured to detect a strength of the signal indicative of the characteristic, and configured to activate the repeater in response to the strength of the signal indicative of the characteristic being less than a threshold strength value. 16. The wireless waveguide system of claim 14, wherein the sensor comprises a dynamic pressure sensor, a temperature sensor, a motion sensor, a flow rate sensor, a carbon dioxide sensor, a humidity level sensor, an air quality sensor, or a combination thereof. 17. 
The wireless waveguide system of claim 14, wherein the HVAC unit is a rooftop unit. 18. The wireless waveguide system of claim 14, wherein the sensor comprises a wireless transmitter configured to emit the signal indicative of the characteristic. 19. The wireless waveguide system of claim 14, wherein the repeater is positioned proximate an opening in the ductwork. 20. A wireless waveguide system for a heating, ventilation, and air conditioning (HVAC) system, comprising: ductwork defining an airflow path therethrough; a controller of an HVAC unit; a sensor configured to detect an environmental condition and to direct a signal indicative of the environmental condition along the airflow path; and a repeater disposed within the ductwork along the airflow path, wherein the repeater is configured to repeat the signal indicative of the environmental condition further along the airflow path and toward the controller. 21. The wireless waveguide system of claim 20, wherein the ductwork comprises a bend and wherein the repeater is disposed approximately at the bend. 22. The wireless waveguide system of claim 20, wherein the ductwork defines an opening, and wherein the sensor is placed within a threshold distance of the opening. 23. The wireless waveguide system of claim 20, comprising a signal sensor, wherein the signal sensor is configured to activate the repeater if a strength of the signal indicative of the environmental condition is less than a threshold strength.
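The signal-sensor/repeater interaction recited in claims 1, 9, 15, and 23, activating the in-duct repeater only when the measured signal strength falls below a threshold, reduces to a single comparison. A minimal sketch follows; the dBm units and the threshold value are assumptions, not from the disclosure.

```python
# Illustrative sketch of the claimed repeater activation: the signal
# sensor measures in-duct signal strength, and the repeater operates
# only when that strength is below a threshold. Threshold is assumed.

THRESHOLD_DBM = -70.0  # hypothetical threshold strength

def repeater_active(measured_strength_dbm: float) -> bool:
    """Activate the repeater if the signal is weaker than the threshold."""
    return measured_strength_dbm < THRESHOLD_DBM
```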
2,600
10,946
10,946
16,537,992
2,685
An informatics system that can be head-worn under a helmet and used to provide a wearer's vital statistics and other information to a remote monitoring station, for example in connection with pre-hospital emergency care.
1. An informatics system, comprising a sensor package configured for monitoring of a wearer's vital statistics and a telemetry transmitter for transmitting a record of the wearer's vital statistics to a remote monitoring location, said informatics system integrated within a helmet retention system that is adapted to be removably secured to a helmet. 2. The informatics system of claim 1, wherein the sensor package comprises one or more sensor pads configured to contact a wearer at one or more points on the wearer's body, the sensor pads coupled to provide electrical signal inputs to a processor of the informatics system, which processor is configured to sample the signals from the sensor pads periodically and to transmit a record of the sampled signals to the remote monitoring location via the telemetry transmitter. 3. The informatics system of claim 2, wherein said processor is further configured to store a record of the sampled signals in a writable memory of the informatics system. 4. The informatics system of claim 2, further comprising a power supply included within the helmet retention system. 5. The informatics system of claim 2, further comprising an accelerometer coupled to provide an input to said processor. 6. A headband comprising a sensor arrangement configured for monitoring of a wearer's vital statistics, said sensor arrangement including one or more sensor pads configured to contact a wearer at one or more points on the wearer's body, the sensor pads coupled to provide electrical signal inputs to a processor of the sensor arrangement that is configured to sample the signals from the sensor pads periodically, said headband wearable under a helmet such that removal of the helmet from the wearer will not cause removal of the sensor arrangement. 7. The headband of claim 6, wherein the headband comprises a helmet retention system adapted to be removably securable to said helmet. 8. 
The headband of claim 7, wherein the helmet retention system includes a telemetry transmitter coupled to the processor. 9. The headband of claim 8, wherein the helmet retention system includes a power source for said sensor arrangement. 10. The headband of claim 7, wherein the sensor arrangement includes an accelerometer coupled to the processor. 11. The headband of claim 7, wherein the sensor arrangement includes a writable memory coupled to the processor. 12. A method, comprising periodically sampling, by a processor of a head-worn informatics system, electrical signals provided by one or more sensor pads of the informatics system, said sensor pads configured for electrophysiological monitoring of a wearer of the informatics system, transmitting a record of the sampled signals to a monitoring facility remote from the informatics system via a telemetry transmitter of the informatics system, and continuing to sample the electrical signals and transmit the record of the sampled signals to the monitoring facility subsequent to removal of a helmet from the wearer. 13. The method of claim 12, wherein removal of the helmet from the wearer of the head-worn informatics system does not cause removal of the head-worn informatics system from the wearer. 14. The method of claim 12, wherein removal of the helmet from the wearer of the head-worn informatics system does not cause removal of a helmet retention system in which the head-worn informatics system is integrated from the wearer. 15. The method of claim 12, wherein removal of the helmet from the wearer of the head-worn informatics system does not cause removal of a sensor package of the head-worn informatics system from the wearer. 16. The method of claim 12, wherein removal of the helmet from the wearer of the head-worn informatics system does not cause removal of a headband in which the sensor pads of the head-worn informatics system are integrated from the wearer. 17. 
The method of claim 12, wherein prior to removal of the helmet from the wearer of the head-worn informatics system, dissociating the head-worn informatics system from a primary power source, and the head-worn informatics system reverting to using a secondary power source. 18. The method of claim 17, wherein subsequent to removal of the helmet from the wearer of the head-worn informatics system and the head-worn informatics system reverting to using a secondary power source, restoring primary power to the head-worn informatics system from a transportable power supply. 19. The method of claim 12, further comprising storing the record of the sampled signals in a memory of the head-worn informatics system. 20. The method of claim 12, further comprising providing inputs concerning rapid accelerations/decelerations of the wearer's head to a processor of the head-worn informatics system from one or more accelerometers of the head-worn informatics system.
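Claim 12 above describes the core method loop: periodically sample the sensor pads, keep a record of the samples, and transmit that record via the telemetry transmitter. A minimal Python sketch of such a loop follows; all function names and sample values here are illustrative assumptions, since the patent does not specify an implementation:

```python
# Hypothetical sketch of the sampling/telemetry loop of claims 2, 3, and 12.
# Names and values are illustrative; the claims leave the implementation open.

def read_sensor_pads():
    """Stand-in for the electrical signals provided by the sensor pads
    (e.g. electrophysiological readings in arbitrary units)."""
    return [0.42, 0.37, 0.51]

def sample_and_transmit(n_samples, record, transmit):
    """Periodically sample the pads, store each sample in the writable
    memory (claim 3), and send it to the remote monitoring facility via
    the telemetry transmitter (claim 12)."""
    for _ in range(n_samples):
        signals = read_sensor_pads()
        record.append(signals)   # local record in writable memory
        transmit(signals)        # telemetry to the monitoring facility
    return record
```

In a real device `read_sensor_pads` would read an ADC and the loop would be driven by a timer at the sampling period, which the claims leave unspecified ("periodically").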
2,600
10,947
10,947
15,318,096
2,627
Embodiments of the present invention provide a terminal, a protective case, and a sensing method. The terminal is suitable for removable installation of a protective case that protects a touchscreen of the terminal, and includes: a capacitance detection module, configured to detect a capacitance value of a capacitor module in the touchscreen, where the capacitance value is generated according to the capacitor module and a capacitive sensing body in the protective case; and a processing module, configured to: when it is determined that a change of the capacitance value detected by the capacitance detection module conforms to a first preset rule, determine that the protective case is far away from the terminal; and when it is determined that the change of the capacitance value conforms to a second preset rule, determine that the protective case is near the terminal.
1-4. (canceled) 5. A protective case, wherein the protective case is removably installed on a terminal to protect a touchscreen of the terminal, comprising: a capacitive sensing body, configured to generate a capacitance value with a capacitor module in the touchscreen of the terminal, so that the terminal detects the capacitance value and determines, according to a preset rule, whether the protective case is far away from or near the terminal. 6. The protective case according to claim 5, wherein the capacitive sensing body is at least one type of metal, conductive fabric, conductive paint, silica gel, or conductive fiber. 7. The protective case according to claim 5, wherein the capacitive sensing body is disposed on a side that is near the terminal when the protective case covers the terminal; and the capacitive sensing body is evenly disposed in the protective case, or the capacitive sensing body is disposed at a position, corresponding to a preset position of the touchscreen of the terminal, in the protective case. 8. 
A terminal, wherein the terminal is suitable for removable installation of a protective case that protects a touchscreen of the terminal, comprising: a touchscreen comprising a capacitance detection module and a capacitor module; a memory; and a processor coupled to the memory, wherein the capacitance detection module is configured to detect a capacitance value of the capacitor module, and the capacitance value is generated according to the capacitor module and a capacitive sensing body in the protective case; and the processor is configured to: when it is determined that a change of the capacitance value detected by the capacitance detection module conforms to an acquired first preset rule stored in the memory, determine that the protective case is far away from the terminal; and when it is determined that the change of the capacitance value conforms to an acquired second preset rule stored in the memory, determine that the protective case is near the terminal. 9. The terminal according to claim 8, wherein the processor is specifically configured to: when it is determined that the capacitance value detected by the capacitance detection module is greater than a first preset threshold or it is determined according to the capacitance value that a charging time of the capacitor module is greater than a first preset threshold, determine that the protective case is near the terminal; and when it is determined that the capacitance value detected by the capacitance detection module is less than a second preset threshold or it is determined according to the capacitance value that a discharging time of the capacitor module is less than a second preset threshold, determine that the protective case is far away from the terminal. 10. 
The terminal according to claim 8, wherein the processor is specifically configured to: when it is determined that a sequence of triggering caused when capacitance values, of preset positions in the capacitor module, detected by the capacitance detection module exceed capacitance thresholds is the same as a first preset trigger sequence, determine that the protective case is far away from the terminal; and when it is determined that the sequence of triggering caused when the capacitance values, of the preset positions in the capacitor module, detected by the capacitance detection module exceed the capacitance thresholds is the same as a second preset trigger sequence, determine that the protective case is near the terminal. 11. The terminal according to claim 8, wherein the processor is further configured to: when it is determined that the protective case is far away from the terminal, wake up the touchscreen; and when it is determined that the protective case is near the terminal, make the touchscreen sleep. 12. A sensing method, wherein the method is applied to a terminal and the terminal is suitable for removable installation of a protective case that protects a touchscreen of the terminal, comprising: detecting a capacitance value of a capacitor module in the touchscreen, wherein the capacitance value is generated according to the capacitor module and a capacitive sensing body in the protective case; and when it is determined that a change of the detected capacitance value conforms to a first preset rule, determining that the protective case is far away from the terminal; and when it is determined that the change of the capacitance value conforms to a second preset rule, determining that the protective case is near the terminal. 13. 
The method according to claim 12, wherein when it is determined that a change of the detected capacitance value conforms to a first preset rule, determining that the protective case is far away from the terminal; and when it is determined that the change of the capacitance value conforms to a second preset rule, determining that the protective case is near the terminal comprises: when it is determined that the detected capacitance value is greater than a first preset threshold or it is determined according to the capacitance value that a charging time of the capacitor module is greater than a first preset threshold, determining that the protective case is near the terminal; and when it is determined that the detected capacitance value is less than a second preset threshold or it is determined according to the capacitance value that a discharging time of the capacitor module is less than a second preset threshold, determining that the protective case is far away from the terminal. 14. The method according to claim 12, wherein when it is determined that a change of the detected capacitance value conforms to a first preset rule, determining that the protective case is far away from the terminal; and when it is determined that the change of the capacitance value conforms to a second preset rule, determining that the protective case is near the terminal comprises: when it is determined that a sequence of triggering caused when detected capacitance values, of preset positions, in the capacitor module exceed capacitance thresholds is the same as a first preset trigger sequence, determining that the protective case is far away from the terminal; and when it is determined that the sequence of triggering caused when the detected capacitance values, of the preset positions, in the capacitor module exceed the capacitance thresholds is the same as a second preset trigger sequence, determining that the protective case is near the terminal. 15. 
The method according to claim 12, wherein after when it is determined that a change of the detected capacitance value conforms to a first preset rule, determining that the protective case is far away from the terminal; and when it is determined that the change of the capacitance value conforms to a second preset rule, determining that the protective case is near the terminal, the method further comprises: when it is determined that the protective case is far away from the terminal, waking up the touchscreen; and when it is determined that the protective case is near the terminal, making the touchscreen sleep. 16. A computer readable medium storing instructions to cause a terminal to perform operations comprising: detecting a capacitance value of a capacitor module in the touchscreen, wherein the capacitance value is generated according to the capacitor module and a capacitive sensing body in the protective case; and when it is determined that a change of the detected capacitance value conforms to a first preset rule, determining that the protective case is far away from the terminal; and when it is determined that the change of the capacitance value conforms to a second preset rule, determining that the protective case is near the terminal. 17. 
The computer readable medium according to claim 12, wherein when it is determined that a change of the detected capacitance value conforms to a first preset rule, determining that the protective case is far away from the terminal; and when it is determined that the change of the capacitance value conforms to a second preset rule, determining that the protective case is near the terminal comprises: when it is determined that the detected capacitance value is greater than a first preset threshold or it is determined according to the capacitance value that a charging time of the capacitor module is greater than a first preset threshold, determining that the protective case is near the terminal; and when it is determined that the detected capacitance value is less than a second preset threshold or it is determined according to the capacitance value that a discharging time of the capacitor module is less than a second preset threshold, determining that the protective case is far away from the terminal. 18. The computer readable medium according to claim 12, wherein when it is determined that a change of the detected capacitance value conforms to a first preset rule, determining that the protective case is far away from the terminal; and when it is determined that the change of the capacitance value conforms to a second preset rule, determining that the protective case is near the terminal comprises: when it is determined that a sequence of triggering caused when detected capacitance values, of preset positions, in the capacitor module exceed capacitance thresholds is the same as a first preset trigger sequence, determining that the protective case is far away from the terminal; and when it is determined that the sequence of triggering caused when the detected capacitance values, of the preset positions, in the capacitor module exceed the capacitance thresholds is the same as a second preset trigger sequence, determining that the protective case is near the terminal. 19. 
The computer readable medium according to claim 12, wherein after when it is determined that a change of the detected capacitance value conforms to a first preset rule, determining that the protective case is far away from the terminal; and when it is determined that the change of the capacitance value conforms to a second preset rule, determining that the protective case is near the terminal, the method further comprises: when it is determined that the protective case is far away from the terminal, waking up the touchscreen; and when it is determined that the protective case is near the terminal, making the touchscreen sleep.
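Claims 9 and 13 give the threshold form of the two preset rules (high capacitance means the case is near, low means it is far), and claims 11 and 15 give the resulting wake/sleep behavior. A minimal Python sketch of that logic; the threshold values and names are hypothetical, since the patent leaves them unspecified:

```python
# Hypothetical thresholds; the claims only say "first/second preset threshold".
FIRST_PRESET_THRESHOLD = 120.0   # capacitance above this -> case is near
SECOND_PRESET_THRESHOLD = 40.0   # capacitance below this -> case is far away

def classify_case_state(capacitance):
    """Apply the threshold form of the first and second preset rules
    (claims 9 and 13)."""
    if capacitance > FIRST_PRESET_THRESHOLD:
        return "near"
    if capacitance < SECOND_PRESET_THRESHOLD:
        return "far"
    return "unknown"

def update_touchscreen(capacitance, screen):
    """Claims 11 and 15: wake the touchscreen when the case moves away,
    and make it sleep when the case is near."""
    state = classify_case_state(capacitance)
    if state == "far":
        screen["awake"] = True
    elif state == "near":
        screen["awake"] = False
    return screen
```

The alternative rule in claims 10 and 14 would instead compare the sequence in which per-position capacitance values cross their thresholds against a stored trigger sequence.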
2,600
10,948
10,948
15,995,877
2,637
In one implementation, a receiver has a module to calculate the cross-correlation between a portion of a digital representation of a received signal and a reference signal. The receiver also has a module to generate an estimate of a portion of a message potentially included in the digital representation of the received signal and a screening module to determine the likelihood that the received signal includes a message. For a received signal that is determined likely to include a message, the receiver includes a carrier refinement module to shift the frequency of carrier pulses in the digital representation of the received signal toward a desired frequency and to align the phase of carrier pulses in the digital representation of the received signal with a desired phase and a coherent matched filter to recover the message from the digital representation of the received signal.
1. A receiver for receiving 1090 MHz Mode S Extended Squitter (“ES”) ADS-B messages comprising: an analog-to-digital converter configured to convert a received analog signal into a digital representation of the received signal; a carrier detection module configured to determine if a spectral component within a range of 1090 MHz is present within a portion of the digital representation of the received signal; a cross-correlation module configured to: calculate, responsive to a determination by the carrier detection module that a spectral component within the range of 1090 MHz is present within the portion of the digital representation of the received signal, a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for a specific portion of a 1090 MHz Mode S ES ADS-B message, the calculated measure of the cross-correlation representing a first measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message, and determine if the first measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a first condition; a signal estimator module configured to generate, responsive to a determination that the first measure satisfies the first condition, an estimate of a portion of a 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal corresponding to the portion of the digital representation of the received signal; a screening module configured to: generate a feature vector representing n≥2 features of the estimate of the portion of the 1090 MHz Mode S ES ADS-B potentially included in the digital representation of the received signal, project the feature vector into a corresponding n-dimensional feature space, determine, based on the projection of the feature vector into the 
feature space, a second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message, and determine if the second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a second condition; a carrier refinement module configured to shift the frequency of carrier pulses in the digital representation of the received signal toward a desired frequency and to align the phase of carrier pulses in the digital representation of the received signal with a desired phase responsive to a determination that the second measure satisfies the second condition; and a coherent matched filter that is phase-matched to the desired phase and configured to recover a 1090 MHz Mode S ES ADS-B message from the digital representation of the received signal. 2. The receiver of claim 1, wherein: the screening module is configured to: generate a feature vector representing: a measure of pulse amplitude consistency of pulses within the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal, a measure of the phase consistency of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal, and a measure of the residual phase error of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal; project the feature vector into a feature space having at least three dimensions representing pulse amplitude consistency, phase consistency, and residual phase error; determine a distance, within the feature space, from the projection of the feature vector to a cluster representing expected pulse amplitude consistency of pulses within a 1090 MHz Mode S ES ADS-B message, expected phase consistency within a 1090 MHz Mode S ES 
ADS-B message, and expected residual phase error of a 1090 MHz Mode S ES ADS-B message; and determine if the second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a second condition by determining if the distance, within the feature space, from the projection of the feature vector to the cluster is less than a defined threshold value. 3. The receiver of claim 2, wherein the screening module is configured to determine a distance, within the feature space, from the projection of the feature vector to the cluster by determining a Mahalanobis distance from the projection of the feature vector to a distribution representing expected pulse amplitude consistency of pulses within a 1090 MHz Mode S ES ADS-B message, expected phase consistency within a 1090 MHz Mode S ES ADS-B message, and expected residual phase error of a 1090 MHz Mode S ES ADS-B message. 4. The receiver of claim 1, wherein the cross-correlation module is configured to calculate a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for a specific portion of a 1090 MHz Mode S ES ADS-B message by calculating a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for the preamble and at least the next 5 bit periods of a 1090 MHz Mode S ES ADS-B message. 5. 
The receiver of claim 1, wherein: the receiver further comprises a constant false alarm rate detection module configured to determine if the power in the portion of the digital representation of the received signal exceeds a threshold level; and the cross-correlation module is configured to calculate the measure of the cross-correlation between the portion of the digital representation of the received signal and the reference signal responsive to a determination by the carrier detection module that a spectral component within the range of 1090 MHz is present within the portion of the digital representation of the received signal and a determination by the constant false alarm rate detection module that the power in the portion of the digital representation of the received signal exceeds the threshold level. 6. The receiver of claim 1 wherein the signal estimator module is a minimum mean square error signal estimator configured to generate, responsive to a determination that the first measure satisfies the first condition, a minimum mean square error estimation of a portion of a 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal corresponding to the portion of the digital representation of the received signal. 7. The receiver of claim 1, wherein the analog-to-digital converter is configured to sample the received analog signal at least 15 times per μsec. 8. The receiver of claim 1, wherein the carrier detection module is configured to: generate a Fourier transform of at least a segment of the digital representation of the received signal that represents the segment of the digital representation of the received signal in the frequency domain, and determine if a spectral component within a range of 1090 MHz is present within a portion of the digital representation of the received signal based on the Fourier transform of the segment of the digital representation of the received signal. 9. 
The receiver of claim 1, wherein the carrier detection module, the cross-correlation module, the signal estimator module, the screening module, the carrier refinement module, and the coherent matched filter are implemented in one or more field programmable gate arrays. 10. The receiver of claim 1, wherein the carrier detection module, the cross-correlation module, the signal estimator module, the screening module, the carrier refinement module, and the coherent matched filter are implemented by one or more microprocessors executing machine-readable instructions. 11. The receiver of claim 1, wherein the cross-correlation module is configured to determine if the first measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a first condition by determining if the calculated measure of the cross-correlation exceeds a defined threshold value. 12. The receiver of claim 1, wherein the carrier refinement module rotates energy of the digital representation of the received signal from the imaginary axis to the real axis. 13. The receiver of claim 1, wherein the impulse response of the coherent matched filter is a 0.5 μsec 1090 MHz pulse with the desired phase. 14. The receiver of claim 1, wherein the receiver is configured to be integrated with a satellite and to receive a 1090 MHz Mode S ES ADS-B message in low-Earth orbit. 15. The receiver of claim 1, wherein the receiver is configured to successfully recover a 1090 MHz Mode S ES ADS-B message from a received signal when the Eb/No of the digital representation of the received signal is not more than 10 dB. 16. 
A method for recovering a 1090 MHz Mode S Extended Squitter (“ES”) ADS-B message from a received signal, the method comprising: converting a received analog signal into a digital representation of the received signal; determining if a spectral component within a range of 1090 MHz is present within a portion of the digital representation of the received signal; calculating, as a consequence of having determined that a spectral component within the range of 1090 MHz is present within the portion of the digital representation of the received signal, a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for a specific portion of a 1090 MHz Mode S ES ADS-B message, the calculated measure of the cross-correlation representing a first measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message; determining if the first measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a first condition; generating, as a consequence of having determined that the first measure satisfies the first condition, an estimate of a portion of a 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal corresponding to the portion of the digital representation of the received signal; generating a feature vector representing n≥2 features of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal; projecting the feature vector into a corresponding n-dimensional feature space; determining, based on the projection of the feature vector into the feature space, a second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message; determining if the second 
measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a second condition; and as a consequence of having determined that the second measure satisfies the second condition: shifting the frequency of carrier pulses in the digital representation of the received signal toward a desired frequency, aligning the phase of carrier pulses in the digital representation of the received signal with a desired phase, and using a coherent matched filter that is phase-matched to the desired phase to recover a 1090 MHz Mode S ES ADS-B message from the digital representation of the received signal. 17. The method of claim 16, wherein: generating a feature vector representing n≥2 features of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal includes generating a feature vector representing: a measure of pulse amplitude consistency of pulses within the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal, a measure of the phase consistency of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal, and a measure of the residual phase error of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal; projecting the feature vector into a corresponding n-dimensional feature space includes projecting the feature vector into a feature space having at least three dimensions representing pulse amplitude consistency, phase consistency, and residual phase error; determining, based on the projection of the feature vector into the feature space, a second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message 
includes determining a distance, within the feature space, from the projection of the feature vector to a cluster representing expected pulse amplitude consistency of pulses within a 1090 MHz Mode S ES ADS-B message, expected phase consistency within a 1090 MHz Mode S ES ADS-B message, and expected residual phase error of a 1090 MHz Mode S ES ADS-B message; and determining if the second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a second condition includes determining if the distance, within the feature space, from the projection of the feature vector to the cluster is less than a defined threshold value. 18. The method of claim 17, wherein determining a distance, within the feature space, from the projection of the feature vector to the cluster includes determining a Mahalanobis distance from the projection of the feature vector to a distribution representing expected pulse amplitude consistency of pulses within a 1090 MHz Mode S ES ADS-B message, expected phase consistency within a 1090 MHz Mode S ES ADS-B message, and expected residual phase error of a 1090 MHz Mode S ES ADS-B message. 19. The method of claim 16, wherein calculating a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for a specific portion of a 1090 MHz Mode S ES ADS-B message includes calculating a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for the preamble and at least the next 5 bit periods of a 1090 MHz Mode S ES ADS-B message. 20. 
A receiver for receiving 1090 MHz Mode S Extended Squitter (“ES”) ADS-B messages in low-Earth orbit comprising: a field programmable gate array having: a first logic block configured to detect the presence of a spectral component within a range of 1090 MHz within a portion of a digital representation of a received signal, a second logic block configured to compare, as a consequence of having detected the presence of the spectral component within the range of 1090 MHz within the portion of the digital representation of the received signal, the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for the first n≥13 bit periods of a 1090 MHz Mode S ES ADS-B message, a third logic block configured to make a first determination that the digital representation of the received signal is likely to include a 1090 MHz Mode S ES ADS-B message as a result of the comparison of the portion of the digital representation of the received signal and the reference signal, a fourth logic block configured to generate, as a consequence of having made the first determination that the digital representation of the received signal is likely to include a 1090 MHz Mode S ES ADS-B message, a minimum mean square error estimation of the first n bit periods of a 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal from the portion of the digital representation of the received signal, a fifth logic block configured to compare features of the minimum mean square error estimation of the first n bit periods of the estimate of the 1090 MHz Mode S ES ADS-B message potentially included in the received signal to expected features of the first n bit periods of a 1090 MHz Mode S ES ADS-B message, a sixth logic block configured to make a second determination that the digital representation of the received signal is likely to include a 1090 MHz Mode S ES ADS-B message as a result of 
the comparison of the features of the minimum mean square error estimation of the first n bit periods of the estimate of the 1090 MHz Mode S ES ADS-B message potentially included in the received signal to expected features of the first n bit periods of a 1090 MHz Mode S ES ADS-B message, a seventh logic block configured to shift the frequency of carrier pulses in the digital representation of the received signal toward a desired frequency and to align the phase of carrier pulses in the digital representation of the received signal with a desired phase responsive to the second determination, and an eighth logic block configured to provide a coherent matched filter that is phase-matched to the desired phase to recover, from the digital representation of the received signal, a 1090 MHz Mode S ES ADS-B message.
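Claims 2–3 (and 17–18) screen candidate messages by projecting a feature vector of pulse amplitude consistency, phase consistency, and residual phase error into a feature space and thresholding its Mahalanobis distance to a cluster of expected values. A minimal sketch of that decision rule follows; the cluster statistics and threshold are invented placeholders for illustration, not values from this document:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance from feature vector x to a cluster with the given mean/covariance."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def passes_screening(features, cluster_mean, cluster_cov, threshold):
    """Second-condition check: distance to the expected-feature cluster below a defined threshold."""
    return mahalanobis(features, cluster_mean, cluster_cov) < threshold
```

With an identity covariance this reduces to Euclidean distance; in practice the covariance would be estimated from the expected feature distribution of genuine messages, which is what makes the distance account for correlated, unequally scaled features.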
In one implementation, a receiver has a module to calculate the cross-correlation between a portion of a digital representation of a received signal and a reference signal. The receiver also has a module to generate an estimate of a portion of a message potentially included in the digital representation of the received signal and a screening module to determine the likelihood that the received signal includes a message. For a received signal that is determined likely to include a message, the receiver includes a carrier refinement module to shift the frequency of carrier pulses in the digital representation of the received signal toward a desired frequency and to align the phase of carrier pulses in the digital representation of the received signal with a desired phase and a coherent matched filter to recover the message from the digital representation of the received signal.1. A receiver for receiving 1090 MHz Mode S Extended Squitter (“ES”) ADS-B messages comprising: an analog-to-digital converter configured to convert a received analog signal into a digital representation of the received signal; a carrier detection module configured to determine if a spectral component within a range of 1090 MHz is present within a portion of the digital representation of the received signal; a cross-correlation module configured to: calculate, responsive to a determination by the carrier detection module that a spectral component within the range of 1090 MHz is present within the portion of the digital representation of the received signal, a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for a specific portion of a 1090 MHz Mode S ES ADS-B message, the calculated measure of the cross-correlation representing a first measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message, and determine if 
the first measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a first condition; a signal estimator module configured to generate, responsive to a determination that the first measure satisfies the first condition, an estimate of a portion of a 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal corresponding to the portion of the digital representation of the received signal; a screening module configured to: generate a feature vector representing n≥2 features of the estimate of the portion of the 1090 MHz Mode S ES ADS-B potentially included in the digital representation of the received signal, project the feature vector into a corresponding n-dimensional feature space, determine, based on the projection of the feature vector into the feature space, a second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message, and determine if the second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a second condition; a carrier refinement module configured to shift the frequency of carrier pulses in the digital representation of the received signal toward a desired frequency and to align the phase of carrier pulses in the digital representation of the received signal with a desired phase responsive to a determination that the second measure satisfies the second condition; and a coherent matched filter that is phase-matched to the desired phase and configured to recover a 1090 MHz Mode S ES ADS-B message from the digital representation of the received signal. 2. 
The receiver of claim 1, wherein: the screening module is configured to: generate a feature vector representing: a measure of pule amplitude consistency of pulses within the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal, a measure of the phase consistency of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal, and a measure of the residual phase error of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal; project the feature vector into a feature space having at least three dimensions representing pulse amplitude consistency, phase consistency, and residual phase error; determine a distance, within the feature space, from the projection of the feature vector to a cluster representing expected pulse amplitude consistency of pulses within a 1090 MHz Mode S ES ADS-B message, expected phase consistency within a 1090 MHz Mode S ES ADS-B message, and expected residual phase error of a 1090 MHz Mode S ES ADS-B message; and determine if the second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a second condition by determining if the distance, within the feature space, from the projection of the feature vector to the cluster is less than a defined threshold value. 3. 
The receiver of claim 2, wherein the screening module is configured to determine a distance, within the feature space, from the projection of the feature vector to the cluster by determining a Mahalanobis distance from the projection of the feature vector to a distribution representing expected pulse amplitude consistency of pulses within a 1090 MHz Mode S ES ADS-B message, expected phase consistency within a 1090 MHz Mode S ES ADS-B message, and expected residual phase error of a 1090 MHz Mode S ES ADS-B message. 4. The receiver of claim 1, wherein the cross-correlation module is configured to calculate a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for a specific portion of a 1090 MHz Mode S ES ADS-B message by calculating a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for the preamble and at least the next 5 bit periods of a 1090 MHz Mode S ES ADS-B message. 5. The receiver of claim 1, wherein: the receiver further comprises a constant false alarm rate detection module configured to determine if the power in the portion of the digital representation of the received signal exceeds a threshold level; and the cross-correlation module is configured to calculate the measure of the cross-correlation between the portion of the digital representation of the received signal and the reference signal responsive to a determination by the carrier detection module that a spectral component within the range of 1090 MHz is present within the portion of the digital representation of the received signal and a determination by the constant false alarm rate detection module that the power in the portion of the digital representation of the received signal exceeds the threshold level. 6. 
The receiver of claim 1 wherein the signal estimator module is a minimum mean square error signal estimator configured to generate, responsive to a determination that the first measure satisfies the first condition, a minimum mean square error estimation of a portion of a 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal corresponding to the portion of the digital representation of the received signal. 7. The receiver of claim 1, wherein the analog-to-digital converter is configured to sample the received analog signal at least 15 times per μsec. 8. The receiver of claim 1, wherein the carrier detection module is configured to: generate a Fourier transform of at least a segment of the digital representation of the received signal that represents the segment of the digital representation of the received signal in the frequency domain, and determine if a spectral component within a range of 1090 MHz is present within a portion of the digital representation of the received signal based on the Fourier transform of the segment of the digital representation of the received signal. 9. The receiver of claim 1, wherein the carrier detection module, the cross-correlation module, the signal estimator module, the screening module, the carrier refinement module, and the coherent matched filter are implemented in one or more field programmable gate arrays. 10. The receiver of claim 1, wherein the carrier detection module, the cross-correlation module, the signal estimator module, the screening module, the carrier refinement module, and the coherent matched filter are implemented by one or more microprocessors executing machine-readable instructions. 11. 
The receiver of claim 1, wherein the cross-correlation module is configured to determine if the first measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a first condition by determining if the calculated measure of the cross-correlation exceeds a defined threshold value. 12. The receiver of claim 1, wherein the carrier refinement module rotates energy of the digital representation of the received signal from the imaginary axis to the real axis. 13. The receiver of claim 1, wherein the impulse response of the coherent matched filter is a 0.5 μsec 1090 MHz pulse with the desired phase. 14. The receiver of claim 1, wherein the receiver is configured to be integrated with a satellite and to receive a 1090 MHz Mode S ES ADS-B message in low-Earth orbit. 15. The receiver of claim 1, wherein the receiver is configured to successfully recover a 1090 MHz Mode S ES ADS-B message from a received signal when the Eb/No of the digital representation of the received signal is not more than 10 dB. 16. 
A method for recovering a 1090 MHz Mode S Extended Squitter (“ES”) ADS-B message from a received signal, the method comprising: converting a received analog signal into a digital representation of the received signal; determining if a spectral component within a range of 1090 MHz is present within a portion of the digital representation of the received signal; calculating, as a consequence of having determined that a spectral component within the range of 1090 MHz is present within the portion of the digital representation of the received signal, a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for a specific portion of a 1090 MHz Mode S ES ADS-B message, the calculated measure of the cross-correlation representing a first measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message; determining if the first measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a first condition; generating, as a consequence of having determined that the first measure satisfies the first condition, an estimate of a portion of a 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal corresponding to the portion of the digital representation of the received signal; generating a feature vector representing n≥2 features of the estimate of the portion of the 1090 MHz Mode S ES ADS-B potentially included in the digital representation of the received signal; projecting the feature vector into a corresponding n-dimensional feature space; determining, based on the projection of the feature vector into the feature space, a second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message; determining if the second 
measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a second condition; and as a consequence of having determined that the second measure satisfies the second condition: shifting the frequency of carrier pulses in the digital representation of the received signal toward a desired frequency, aligning the phase of carrier pulses in the digital representation of the received signal with a desired phase, and using a coherent matched filter that is phase-matched to the desired phase to recover a 1090 MHz Mode S ES ADS-B message from the digital representation of the received signal. 17. The method of claim 16, wherein: generating a feature vector representing n≥2 features of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal includes generating a feature vector representing: a measure of pule amplitude consistency of pulses within the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal, a measure of the phase consistency of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal, and a measure of the residual phase error of the estimate of the portion of the 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal; projecting the feature vector into a corresponding n-dimensional feature space includes projecting the feature vector into a feature space having at least three dimensions representing pulse amplitude consistency, phase consistency, and residual phase error; determining, based on the projection of the feature vector into the feature space, a second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message 
includes determining a distance, within the feature space, from the projection of the feature vector to a cluster representing expected pulse amplitude consistency of pulses within a 1090 MHz Mode S ES ADS-B message, expected phase consistency within a 1090 MHz Mode S ES ADS-B message, and expected residual phase error of a 1090 MHz Mode S ES ADS-B message; and determining if the second measure of the likelihood that the digital representation of the received signal includes a 1090 MHz Mode S ES ADS-B message satisfies a second condition includes determining if the distance, within the feature space, from the projection of the feature vector to the cluster is less than a defined threshold value. 18. The method of claim 17, wherein determining a distance, within the feature space, from the projection of the feature vector to the cluster includes determining a Mahalanobis distance from the projection of the feature vector to a distribution representing expected pulse amplitude consistency of pulses within a 1090 MHz Mode S ES ADS-B message, expected phase consistency within a 1090 MHz Mode S ES ADS-B message, and expected residual phase error of a 1090 MHz Mode S ES ADS-B message. 19. The method of claim 16, wherein calculating a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for a specific portion of a 1090 MHz Mode S ES ADS-B message includes calculating a measure of the cross-correlation between the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for the preamble and at least the next 5 bit periods of a 1090 MHz Mode S ES ADS-B message. 20. 
A receiver for receiving 1090 MHz Mode S Extended Squitter (“ES”) ADS-B messages in low-Earth orbit comprising: a field programmable gate array having: a first logic block configured to detect the presence of a spectral component within a range of 1090 MHz within a portion of a digital representation of a received signal, a second logic block configured to compare, as a consequence of having detected the presence of the spectral component within the range of 1090 MHz within the portion of the digital representation of the received signal, the portion of the digital representation of the received signal and a reference signal representing an expected pulse pattern for the first n≥13 bit periods of a 1090 MHz Mode S ES ADS-B message, a third logical block configured to make a first determination that the digital representation of the received signal is likely to include a 1090 MHz Mode S ES ADS-B message as a result of the comparison of the portion of the digital representation of the received signal and the reference signal, a fourth logic block configured to generate, as a consequence of having made the first determination that the digital representation of the received signal is likely to include a 1090 MHz Mode S ES ADS-B message, a minimum mean square error estimation of the first n bit periods of a 1090 MHz Mode S ES ADS-B message potentially included in the digital representation of the received signal from the portion of the digital representation of the received signal, a fifth logic block configured to compare features of the minimum mean square error estimation of the first n bit periods of the estimate of the 1090 MHz Mode S ES ADS-B message potentially included in the received signal to expected features of the first n bit periods of a 1090 MHz Mode S ES ADS-B message, a sixth logic block configured to make a second determination that the digital representation of the received signal is likely to include a 1090 MHz Mode S ES ADS-B message as a result of 
the comparison of the features of the minimum mean square error estimation of the first n bit periods of the estimate of the 1090 MHz Mode S ES ADS-B message potentially included in the received signal to expected features of the first n bit periods of a 1090 MHz Mode S ES ADS-B message, a seventh logic block configured to shift the frequency of carrier pulses in the digital representation of the received signal toward a desired frequency and to align the phase of carrier pulses in the digital representation of the received signal with a desired phase responsive to the second determination, and an eighth logic block configured to provide a coherent matched filter that is phase-matched to the desired phase to recover, from the digital representation of the received signal, a 1090 MHz Mode S ES ADS-B message.
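The first three logic blocks of the claimed receiver amount to a preamble-correlation test: detect energy near 1090 MHz, compare a window of samples against the expected pulse pattern, and make a first determination that a message is likely present when the match is strong enough. A minimal sketch of that test over amplitude samples, assuming 2 samples per microsecond and a hand-picked correlation threshold (both illustrative choices, not taken from the claims):

```python
import numpy as np

# Mode S preamble: four 0.5 us pulses at 0.0, 1.0, 3.5 and 4.5 us.
# At an assumed 2 samples per microsecond the 8 us preamble occupies
# 16 half-microsecond slots, with pulses in slots 0, 2, 7 and 9.
PREAMBLE = np.zeros(16)
PREAMBLE[[0, 2, 7, 9]] = 1.0

def detect_preamble(amplitude, threshold=0.75):
    """Return sample offsets where the normalized correlation between a
    window of the amplitude samples and the preamble template exceeds
    `threshold` (a hand-picked tuning value, not from the claims)."""
    template = PREAMBLE - PREAMBLE.mean()
    hits = []
    for i in range(len(amplitude) - len(PREAMBLE) + 1):
        window = amplitude[i:i + len(PREAMBLE)]
        w = window - window.mean()
        denom = np.linalg.norm(w) * np.linalg.norm(template)
        if denom > 0 and float(np.dot(w, template)) / denom > threshold:
            hits.append(i)
    return hits
```

A real receiver would follow a hit with the claimed minimum mean square error bit-period estimation, frequency/phase alignment, and coherent matched filtering; this sketch stops at the first determination.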
2,600
10,949
10,949
15,690,811
2,677
A method in a data processing system comprising a processor and a memory, for processing data entries, the method comprising receiving, by the data processing system, a data entry, parsing, by the data processing system, the data entry for features by using natural language processing (NLP), identifying, by the data processing system, data sets from a corpus of information that are relevant to the data entry, and linking, by the data processing system, the identified data sets to the data entry.
1. A method, in a data processing system comprising a processor and a memory, for processing data entries, the method comprising: receiving, by the data processing system, a data entry; parsing, by the data processing system, the data entry for features by using natural language processing (NLP); identifying, by the data processing system, data sets from a corpus of information that are relevant to the data entry; and linking, by the data processing system, the identified data sets to the data entry. 2. The method of claim 1 wherein the data entry comprises a data structure that includes text, characters, and numbers that are arranged in expressions selected from the group consisting of: terms, acronyms, numbers, codes, and phrases. 3. The method of claim 1 wherein the data entry comprises model rules that are related to data sets including regulations, policies, obligations, or guidance from the corpus of information. 4. The method of claim 1 wherein receiving the data entry further comprises receiving, by the data processing system, a selection of data from a local or remote database. 5. The method of claim 1 wherein receiving the data entry further comprises receiving, by the data processing system, a manual data entry from a client device. 6. The method of claim 1 wherein parsing the data entry further comprises: decomposing, by the data processing system, the data entry into text fragments; comparing, by the data processing system, the text fragments to the corpus of information; identifying, by the data processing system, the features based on the comparison; and assigning, by the data processing system, scores to the text fragments, wherein the scores are indicative of a degree to which the identified features of the text fragments match one or more data sets from the corpus of information. 7. 
The method of claim 6 wherein identifying data sets from a corpus of information that are relevant to the data entry further comprises identifying the data sets based on the scores of the text fragments. 8. The method of claim 1 wherein linking the identified data sets to the data entry further comprises adding one or more links to the data entry. 9. The method of claim 1 wherein parsing the data entry further comprises identifying elements in the data entry that are semantically or logically related to the data sets. 10. A computer system for processing data entries, the computer system comprising a computer processor, a computer memory operatively coupled to the computer processor and the computer memory having disposed within it computer program instructions that, when executed by the processor, cause the computer system to carry out the steps of: receiving a data entry from a database; parsing the data entry for features by using natural language processing (NLP); identifying data sets from a corpus of information that are relevant to the data entry; linking the identified data sets to the data entry; detecting a change to the identified data sets; and indicating the change to the identified data set in the data entry. 11. 
A computer system for generating data entries, the computer system comprising a computer processor, a computer memory operatively coupled to the computer processor and the computer memory having disposed within it computer program instructions that, when executed by the processor, cause the computer system to carry out the steps of: identifying features of one or more data sets from a corpus of information; clustering the one or more data sets based on the identified features; generating one or more data entries from the clustered data sets; requesting review of the one or more generated data entries; receiving the review of the one or more generated data entries; and storing the one or more generated data entries in a files database based on the review. 12. The computer system of claim 11 wherein identifying the features of the one or more data sets further comprises the computer system identifying given terms or phrases from structured and unstructured data. 13. The computer system of claim 11 wherein identifying the features of the one or more data sets further comprises the computer system analyzing metadata and tags associated with the one or more data sets, the metadata and tags including representations of the features of the one or more data sets. 14. The computer system of claim 11 wherein clustering the one or more data sets further comprises the computer system clustering the one or more data sets according to a degree of similarity of the identified features. 15. The computer system of claim 11 wherein clustering the one or more data sets further comprises the computer system generating clusters of the one or more data sets. 16. The computer system of claim 15 wherein the one or more generated data entries are representative of the generated clusters. 17. 
The computer system of claim 11 wherein receiving the review of the one or more generated data entries further comprises the computer system receiving acceptance of the one or more generated data entries. 18. The computer system of claim 11 wherein receiving the review of the one or more generated data entries further comprises the computer system receiving edited versions of the one or more generated data entries. 19. The computer system of claim 18 further comprising storing the edited versions of the one or more generated data entries in the files database. 20. A computer program product for processing data entries, said computer program product comprising: a computer readable storage medium having stored thereon: program instructions executable by a processor to cause the processor to receive a data entry; program instructions executable by a processor to cause the processor to parse the data entry for features by using natural language processing (NLP); program instructions executable by a processor to cause the processor to identify data sets from a corpus of information that are relevant to the data entry; and program instructions executable by the processor to cause the processor to link the identified data sets to the data entry. 21. The computer program product of claim 20 wherein the data entry comprises model rules that are related to data sets including regulations, policies, obligations, or guidance from the corpus of information. 22. The computer program product of claim 20 further comprising program instructions executable by the computer to cause the computer to receive a selection of data from a local or remote database. 23. The computer program product of claim 20 further comprising program instructions executable by the computer to cause the computer to receive a manual data entry from a client device. 24. 
The computer program product of claim 20 further comprising: program instructions executable by the computer to cause the computer to decompose the data entry into text fragments; program instructions executable by the computer to cause the computer to compare the text fragments to the corpus of information; program instructions executable by the computer to cause the computer to identify the features based on the comparison; and program instructions executable by the computer to cause the computer to assign scores to the text fragments, wherein the scores are indicative of a degree to which the identified features of the text fragments match one or more data sets from the corpus of information. 25. A computer program product for processing data entries, said computer program product comprising: a computer readable storage medium having stored thereon: program instructions executable by a processor to cause the processor to receive a data entry from a database; program instructions executable by the processor to cause the processor to parse the data entry for features by using NLP; program instructions executable by the processor to cause the processor to identify data sets from a corpus of information that are relevant to the data entry; program instructions executable by the processor to cause the processor to link the identified data sets to the data entry; program instructions executable by the processor to cause the processor to detect a change to the identified data sets; and program instructions executable by the processor to cause the processor to indicate the change to the identified data set in the data entry.
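Claim 6 spells out a concrete pipeline for the NLP parsing step: decompose the data entry into text fragments, compare each fragment to the corpus, and assign scores indicating the degree of match, which claim 7 then uses to identify the relevant data sets. A minimal sketch, assuming sentence-level fragments and token-set Jaccard overlap as the scoring scheme (the claims fix neither choice):

```python
import re

def tokenize(text):
    """Lowercase word tokens as a set (an illustrative simplification)."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score_fragments(data_entry, corpus):
    """Decompose `data_entry` into fragments at '.' and ';' boundaries and
    score each fragment against every data set in `corpus` (a dict of
    name -> text). Returns (fragment, best_matching_set, score) tuples."""
    fragments = [f.strip() for f in re.split(r"[.;]", data_entry) if f.strip()]
    scored = []
    for frag in fragments:
        ftok = tokenize(frag)
        best_name, best_score = None, 0.0
        for name, text in corpus.items():
            ctok = tokenize(text)
            union = ftok | ctok
            s = len(ftok & ctok) / len(union) if union else 0.0
            if s > best_score:
                best_name, best_score = name, s
        scored.append((frag, best_name, best_score))
    return scored

def link_relevant(scored, threshold=0.2):
    """Identify the data sets to link: those whose best fragment score
    clears a threshold (the threshold value is an assumption)."""
    return sorted({name for _, name, s in scored if name and s >= threshold})
```

In a production system the Jaccard comparison would typically be replaced by a trained NLP similarity model; the structure of decompose, compare, score, link follows the claim language.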
2,600
10,950
10,950
16,411,743
2,668
An input interface is configured to receive at least one sensor signal corresponding to information of the exterior of the vehicle sensed by at least one sensor. A processor is configured to, based on the sensor signal, generate: a first data corresponding to first information sensed in a first area; and a second data corresponding to second information sensed in a second area located outside the first area. An output interface is configured to output the first data and the second data independently from one another.
1. A sensor data generating device adapted to be mounted on a vehicle, comprising: an input interface configured to receive at least one sensor signal corresponding to information of the exterior of the vehicle sensed by at least one sensor; a processor configured to, based on the sensor signal, generate: a first data corresponding to first information sensed in a first area; and a second data corresponding to second information sensed in a second area located outside the first area; and an output interface configured to output the first data and the second data independently from one another. 2. A sensor data generating device adapted to be mounted on a vehicle, comprising: an input interface configured to receive at least one sensor signal corresponding to information of the exterior of the vehicle sensed by at least one sensor; a processor configured to, based on the sensor signal, generate: a first data corresponding to first information sensed in a first area including a sensing reference position of the sensor; and a second data corresponding to second information sensed in a second area not including the sensing reference position; and an output interface configured to output the first data and the second data independently from one another. 3. The sensor data generating device according to claim 1, wherein the processor is configured to generate the second data so as to include information related to at least one of recognition processing, determination processing, and analysis processing performed with respect to the second information. 4. The sensor data generating device according to claim 3, wherein the second data includes alert information that is based on the second information. 5. 
The sensor data generating device according to claim 3, wherein the processor is configured to monitor temporal change of a position of a reference arranged in the second area; and wherein the second data includes offset information of the sensor that is based on the temporal change. 6. The sensor data generating device according to claim 1, wherein the processor is configured to monitor temporal change of a position of a reference arranged in the second area, and to change a position of the first area based on the temporal change. 7. The sensor data generating device according to claim 1, wherein the at least one sensor includes a first sensor and a second sensor; and wherein the processor associated with one sensor of the first sensor and the second sensor is configured to enlarge the first area of the one sensor based on information indicating abnormality of the other sensor of the first sensor and the second sensor. 8. The sensor data generating device according to claim 1, wherein the at least one sensor includes a first sensor and a second sensor; wherein a priority is assigned in accordance with a position in a sensible area of the first sensor and a position in a sensible area of the second sensor; and wherein the processor is configured to generate the first data and the second data so as to include information indicating the priority. 9. The sensor data generating device according to claim 1, wherein a generation frequency of the second data is lower than a generation frequency of the first data. 10. The sensor data generating device according to claim 1, wherein the at least one sensor includes at least one of a LiDAR sensor, a camera, and a millimeter wave sensor. 11.
The sensor data generating device according to claim 2, wherein the processor is configured to generate the second data so as to include information related to at least one of recognition processing, determination processing, and analysis processing performed with respect to the second information. 12. The sensor data generating device according to claim 11, wherein the second data includes alert information that is based on the second information. 13. The sensor data generating device according to claim 11, wherein the processor is configured to monitor temporal change of a position of a reference arranged in the second area; and wherein the second data includes offset information of the sensor that is based on the temporal change. 14. The sensor data generating device according to claim 2, wherein the processor is configured to monitor temporal change of a position of a reference arranged in the second area, and to change a position of the first area based on the temporal change. 15. The sensor data generating device according to claim 2, wherein the at least one sensor includes a first sensor and a second sensor; and wherein the processor associated with one sensor of the first sensor and the second sensor is configured to enlarge the first area of the one sensor based on information indicating abnormality of the other sensor of the first sensor and the second sensor. 16. The sensor data generating device according to claim 2, wherein the at least one sensor includes a first sensor and a second sensor; wherein a priority is assigned in accordance with a position in a sensible area of the first sensor and a position in a sensible area of the second sensor; and wherein the processor is configured to generate the first data and the second data so as to include information indicating the priority. 17. The sensor data generating device according to claim 2, wherein a generation frequency of the second data is lower than a generation frequency of the first data. 18.
The sensor data generating device according to claim 2, wherein the at least one sensor includes at least one of a LiDAR sensor, a camera, and a millimeter wave sensor.
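Claims 1 and 2 describe partitioning a sensed scene into a first area (around the sensing reference position) and a second area outside it, with the two resulting data streams output independently of one another. A minimal sketch, assuming 2-D points and a circular first-area boundary (the claims leave the geometry open):

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def partition_sensed_points(points, ref, first_radius):
    """Split sensed points into first data (within `first_radius` of the
    sensing reference position `ref`) and second data (outside it), so the
    two can be output on independent channels. The circular boundary and
    the radius parameter are illustrative assumptions; the claims only
    require the second area to lie outside the first."""
    first, second = [], []
    for p in points:
        d2 = (p.x - ref.x) ** 2 + (p.y - ref.y) ** 2
        (first if d2 <= first_radius ** 2 else second).append(p)
    return first, second
```

Dependent claim 9 adds that the second data may be generated at a lower frequency than the first; in this sketch that would amount to calling the second-area output path only on every Nth sensing cycle.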
2,600
10,951
10,951
15,739,293
2,684
Electronic surveillance system (100) comprising an electronic surveillance bracelet (1) adapted to be secured around a limb of a wearer, the electronic surveillance bracelet (1) comprising an energy source (5), a processing unit (7), an identification generator (11), a localisation system (12), and at least one wireless communication system (9) adapted to communicate with a base station (15), characterised in that the wireless communication system (9) is adapted to receive communication from at least one further device (17) and to transmit a result of this communication to said base station (15) together with an identification signal generated by the identification generator (11).
1-22. (canceled) 23. Electronic surveillance system comprising an electronic surveillance bracelet adapted to be secured around a limb of a wearer, the electronic surveillance bracelet comprising an energy source, a processing unit, an identification generator, a localisation system, and at least one wireless communication system adapted to communicate with a base station wherein the wireless communication system is adapted to receive communication from at least one terrestrially-based further device and to transmit a result of this communication to said base station together with an identification signal generated by the identification generator. 24. Electronic surveillance system according to claim 23, wherein the at least one further device is a chemical sensor system adapted to carry out a test to detect a foreign substance such as alcohol, pharmaceuticals or drugs, said chemical sensor system being provided with a further wireless communication system adapted to communicate with said at least one wireless communication system in the electronic surveillance bracelet. 25. Electronic surveillance system according to claim 24, further comprising an interlock system adapted to be installed in a vehicle, the interlock system being adapted to communicate wirelessly with said electronic surveillance bracelet in order to permit or to prevent use of the vehicle based on a result of said test. 26. Electronic surveillance system according to claim 23, wherein the at least one further device is a wireless beacon adapted to be installed in a building. 27. Electronic surveillance system according to claim 23, wherein the at least one further device is a further electronic surveillance bracelet. 28. Electronic surveillance system according to claim 27, wherein each electronic surveillance bracelet is adapted to communicate an identification signal with the other electronic surveillance bracelet. 29. 
Electronic surveillance system according to claim 23, wherein the wireless communication bracelet comprises a memory adapted to store said result of communication with said at least one further device. 30. Electronic surveillance system according to claim 23, wherein the electronic surveillance bracelet comprises a motion sensor system, and wherein the processing unit is adapted to identify classes of movements of the electronic surveillance bracelet based on an output of the motion sensor system. 31. Method of performing electronic surveillance of an individual, comprising the steps of: providing an electronic surveillance system comprising an electronic surveillance bracelet adapted to be secured around a limb of a wearer, the electronic surveillance bracelet comprising an energy source, a processing unit, an identification generator, a localisation system, and at least one wireless communication system adapted to communicate with a base station wherein the wireless communication system is adapted to receive communication from at least one terrestrially-based further device and to transmit a result of this communication to said base station together with an identification signal generated by the identification generator; securing the electronic surveillance bracelet around a limb of a wearer; attempting to initiate communication between the electronic surveillance bracelet and the said at least one further device; initiating communication between the electronic surveillance bracelet and the base station; transmitting a result of said communication with said at least one further device, together with an identification signal generated by the identification generator, to a base station; receiving said result transmitted by the wireless communication system at the base station. 32. 
Method according to claim 31, wherein the electronic surveillance system is provided as a chemical sensor system adapted to carry out a test to detect a foreign substance such as alcohol, pharmaceuticals or drugs, said chemical sensor system being provided with a further wireless communication system adapted to communicate with said at least one wireless communication system in the electronic surveillance bracelet, and further comprising the steps of: performing a chemical test by means of the chemical sensor system; subsequently transmitting a result of said test to the electronic surveillance bracelet; and subsequently performing said step of transmitting a result of said test together with an identification signal generated by the identification generator to a base station. 33. Method according to claim 31, wherein the electronic surveillance system is provided with an interlock system adapted to be installed in a vehicle, the interlock system being adapted to communicate wirelessly with said electronic surveillance bracelet in order to permit or to prevent use of the vehicle based on a result of said test, and further comprising the steps of: performing a chemical test by means of the chemical sensor system; subsequently transmitting a result of said test to the electronic surveillance bracelet; subsequently transmitting said result of said test to said interlock system; and permitting or denying use of said vehicle. 34. Method according to claim 31, wherein the at least one further device is provided as a wireless beacon adapted to be installed in a building, and further comprising steps of: communicating with said wireless beacon; and transmitting a result of said communication to the base station. 35. 
Method according to claim 31, wherein the at least one further device is a further electronic surveillance bracelet, and further comprising steps of: receiving at said electronic surveillance bracelet an identification signal from the other electronic surveillance bracelet; and transmitting an identification signal received from the other electronic surveillance bracelet to the base station. 36. Method according to claim 35, further comprising steps of: comparing the identification signal received from the other electronic surveillance bracelet with a list of identification signals associated with individuals with whom the wearer is not allowed to interact; and if the identification signal received from the other electronic surveillance bracelet is comprised in said list, informing an authority and/or activating an audible and/or visual alarm on the wearer's electronic surveillance bracelet. 37. Method according to claim 36, wherein said comparison is carried out in the bracelet. 38. Method according to claim 36, wherein said comparison is carried out remotely and said alarm is activated in response to a command sent via said base station. 39. Method according to claim 31, further comprising the steps of: receiving communication from a plurality of said further devices; comparing a result of each of said communications with a database of further devices; and determining the location of the electronic surveillance bracelet based on said comparison. 40. Method according to claim 39, wherein a result of each of said communications comprises at least one identifier of the corresponding further device. 41. Method according to claim 40, wherein each identifier comprises at least one of: transmission frequency; type of signal encoding; protocols used; and at least one identification code such as an IP address, device name, type of device, network name. 42. 
Method according to claim 39, further comprising determining a relative signal strength between at least two signals received from corresponding further devices, this relative signal strength being compared with said database. 43. Method according to claim 39, wherein said database comprises information relating to the geographic location of a plurality of further devices and at least one of said identifiers associated with each further device. 44. Method according to claim 39, wherein said database comprises information relating to relative signal strength between at least two signals emanating from corresponding further devices, said relative signal strength being determined at a plurality of geographic locations.
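The localisation approach of claims 39-44 (comparing identifiers and relative signal strengths of observed further devices against a database of known device locations) can be sketched as follows. This is an illustrative assumption of one way such a comparison could work, not the claimed implementation; the database entries, device names, and the weighted-centroid scoring are all hypothetical.

```python
# Hypothetical sketch of the fingerprint-style localisation in claims 39-44:
# the bracelet reports identifiers and signal strengths of nearby further
# devices; the base station matches them against a database of known
# device locations. All names and the scoring scheme are illustrative.

FINGERPRINT_DB = {
    # identifier -> known geographic location (lat, lon); hypothetical data
    "beacon-lobby": (46.52, 6.63),
    "beacon-office": (46.53, 6.64),
}

def locate(observations):
    """Estimate bracelet location from observed device identifiers.

    observations: list of (identifier, signal_strength_dbm) tuples.
    Returns the signal-strength-weighted centroid of the matched devices,
    or None if no observed identifier is present in the database.
    """
    matched = [(FINGERPRINT_DB[ident], rssi)
               for ident, rssi in observations
               if ident in FINGERPRINT_DB]
    if not matched:
        return None
    # Convert dBm to linear power so stronger signals dominate the centroid.
    weights = [10 ** (rssi / 10) for _, rssi in matched]
    total = sum(weights)
    lat = sum(loc[0] * w for (loc, _), w in zip(matched, weights)) / total
    lon = sum(loc[1] * w for (loc, _), w in zip(matched, weights)) / total
    return (lat, lon)
```

A single strong observation simply returns that device's stored location; with several observations, the relative signal strengths (claims 42-44) weight the estimate toward the nearest devices.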
2,600
10,952
10,952
16,253,858
2,627
A method of processing signals from a touch panel for combined capacitive and force sensing includes receiving, from the touch panel, pressure signals from a plurality of piezoelectric sensors and capacitance signals from a plurality of capacitive touch sensors. The method also includes determining, based on the capacitance signals, a user interaction period during which a user interaction with the touch panel occurs. The method also includes generating processed pressure signals based on the received pressure signals. The method also includes measuring a force applied to each of the plurality of piezoelectric sensors by the user interaction during the user interaction period by conditionally integrating the corresponding processed pressure signals according to a state register corresponding to the user interaction. The state register takes one of two or more values. Each user interaction is initialised in a first state value. The user interaction transitions between state register values in dependence upon the current state register value, and one or more pressure signal properties.
1. A method comprising: receiving, from a touch panel, pressure signals from a plurality of piezoelectric sensors and capacitance signals from a plurality of capacitive touch sensors; determining, based on the capacitance signals, a user interaction period during which a user interaction with the touch panel occurs; generating processed pressure signals based on the received pressure signals; measuring a force applied to each of the plurality of piezoelectric sensors by the user interaction during the user interaction period by conditionally integrating the corresponding processed pressure signals according to a state register corresponding to the user interaction; wherein the state register takes one of two or more values, wherein each user interaction is initialised in a first state value, and wherein the user interaction transitions between state register values in dependence upon the current state register value and one or more pressure signal properties. 2. A method according to claim 1, wherein generating the processed pressure signals comprises, for each piezoelectric sensor, subtracting a DC offset value from the received pressure signal; wherein each DC offset value is initialised after a warm-up period has elapsed following switching on the touch panel, and the initial DC offset value is based on the received pressure signals in the absence of a user interaction. 3. A method according to claim 2, further comprising, for each piezoelectric sensor, in response to determining that there is no user interaction: maintaining a regression buffer of received pressure signal values; determining a gradient and variance of the values stored in the regression buffer; and in response to the gradient and variance being less than predetermined threshold values, updating the DC offset value based on the values stored in the regression buffer. 4. 
A method according to claim 1, further comprising, for each piezoelectric sensor: in response to detecting the start of a user interaction, setting a residual DC offset value to zero; during the user interaction period: maintaining a sample buffer of processed pressure signal values; determining a gradient and variance of the values stored in the sample buffer; determining a difference between the residual DC offset value and the average value of the values stored in the sample buffer; and in response to the gradient and variance being less than corresponding flat-period threshold values and the difference being greater than an offset-shift threshold, updating the residual DC offset value to the average value of the values stored in the sample buffer; subtracting the residual DC offset value from the processed pressure signal before integration. 5. A method according to claim 4, further comprising setting a movement flag to a value of true in response to determining, based on the capacitance signals, that the location of a user interaction is moving; in response to the movement flag does not have a value of true, setting the flat-period threshold values to first predetermined flat-period threshold values; in response to the movement flag has a value of true, setting the flat-period threshold values to second predetermined flat-period threshold values. 6. A method according to claim 1, further comprising, for each piezoelectric sensor, locating and determining an initial peak value of the processed pressure signal during the user interaction period. 7. 
A method according to claim 6, further comprising, in response to locating an initial peak value: setting a user interaction type register to correspond to a soft touch value in response to the elapsed time since the start of the user interaction period exceeds a predetermined threshold value; setting the user interaction type register to correspond to a hard touch value in response to the elapsed time since the start of the user interaction period does not exceed the predetermined threshold value. 8. A method according to claim 4, further comprising: for each piezoelectric sensor, locating and determining an initial peak value of the processed pressure signal during the user interaction period; in response to locating an initial peak value: setting a user interaction type register to correspond to a soft touch value in response to the elapsed time since the start of the user interaction period exceeds a predetermined threshold value; setting the user interaction type register to correspond to a hard touch value in response to the elapsed time since the start of the user interaction period does not exceed the predetermined threshold value; further comprising setting the user interaction type register to the hard touch value in response to: the residual DC offset value is updated; and the user interaction type register corresponds to the soft touch value. 9. A method according to claim 8, further comprising setting the user interaction type register to the soft touch value in response to: the processed pressure signal exceeds a predetermined fraction of the initial peak value; the gradient of the values stored in the sample buffer exceed a soft-transition threshold; and the user interaction type register corresponds to the hard touch value. 10. 
A method according to claim 1, further comprising setting the state register to a second state value in response to: the state register corresponds to the first state value; a time elapsed since the start of the user interaction exceeds a minimum duration; and the processed pressure signal has a sign corresponding to an increasing force; wherein if the state register corresponds to the second state value, all processed pressure signal values are integrated. 11. A method according to claim 10, further comprising setting the state register to a third state value in response to: the state register corresponds to the second state value; and the processed pressure signal has a sign corresponding to a decreasing force; wherein if the state register corresponds to the third state value, no processed pressure signal values are integrated. 12. A method according to claim 11, further comprising setting the state register to a third state value in response to: the state register corresponds to the second state value; and a user interaction type register corresponds to a soft touch value. 13. A method according to claim 11, further comprising: determining a signal gradient of the processed pressure signal during the user interaction; and setting the state register to a fourth state value in response to: the state register corresponds to the third state value; and the processed pressure signal has a signal gradient below a signal gradient threshold; wherein if the state register corresponds to the fourth state value, processed pressure signal values which exceed a noise threshold are integrated and processed pressure signal values which do not exceed the noise threshold are not integrated. 14. 
A method according to claim 1, wherein if the state register corresponds to the first state value, processed pressure signal values having a sign corresponding to an increasing force are integrated and processed pressure signal values corresponding to a decreasing force are not integrated. 15. A method according to claim 1, wherein the state register is set separately for each of the plurality of piezoelectric sensors. 16. A method according to claim 1, comprising: determining, based on the capacitance signals, two or more user interactions with the touch panel; determining a location of each user interaction based on the capacitance signals; assigning a piezoelectric sensor which is closest to the location of each user interaction as a decision making sensor; assigning each other piezoelectric sensor to correspond to the closest decision making sensor; in response to a piezoelectric sensor is a decision making sensor, updating a state register corresponding to the piezoelectric sensor independently; in response to a piezoelectric sensor is not a decision making sensor, updating a state register corresponding to the piezoelectric sensor to be equal to the state register of the corresponding decision making sensor. 17. A method according to claim 16, further comprising processing signals from decision making sensors before processing signals from the other piezoelectric sensors. 18. A computer program stored on a non-transitory computer readable medium and comprising instructions for causing a data processing apparatus to execute a method according to claim 1. 19. 
A controller configured for connection to a touch panel comprising a plurality of piezoelectric sensors and a plurality of capacitive touch sensors, the controller configured to: receive pressure signals from the plurality of piezoelectric sensors and capacitance signals from the plurality of capacitive touch sensors; determine, based on the capacitance signals, a user interaction period during which a user interaction with the touch panel occurs; generate processed pressure signals based on the received pressure signals; measure a force applied to each of the plurality of piezoelectric sensors by the user interaction during the user interaction period by conditionally integrating the corresponding processed pressure signals according to a state register corresponding to the user interaction; wherein the state register takes one of two or more values, wherein each user interaction is initialised in a first state value, and wherein the user interaction transitions between state register values in dependence upon the current state register value and one or more pressure signal properties. 20. Apparatus comprising: the controller according to claim 19; and a touch panel comprising a plurality of piezoelectric sensors and a plurality of capacitive touch sensors.
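The conditional-integration state machine of claims 1 and 10-14 can be sketched as below: a per-interaction state register decides which processed pressure samples contribute to the accumulated force. This is an illustrative sketch under assumed threshold values, not the claimed implementation; the constants and function names are hypothetical.

```python
# Illustrative sketch (not the claimed implementation) of the conditional
# integration in claims 1 and 10-14: a state register per interaction
# gates which processed pressure samples are summed into the force
# estimate. Threshold values are arbitrary assumptions.

MIN_DURATION = 3        # samples; stand-in for the claimed minimum duration
GRADIENT_THRESHOLD = 0.05
NOISE_THRESHOLD = 0.1

def integrate_force(samples):
    """Conditionally integrate processed pressure samples.

    samples: sequence of signed values (positive sign = increasing force).
    Returns the accumulated force estimate.
    """
    state = 1           # each interaction is initialised in the first state
    force = 0.0
    prev = 0.0
    for i, s in enumerate(samples):
        gradient = s - prev
        prev = s
        # Transitions depend on the current state and signal properties.
        if state == 1 and i >= MIN_DURATION and s > 0:
            state = 2                       # claim 10: established press
        elif state == 2 and s < 0:
            state = 3                       # claim 11: force now decreasing
        elif state == 3 and abs(gradient) < GRADIENT_THRESHOLD:
            state = 4                       # claim 13: signal has flattened
        # Conditional integration per state.
        if state == 1 and s > 0:            # claim 14: increasing force only
            force += s
        elif state == 2:                    # claim 10: integrate all values
            force += s
        elif state == 4 and abs(s) > NOISE_THRESHOLD:
            force += s                      # claim 13: above-noise samples
        # state 3 (claim 11): nothing is integrated
    return force
```

In the multi-touch case of claim 16, one such state register would be updated independently for each "decision making" sensor, and each remaining sensor would copy the register of its nearest decision-making sensor.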
A method of processing signals from a touch panel for combined capacitive and force sensing includes receiving, from the touch panel, pressure signals from a plurality of piezoelectric sensors and capacitance signals from a plurality of capacitive touch sensors. The method also includes determining, based on the capacitance signals, a user interaction period during which a user interaction with the touch panel occurs. The method also includes generating processed pressure signals based on the received pressure signals. The method also includes measuring a force applied to each of the plurality of piezoelectric sensors by the user interaction during the user interaction period by conditionally integrating the corresponding processed pressure signals according to a state register corresponding to the user interaction. The state register takes one of two or more values. Each user interaction is initialised in a first state value. The user interaction transitions between state register values in dependence upon the current state register value, and one or more pressure signal properties.1. 
A method comprising: receiving, from a touch panel, pressure signals from a plurality of piezoelectric sensors and capacitance signals from a plurality of capacitive touch sensors; determining, based on the capacitance signals, a user interaction period during which a user interaction with the touch panel occurs; generating processed pressure signals based on the received pressure signals; measuring a force applied to each of the plurality of piezoelectric sensors by the user interaction during the user interaction period by conditionally integrating the corresponding processed pressure signals according to a state register corresponding to the user interaction; wherein the state register takes one of two or more values, wherein each user interaction is initialised in a first state value, and wherein the user interaction transitions between state register values in dependence upon the current state register value and one or more pressure signal properties. 2. A method according to claim 1, wherein generating the processed pressure signals comprises, for each piezoelectric sensor, subtracting a DC offset value from the received pressure signal; wherein each DC offset value is initialised after a warm-up period has elapsed following switching on the touch panel, and the initial DC offset value is based on the received pressure signals in the absence of a user interaction. 3. A method according to claim 2, further comprising, for each piezoelectric sensor, in response to determining that there is no user interaction: maintaining a regression buffer of received pressure signal values; determining a gradient and variance of the values stored in the regression buffer; and in response to the gradient and variance being less than predetermined threshold values, updating the DC offset value based on the values stored in the regression buffer. 4. 
A method according to claim 1, further comprising, for each piezoelectric sensor: in response to detecting the start of a user interaction, setting a residual DC offset value to zero; during the user interaction period: maintaining a sample buffer of processed pressure signal values; determining a gradient and variance of the values stored in the sample buffer; determining a difference between the residual DC offset value and the average value of the values stored in the sample buffer; and in response to the gradient and variance being less than corresponding flat-period threshold values and the difference being greater than an offset-shift threshold, updating the residual DC offset value to the average value of the values stored in the sample buffer; subtracting the residual DC offset value from the processed pressure signal before integration. 5. A method according to claim 4, further comprising setting a movement flag to a value of true in response to determining, based on the capacitance signals, that the location of a user interaction is moving; in response to the movement flag does not have a value of true, setting the flat-period threshold values to first predetermined flat-period threshold values; in response to the movement flag has a value of true, setting the flat-period threshold values to second predetermined flat-period threshold values. 6. A method according to claim 1, further comprising, for each piezoelectric sensor, locating and determining an initial peak value of the processed pressure signal during the user interaction period. 7. 
A method according to claim 6, further comprising, in response to locating an initial peak value: setting a user interaction type register to correspond to a soft touch value in response to the elapsed time since the start of the user interaction period exceeds a predetermined threshold value; setting the user interaction type register to correspond to a hard touch value in response to the elapsed time since the start of the user interaction period does not exceed the predetermined threshold value. 8. A method according to claim 4, further comprising: for each piezoelectric sensor, locating and determining an initial peak value of the processed pressure signal during the user interaction period; in response to locating an initial peak value: setting a user interaction type register to correspond to a soft touch value in response to the elapsed time since the start of the user interaction period exceeds a predetermined threshold value; setting the user interaction type register to correspond to a hard touch value in response to the elapsed time since the start of the user interaction period does not exceed the predetermined threshold value; further comprising setting the user interaction type register to the hard touch value in response to: the residual DC offset value is updated; and the user interaction type register corresponds to the soft touch value. 9. A method according to claim 8, further comprising setting the user interaction type register to the soft touch value in response to: the processed pressure signal exceeds a predetermined fraction of the initial peak value; the gradient of the values stored in the sample buffer exceed a soft-transition threshold; and the user interaction type register corresponds to the hard touch value. 10. 
A method according to claim 1, further comprising setting the state register to a second state value in response to: the state register corresponds to the first state value; a time elapsed since the start of the user interaction exceeds a minimum duration; and the processed pressure signal has a sign corresponding to an increasing force; wherein if the state register corresponds to the second state value, all processed pressure signal values are integrated. 11. A method according to claim 10, further comprising setting the state register to a third state value in response to: the state register corresponds to the second state value; and the processed pressure signal has a sign corresponding to a decreasing force; wherein if the state register corresponds to the third state value, no processed pressure signal values are integrated. 12. A method according to claim 11, further comprising setting the state register to a third state value in response to: the state register corresponds to the second state value; and a user interaction type register corresponds to a soft touch value. 13. A method according to claim 11, further comprising: determining a signal gradient of the processed pressure signal during the user interaction; and setting the state register to a fourth state value in response to: the state register corresponds to the third state value; and the processed pressure signal has a signal gradient below a signal gradient threshold; wherein if the state register corresponds to the fourth state value, processed pressure signal values which exceed a noise threshold are integrated and processed pressure signal values which do not exceed the noise threshold are not integrated. 14. 
A method according to claim 1, wherein if the state register corresponds to the first state value, processed pressure signal values having a sign corresponding to an increasing force are integrated and processed pressure signal values corresponding to a decreasing force are not integrated. 15. A method according to claim 1, wherein the state register is set separately for each of the plurality of piezoelectric sensors. 16. A method according to claim 1, comprising: determining, based on the capacitance signals, two or more user interactions with the touch panel; determining a location of each user interaction based on the capacitance signals; assigning a piezoelectric sensor which is closest to the location of each user interaction as a decision making sensor; assigning each other piezoelectric sensor to correspond to the closest decision making sensor; in response to a piezoelectric sensor being a decision making sensor, updating a state register corresponding to the piezoelectric sensor independently; in response to a piezoelectric sensor not being a decision making sensor, updating a state register corresponding to the piezoelectric sensor to be equal to the state register of the corresponding decision making sensor. 17. A method according to claim 16, further comprising processing signals from decision making sensors before processing signals from the other piezoelectric sensors. 18. A computer program stored on a non-transitory computer readable medium and comprising instructions for causing a data processing apparatus to execute a method according to claim 1. 19. 
A controller configured for connection to a touch panel comprising a plurality of piezoelectric sensors and a plurality of capacitive touch sensors, the controller configured to: receive pressure signals from the plurality of piezoelectric sensors and capacitance signals from the plurality of capacitive touch sensors; determine, based on the capacitance signals, a user interaction period during which a user interaction with the touch panel occurs; generate processed pressure signals based on the received pressure signals; measure a force applied to each of the plurality of piezoelectric sensors by the user interaction during the user interaction period by conditionally integrating the corresponding processed pressure signals according to a state register corresponding to the user interaction; wherein the state register takes one of two or more values, wherein each user interaction is initialised in a first state value, and wherein the user interaction transitions between state register values in dependence upon the current state register value and one or more pressure signal properties. 20. Apparatus comprising: the controller according to claim 19; and a touch panel comprising a plurality of piezoelectric sensors and a plurality of capacitive touch sensors.
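The conditional-integration scheme of claims 10 through 14 amounts to a small per-sensor state machine: integrate only rising samples in the first state, all samples in the second, none in the third, and only above-noise samples in the fourth. The sketch below is one possible reading of it; the state names, threshold values, and sample convention (positive means increasing force) are assumptions made for illustration, not the patent's implementation.

```python
# Hypothetical sketch of the claimed conditional-integration state machine
# for one piezoelectric sensor. Thresholds and sample format are assumptions.
FIRST, SECOND, THIRD, FOURTH = range(4)  # state register values (claims 1, 10, 11, 13)

class PiezoForceIntegrator:
    def __init__(self, min_duration=3, gradient_threshold=0.05, noise_threshold=0.1):
        self.state = FIRST        # each interaction is initialised in the first state
        self.force = 0.0          # running integral = measured force
        self.samples = 0
        self.prev = 0.0
        self.min_duration = min_duration
        self.gradient_threshold = gradient_threshold
        self.noise_threshold = noise_threshold

    def step(self, value):
        """Process one processed-pressure sample (positive = increasing force)."""
        self.samples += 1
        gradient = value - self.prev  # per-sample difference stands in for signal gradient
        self.prev = value

        # State transitions (claims 10, 11, 13).
        if self.state == FIRST and self.samples > self.min_duration and value > 0:
            self.state = SECOND
        elif self.state == SECOND and value < 0:
            self.state = THIRD
        elif self.state == THIRD and abs(gradient) < self.gradient_threshold:
            self.state = FOURTH

        # Conditional integration (claims 10, 11, 13, 14).
        if self.state == FIRST and value > 0:
            self.force += value   # only increasing-force samples
        elif self.state == SECOND:
            self.force += value   # all samples
        elif self.state == FOURTH and abs(value) > self.noise_threshold:
            self.force += value   # only samples above the noise floor
        return self.state
```

Running an interaction of five rising samples followed by a falling one leaves the integrator in the third state with no further accumulation, matching the claim 11 behaviour.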
2,600
10,953
10,953
15,572,180
2,661
An ultrasound system for performing cancer grade mapping includes an ultrasound imaging device ( 10 ) that acquires ultrasound imaging data. An electronic data processing device ( 30 ) is programmed to generate an ultrasound image ( 34 ) from the ultrasound imaging data, and to generate a cancer grade map ( 42 ) by (i) extracting sets of local features from the ultrasound imaging data that represent map pixels of the cancer grade map and (ii) classifying the sets of local features using a cancer grading classifier ( 46 ) to generate cancer grades for the map pixels of the cancer grade map. A display component ( 20 ) displays the cancer grade map, for example overlaid on the ultrasound image as a color-coded cancer grade map overlay. The cancer grading classifier is learned from a training data set ( 64 ) comprising sets of local features extracted from ultrasound imaging data at biopsy locations and labeled with histopathology cancer grades.
1. An ultrasound system comprising: an ultrasound imaging device configured to acquire ultrasound imaging data; an electronic data processing device programmed to generate a cancer grade map by (i) extracting sets of local features from the ultrasound imaging data that represent map pixels of the cancer grade map and (ii) classifying the sets of local features using a cancer grading classifier to generate cancer grades for the map pixels of the cancer grade map; and a display component configured to display the cancer grade map. 2. The ultrasound system of claim 1 wherein the electronic data processing device is programmed to extract the sets of local features representing map pixels from RF time series ultrasound imaging data. 3. The ultrasound system of claim 1 wherein: the ultrasound imaging device is configured to acquire ultrasound imaging data including elastography imaging data in which ultrasonic pulses at a lower frequency are applied by the ultrasound device to induce tissue vibration; and the electronic data processing device is programmed to extract the sets of local features representing map pixels from elastography imaging data. 4. The ultrasound system of claim 1 wherein: the electronic data processing device is further programmed to generate an ultrasound image from the ultrasound imaging data and to generate a cancer grade map overlay from the cancer grade map that is aligned with the ultrasound image; and the display component is configured to display a fused image that combines the ultrasound image and the cancer grade map; and wherein the electronic data processing device is programmed to generate the ultrasound image as a brightness (b-mode) image from ultrasound imaging data comprising RF time series ultrasound imaging data. 5. (canceled) 6. 
The ultrasound system of claim 4 wherein the electronic data processing device is programmed to generate the fused image as the ultrasound image overlaid with a color-coded cancer grade map overlay in which cancer grades of the cancer grade map are represented by color coding; and wherein the electronic data processing device is programmed to extract the sets of local features representing map pixels of the cancer grade map including one or more of (1) texture features, (2) wavelet-based features, and (3) spectral features. 7. The ultrasound system of claim 4 wherein the ultrasound system is configured to continuously acquire ultrasound imaging data and to update the ultrasound image, the cancer grade map, and the fused image in real-time using the continuously acquired ultrasound imaging data. 8. The ultrasound system of claim 1 wherein each map pixel of the cancer grade map consists of a contiguous n×n array of pixels of an ultrasound image generated from the acquired ultrasound imaging data, wherein n≥1. 9. (canceled) 10. (canceled) 11. The ultrasound system of claim 1 further comprising: a rectal ultrasound probe connected with the ultrasound imaging device, wherein the ultrasound imaging device is configured to acquire ultrasound imaging data of a prostate organ using the rectal ultrasound probe, the electronic data processing device is programmed to generate a prostate cancer grade map by (i) extracting sets of local features from the ultrasound imaging data that represent map pixels of the prostate cancer grade map and (ii) classifying the sets of local features using a prostate cancer grading classifier to generate prostate cancer grades for the map pixels of the prostate cancer grade map, and the display component is configured to display the prostate cancer grade map. 12. 
The ultrasound system of claim 11 further comprising: a rectal biopsy tool connected with the rectal ultrasound probe and configured to collect a prostate tissue biopsy sample; wherein the electronic data processing device is further programmed to generate a prostate ultrasound image from the ultrasound imaging data and the display component is further configured to display a fused image combining the prostate ultrasound image and the prostate cancer grade map. 13. The ultrasound system of claim 1 further comprising: an electronic data processing device programmed to generate the cancer grading classifier by machine learning on a labeled training data set comprising training sets of local features extracted from ultrasound imaging data at biopsy locations and labeled with histopathology cancer grades. 14. An ultrasound method comprising: acquiring ultrasound imaging data; generating an ultrasound image from the ultrasound imaging data; generating a cancer grade map from the ultrasound imaging data by applying a cancer grading classifier to sets of local features extracted from the ultrasound imaging data; and displaying at least one of (i) the cancer grade map and (ii) a fused image combining the ultrasound image and the cancer grade map. 15. The ultrasound method of claim 14 wherein: the ultrasound imaging data includes RF time series ultrasound imaging data; the ultrasound image comprises a brightness mode (b-mode) image generated from the RF time series ultrasound imaging data; and the cancer grade map is generated from the RF time series ultrasound imaging data. 16. The ultrasound method of claim 14 wherein the displaying comprises displaying a fused image comprising the ultrasound image overlaid with a color-coded overlay representation of the cancer grade map. 17. 
The ultrasound method of claim 14 further comprising iteratively repeating the acquiring, the generating of the ultrasound image, the generating of the cancer grade map, and the displaying to update the displayed cancer grade map or fused image in real time. 18. (canceled) 19. (canceled) 20. The ultrasound method of claim 14 further comprising: training the cancer grading classifier on a labeled training data set comprising training sets of local features extracted from ultrasound imaging data at biopsy locations and labeled with histopathology cancer grades. 21. (canceled)
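The mapping pipeline of claim 1, extracting a set of local features per n×n map pixel and classifying each set into a cancer grade, can be sketched as follows. The specific feature set and the stand-in threshold classifier are invented for illustration; the patent's classifier is learned from histopathology-labeled biopsy data, which is not reproduced here.

```python
import numpy as np

def local_features(block):
    """Toy per-block feature set: mean intensity, variance, spectral energy."""
    spectrum = np.abs(np.fft.fft2(block))
    return np.array([block.mean(), block.var(), spectrum.sum()])

def cancer_grade_map(frame, n, classify):
    """Split an ultrasound frame into n x n map pixels and grade each one."""
    h, w = frame.shape
    grid = np.zeros((h // n, w // n), dtype=int)
    for i in range(h // n):
        for j in range(w // n):
            block = frame[i*n:(i+1)*n, j*n:(j+1)*n]
            grid[i, j] = classify(local_features(block))
    return grid

def toy_classifier(features):
    """Stand-in for the learned classifier: threshold mean intensity into grades 0-2."""
    mean = features[0]
    return 0 if mean < 0.3 else (1 if mean < 0.6 else 2)
```

The resulting integer grid would then be color-coded and overlaid on the b-mode image, as claims 4 and 6 describe.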
An ultrasound system for performing cancer grade mapping includes an ultrasound imaging device ( 10 ) that acquires ultrasound imaging data. An electronic data processing device ( 30 ) is programmed to generate an ultrasound image ( 34 ) from the ultrasound imaging data, and to generate a cancer grade map ( 42 ) by (i) extracting sets of local features from the ultrasound imaging data that represent map pixels of the cancer grade map and (ii) classifying the sets of local features using a cancer grading classifier ( 46 ) to generate cancer grades for the map pixels of the cancer grade map. A display component ( 20 ) displays the cancer grade map, for example overlaid on the ultrasound image as a color-coded cancer grade map overlay. The cancer grading classifier is learned from a training data set ( 64 ) comprising sets of local features extracted from ultrasound imaging data at biopsy locations and labeled with histopathology cancer grades.1. An ultrasound system comprising: an ultrasound imaging device configured to acquire ultrasound imaging data; an electronic data processing device programmed to generate a cancer grade map by (i) extracting sets of local features from the ultrasound imaging data that represent map pixels of the cancer grade map and (ii) classifying the sets of local features using a cancer grading classifier to generate cancer grades for the map pixels of the cancer grade map; and a display component configured to display the cancer grade map. 2. The ultrasound system of claim 1 wherein the electronic data processing device is programmed to extract the sets of local features representing map pixels from RF time series ultrasound imaging data. 3. 
The ultrasound system of claim 1 wherein: the ultrasound imaging device is configured to acquire ultrasound imaging data including elastography imaging data in which ultrasonic pulses at a lower frequency are applied by the ultrasound device to induce tissue vibration; and the electronic data processing device is programmed to extract the sets of local features representing map pixels from elastography imaging data. 4. The ultrasound system of claim 1 wherein: the electronic data processing device is further programmed to generate an ultrasound image from the ultrasound imaging data and to generate a cancer grade map overlay from the cancer grade map that is aligned with the ultrasound image; and the display component is configured to display a fused image that combines the ultrasound image and the cancer grade map; and wherein the electronic data processing device is programmed to generate the ultrasound image as a brightness (b-mode) image from ultrasound imaging data comprising RF time series ultrasound imaging data. 5. (canceled) 6. The ultrasound system of claim 4 wherein the electronic data processing device is programmed to generate the fused image as the ultrasound image overlaid with a color-coded cancer grade map overlay in which cancer grades of the cancer grade map are represented by color coding; and wherein the electronic data processing device is programmed to extract the sets of local features representing map pixels of the cancer grade map including one or more of (1) texture features, (2) wavelet-based features, and (3) spectral features. 7. The ultrasound system of claim 4 wherein the ultrasound system is configured to continuously acquire ultrasound imaging data and to update the ultrasound image, the cancer grade map, and the fused image in real-time using the continuously acquired ultrasound imaging data. 8. 
The ultrasound system of claim 1 wherein each map pixel of the cancer grade map consists of a contiguous n×n array of pixels of an ultrasound image generated from the acquired ultrasound imaging data, wherein n≥1. 9. (canceled) 10. (canceled) 11. The ultrasound system of claim 1 further comprising: a rectal ultrasound probe connected with the ultrasound imaging device, wherein the ultrasound imaging device is configured to acquire ultrasound imaging data of a prostate organ using the rectal ultrasound probe, the electronic data processing device is programmed to generate a prostate cancer grade map by (i) extracting sets of local features from the ultrasound imaging data that represent map pixels of the prostate cancer grade map and (ii) classifying the sets of local features using a prostate cancer grading classifier to generate prostate cancer grades for the map pixels of the prostate cancer grade map, and the display component is configured to display the prostate cancer grade map. 12. The ultrasound system of claim 11 further comprising: a rectal biopsy tool connected with the rectal ultrasound probe and configured to collect a prostate tissue biopsy sample; wherein the electronic data processing device is further programmed to generate a prostate ultrasound image from the ultrasound imaging data and the display component is further configured to display a fused image combining the prostate ultrasound image and the prostate cancer grade map. 13. The ultrasound system of claim 1 further comprising: an electronic data processing device programmed to generate the cancer grading classifier by machine learning on a labeled training data set comprising training sets of local features extracted from ultrasound imaging data at biopsy locations and labeled with histopathology cancer grades. 14. 
An ultrasound method comprising: acquiring ultrasound imaging data; generating an ultrasound image from the ultrasound imaging data; generating a cancer grade map from the ultrasound imaging data by applying a cancer grading classifier to sets of local features extracted from the ultrasound imaging data; and displaying at least one of (i) the cancer grade map and (ii) a fused image combining the ultrasound image and the cancer grade map. 15. The ultrasound method of claim 14 wherein: the ultrasound imaging data includes RF time series ultrasound imaging data; the ultrasound image comprises a brightness mode (b-mode) image generated from the RF time series ultrasound imaging data; and the cancer grade map is generated from the RF time series ultrasound imaging data. 16. The ultrasound method of claim 14 wherein the displaying comprises displaying a fused image comprising the ultrasound image overlaid with a color-coded overlay representation of the cancer grade map. 17. The ultrasound method of claim 14 further comprising iteratively repeating the acquiring, the generating of the ultrasound image, the generating of the cancer grade map, and the displaying to update the displayed cancer grade map or fused image in real time. 18. (canceled) 19. (canceled) 20. The ultrasound method of claim 14 further comprising: training the cancer grading classifier on a labeled training data set comprising training sets of local features extracted from ultrasound imaging data at biopsy locations and labeled with histopathology cancer grades. 21. (canceled)
2,600
10,954
10,954
16,333,060
2,699
An access control system includes an energy efficient access control that operates as a Wi-Fi access point that broadcasts a Service Set Identifier (SSID) as an indicator of access control level to communicate with a mobile device with WPA2 PSK.
1. An access control system, comprising: an access control that operates as a Wi-Fi access point that broadcasts a Service Set Identifier (SSID) to communicate with a mobile device with WPA2. 2. The system as recited in claim 1, wherein the Service Set Identifier (SSID) represents a different user access level, to reduce energy needed for sophisticated access control. 3. The system as recited in claim 1, wherein the access control wakes up only periodically to save energy while achieving access control wakeup when mobile devices approach the access control unit, which requires only WiFi without using NFC. 4. The system as recited in claim 1, wherein the access control does not support DHCP and routing to save energy. 5. The system as recited in claim 1, wherein the mobile device includes credentials authorized remotely. 6. The system as recited in claim 5, wherein the credentials include the Service Set Identifier (SSID) and a channel number. 7. The system as recited in claim 1, wherein the access control is a door lock. 8. A method of managing an access control system, the method comprising: operating an access control as a Wi-Fi access point. 9. The method as recited in claim 8, further comprising periodically broadcasting a Service Set Identifier (SSID) to communicate with a mobile device. 10. The method as recited in claim 9, further comprising communicating via WPA2 with PSK only to save energy.
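The SSID-per-access-level idea of claims 1, 2, 5, and 6 can be illustrated in a few lines: the lock broadcasts one SSID per user access level, and a mobile device holding remotely authorized credentials (an SSID plus a channel number, per claim 6) is granted access only when its credential matches the broadcast SSID. The SSID names, level labels, and credential layout below are placeholders, and WPA2-PSK association itself is not modeled.

```python
# Hypothetical mapping of broadcast SSIDs to access levels for one door lock.
ACCESS_LEVEL_SSIDS = {
    "LOCK-1234-GUEST": "guest",
    "LOCK-1234-STAFF": "staff",
    "LOCK-1234-ADMIN": "admin",
}

def grant_access(broadcast_ssid, credential):
    """Return the access level if the device credential matches the SSID, else None."""
    level = ACCESS_LEVEL_SSIDS.get(broadcast_ssid)
    if level is None:
        return None                           # unknown SSID: not this lock
    if credential.get("ssid") != broadcast_ssid:
        return None                           # credential not authorized for this level
    return level
```

Because the access level is carried in the beacon itself, the lock can stay asleep between periodic broadcasts (claims 3 and 9) and never needs DHCP, routing, or NFC.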
An access control system includes an energy efficient access control that operates as a Wi-Fi access point that broadcasts a Service Set Identifier (SSID) as an indicator of access control level to communicate with a mobile device with WPA2 PSK.1. An access control system, comprising: an access control that operates as a Wi-Fi access point that broadcasts a Service Set Identifier (SSID) to communicate with a mobile device with WPA2. 2. The system as recited in claim 1, wherein the Service Set Identifier (SSID) represents a different user access level, to reduce energy needed for sophisticated access control. 3. The system as recited in claim 1, wherein the access control wakes up only periodically to save energy while achieving access control wakeup when mobile devices approach the access control unit, which requires only WiFi without using NFC. 4. The system as recited in claim 1, wherein the access control does not support DHCP and routing to save energy. 5. The system as recited in claim 1, wherein the mobile device includes credentials authorized remotely. 6. The system as recited in claim 5, wherein the credentials include the Service Set Identifier (SSID) and a channel number. 7. The system as recited in claim 1, wherein the access control is a door lock. 8. A method of managing an access control system, the method comprising: operating an access control as a Wi-Fi access point. 9. The method as recited in claim 8, further comprising periodically broadcasting a Service Set Identifier (SSID) to communicate with a mobile device. 10. The method as recited in claim 9, further comprising communicating via WPA2 with PSK only to save energy.
2,600
10,955
10,955
15,621,387
2,659
A process at an electronic digital assistant (EDA) computing device uses natural language detection of a user status change to make corresponding modification of a user interface associated with the user. The EDA monitors a private or talkgroup voice call associated with a user and detects first user speech from the user. The EDA identifies a current status of the user as either on-assignment or not-on-assignment and determines that the first user speech is indicative of a first or second user status change. When it is the first user status change, the EDA causes a mobile or portable computing device associated with the user to automatically swap a foreground not-on-assignment related application with a not-previously-in-foreground on-assignment related application, and vice versa when it is the second user status change.
1. A method at an electronic digital assistant computing device for natural language detection of a user status change and corresponding modification of a user interface, the method comprising: monitoring, at an electronic computing device, one of a private voice call and a talkgroup voice call associated with an in-field user; detecting, by the electronic computing device over the one of the private voice call and the talkgroup voice call associated with the in-field user, first user speech from the in-field user; identifying, by the electronic computing device, a current status of the in-field user of one of an on-assignment related status and a not-on-assignment related status; determining, by the electronic computing device, that the first user speech is indicative of one of (i) a first status change of the in-field user in which the current status of the in-field user is the not-on-assignment related status and the first user speech is indicative of a change to the on-assignment related status and (ii) a second status change of the in-field user in which the current status of the in-field user is the on-assignment related status and the first user speech is indicative of a change to the not-on-assignment related status; and when the determining, by the electronic computing device, is that the first user speech is indicative of the first status change, responsively: causing, by the electronic computing device, one of a mobile and a portable computing device associated with the in-field user to automatically and responsively swap a foreground not-on-assignment related application with a not-previously-in-foreground on-assignment related application; and when the determining, by the electronic computing device, is that the first user speech is indicative of the second status change, responsively: causing, by the electronic computing device, one of the mobile and the portable computing device associated with the in-field user to automatically and responsively swap 
a foreground on-assignment related application with a not-previously-in-foreground not-on-assignment related application. 2. The method of claim 1, wherein the electronic computing device is an infrastructure computing device, and: when the determining, by the electronic computing device, is that the first user speech is indicative of the first status change, causing the one of the mobile and the portable computing device associated with the in-field user to automatically and responsively swap a foreground not-on-assignment related application with a not-previously-in-foreground on-assignment related application comprises identifying the one of the mobile and the portable computing device associated with the in-field user via an in-field-user to mobile or portable computing device mapping and transmitting, to the identified one of the mobile and the portable computing device associated with the in-field user, an instruction to swap the foreground not-on-assignment related application with the not-previously-in-foreground on-assignment related application; and when the determining, by the electronic computing device, is that the first user speech is indicative of the second status change, causing the one of the mobile and the portable computing device associated with the in-field user to automatically and responsively swap a foreground on-assignment related application with a not-previously-in-foreground not-on-assignment related application comprises identifying the one of the mobile and the portable computing device associated with the in-field user via an in-field-user to mobile or portable computing device mapping and transmitting, to the identified one of the mobile and the portable computing device associated with the in-field user, an instruction to swap the foreground on-assignment related application with the not-previously-in-foreground not-on-assignment related application. 3. 
The method of claim 1, wherein the electronic computing device is the one of the mobile and the portable computing device associated with the in-field user, and: when the determining, by the electronic computing device, is that the first user speech is indicative of the first status change, the electronic computing device responsively swapping a foreground not-on-assignment related application with a not-previously-in-foreground on-assignment related application; and when the determining, by the electronic computing device, is that the first user speech is indicative of the second status change, the electronic computing device responsively swapping a foreground on-assignment related application with a not-previously-in-foreground not-on-assignment related application. 4. The method of claim 1, wherein: the on-assignment related status is an in-incident related status relative to a public safety incident that the in-field user is responding to, the not-on-assignment related status is a not-in-incident related status in which there is no current public safety incident to which the in-field user is responding to, the on-assignment related application is an in-incident related application, and the not-on-assignment related application is a not-in-incident related application; and the determining, by the electronic computing device, is that the first user speech is indicative of the first status change; and the method further comprising swapping the foreground not-in-incident related application comprising one of a patrol route mapping application, a departmental contact list application, an incident monitor list application listing all active and/or recent incidents associated with a department to which the in-field user belongs, a not-in-incident task list application identifying non-incident related tasks for the in-field user to complete, and a non-in-incident related talkgroup status indicator application with a not-previously-in-foreground in-incident-related 
application comprising one of an in-incident location mapping application indicating locations of other users assigned to a same incident, an in-incident contact list application indicating callable other users assigned to a same incident, an in-incident task list application identifying incident related tasks for the in-field user or other users assigned to the incident to complete, and an in-incident related talkgroup status indicator application. 5. The method of claim 4, wherein the foreground not-in-incident related application is swapped with a different type of not-previously-in-foreground in-incident-related application. 6. The method of claim 5, wherein the foreground not-in-incident related application is the incident monitor list application and the not-previously-in-foreground in-incident-related application is one of the in-incident location mapping application, the in-incident contact list application, the in-incident task list application, and the in-incident related talkgroup status indicator application. 7. 
A process at an electronic digital assistant (EDA) computing device uses natural language detection of a user status change to make a corresponding modification of a user interface associated with the user. The EDA monitors a private or talkgroup voice call associated with a user and detects first user speech from the user. The EDA identifies a current status of the user as on-assignment or not-on-assignment and determines that the first user speech is indicative of a first or second user status change. When it is the first user status change, the EDA causes a mobile or portable computing device associated with the user to automatically swap a foreground not-on-assignment related application with a not-previously-in-foreground on-assignment related application, and vice versa when it is the second user status change.

1. A method at an electronic digital assistant computing device for natural language detection of a user status change and corresponding modification of a user interface, the method comprising: monitoring, at an electronic computing device, one of a private voice call and a talkgroup voice call associated with an in-field user; detecting, by the electronic computing device over the one of the private voice call and the talkgroup voice call associated with the in-field user, first user speech from the in-field user; identifying, by the electronic computing device, a current status of the in-field user of one of an on-assignment related status and a not-on-assignment related status; determining, by the electronic computing device, that the first user speech is indicative of one of (i) a first status change of the in-field user in which the current status of the in-field user is the not-on-assignment related status and the first user speech is indicative of a change to the on-assignment related status and (ii) a second status change of the in-field user in which the current status of the in-field user is the on-assignment related status and the first user 
speech is indicative of a change to the not-on-assignment related status; and when the determining, by the electronic computing device, is that the first user speech is indicative of the first status change, responsively: causing, by the electronic computing device, one of a mobile and a portable computing device associated with the in-field user to automatically and responsively swap a foreground not-on-assignment related application with a not-previously-in-foreground on-assignment related application; and when the determining, by the electronic computing device, is that the first user speech is indicative of the second status change, responsively: causing, by the electronic computing device, one of the mobile and the portable computing device associated with the in-field user to automatically and responsively swap a foreground on-assignment related application with a not-previously-in-foreground not-on-assignment related application. 2. The method of claim 1, wherein the electronic computing device is an infrastructure computing device, and: when the determining, by the electronic computing device, is that the first user speech is indicative of the first status change, causing the one of the mobile and the portable computing device associated with the in-field user to automatically and responsively swap a foreground not-on-assignment related application with a not-previously-in-foreground on-assignment related application comprises identifying the one of the mobile and the portable computing device associated with the in-field user via an in-field-user to mobile or portable computing device mapping and transmitting, to the identified one of the mobile and the portable computing device associated with the in-field user, an instruction to swap the foreground not-on-assignment related application with the not-previously-in-foreground on-assignment related application; and when the determining, by the electronic computing device, is that the first user speech is 
indicative of the second status change, causing the one of the mobile and the portable computing device associated with the in-field user to automatically and responsively swap a foreground on-assignment related application with a not-previously-in-foreground not-on-assignment related application comprises identifying the one of the mobile and the portable computing device associated with the in-field user via an in-field-user to mobile or portable computing device mapping and transmitting, to the identified one of the mobile and the portable computing device associated with the in-field user, an instruction to swap the foreground on-assignment related application with the not-previously-in-foreground not-on-assignment related application. 3. The method of claim 1, wherein the electronic computing device is the one of the mobile and the portable computing device associated with the in-field user, and: when the determining, by the electronic computing device, is that the first user speech is indicative of the first status change, the electronic computing device responsively swapping a foreground not-on-assignment related application with a not-previously-in-foreground on-assignment related application; and when the determining, by the electronic computing device, is that the first user speech is indicative of the second status change, the electronic computing device responsively swapping a foreground on-assignment related application with a not-previously-in-foreground not-on-assignment related application. 4. 
The method of claim 1, wherein: the on-assignment related status is an in-incident related status relative to a public safety incident that the in-field user is responding to, the not-on-assignment related status is a not-in-incident related status in which there is no current public safety incident to which the in-field user is responding to, the on-assignment related application is an in-incident related application, and the not-on-assignment related application is a not-in-incident related application; and the determining, by the electronic computing device, is that the first user speech is indicative of the first status change; and the method further comprising swapping the foreground not-in-incident related application comprising one of a patrol route mapping application, a departmental contact list application, an incident monitor list application listing all active and/or recent incidents associated with a department to which the in-field user belongs, a not-in-incident task list application identifying non-incident related tasks for the in-field user to complete, and a non-in-incident related talkgroup status indicator application with a not-previously-in-foreground in-incident-related application comprising one of an in-incident location mapping application indicating locations of other users assigned to a same incident, an in-incident contact list application indicating callable other users assigned to a same incident, an in-incident task list application identifying incident related tasks for the in-field user or other users assigned to the incident to complete, and an in-incident related talkgroup status indicator application. 5. The method of claim 4, wherein the foreground not-in-incident related application is swapped with a different type of not-previously-in-foreground in-incident-related application. 6. 
The method of claim 5, wherein the foreground not-in-incident related application is the incident monitor list application and the not-previously-in-foreground in-incident-related application is one of the in-incident location mapping application, the in-incident contact list application, the in-incident task list application, and the in-incident related talkgroup status indicator application. 7. The method of claim 1, wherein the determining, by the electronic computing device, is that the first user speech is indicative of the second status change; the method further comprising swapping an in-foreground on-assignment related application comprising one of an on-assignment location mapping application indicating locations of other users assigned to a same assignment, an on-assignment contact list application indicating callable other users assigned to a same assignment, an on-assignment task list application identifying assignment related tasks for the in-field user or other users assigned to the assignment to complete, and an on-assignment related talkgroup status indicator application with a not-previously-in-foreground not-on-assignment related application comprising one of a patrol route mapping application, a departmental contact list application, an assignment monitor list application listing all active and/or recent assignments associated with a department to which the in-field user belongs, a not-on-assignment task list application identifying non-assignment related tasks for the in-field user to complete, and a non-assignment related talkgroup status indicator application. 8. The method of claim 7, wherein the in-foreground on-assignment related application is swapped with a different type of not-previously-in-foreground not-on-assignment related application. 9. 
The method of claim 7, wherein the in-foreground on-assignment related application is one of the on-assignment location mapping application, the on-assignment contact list application, the on-assignment task list application, and the on-assignment related talkgroup status indicator application and the not-previously-in-foreground not-on-assignment related application is the assignment monitor list application. 10. The method of claim 1, wherein the one of the mobile and the portable computing device associated with the in-field user is the portable computing device worn on a body of the in-field user. 11. The method of claim 1, wherein the one of the mobile and the portable computing device associated with the in-field user is the mobile computing device coupled to a vehicle associated with the in-field user. 12. The method of claim 1, wherein: when the determining, by the electronic computing device, is that the first user speech is indicative of the first status change: causing, by the electronic computing device, both of the mobile and the portable computing device associated with the in-field user to automatically and responsively swap a foreground not-on-assignment related application with a not-previously-in-foreground on-assignment related application; and when the determining, by the electronic computing device, is that the first user speech is indicative of the second status change: causing, by the electronic computing device, both of the mobile and the portable computing device associated with the in-field user to automatically and responsively swap a foreground on-assignment related application with a not-previously-in-foreground not-on-assignment related application. 13. 
The method of claim 12, wherein: when the determining, by the electronic computing device, is that the first user speech is indicative of the first status change: one of the foreground not-on-assignment related application and the not-previously-in-foreground on-assignment related application swapped by the mobile computing device is different than one of the foreground not-on-assignment related application and the not-previously-in-foreground on-assignment related application swapped by the portable computing device; and when the determining, by the electronic computing device, is that the first user speech is indicative of the second status change: one of the foreground on-assignment related application and the not-previously-in-foreground not-on-assignment related application swapped by the mobile computing device is different than one of the foreground on-assignment related application and the not-previously-in-foreground not-on-assignment related application swapped by the portable computing device. 14. The method of claim 1, further wherein: when the determining, by the electronic computing device, is that the first user speech is indicative of the first status change, responsively: causing, by the electronic computing device, a state of the swapped in not-previously-in-foreground on-assignment related application to be modified based on information obtained from one of a plurality of foreground not-on-assignment related applications existing in a foreground prior to the first status change; and when the determining, by the electronic computing device, is that the first user speech is indicative of the second status change, responsively: causing, by the electronic computing device, a state of the swapped in not-previously-in-foreground not-on-assignment related application to be modified based on information obtained from one of a plurality of foreground on-assignment related applications existing in a foreground prior to the second status change. 15. 
The method of claim 1, further comprising transmitting, by the electronic computing device to an infrastructure computer aided dispatch (CAD) computing device, a message indicating one of the first and the second status change. 16. The method of claim 1, further comprising recording, by the electronic computing device in an assignment timeline application associated with an assignment, one of the first and the second status change associated with the in-field user. 17. The method of claim 1, further wherein: when the determining, by the electronic computing device, is that the first user speech is indicative of the first status change, responsively: a first time in which the first status change is detected, prompting the in-field user to confirm that the foreground not-on-assignment related application will be swapped with the not-previously-in-foreground on-assignment related application; and receiving confirmation from the in-field user; and subsequent times that the first status change is detected, automatically and without prompting the in-field user, swapping the foreground not-on-assignment related application with the not-previously-in-foreground on-assignment related application; and when the determining, by the electronic computing device, is that the first user speech is indicative of the second status change, responsively: a first time in which the second status change is detected, prompting the in-field user to confirm that the foreground on-assignment related application will be swapped with the not-previously-in-foreground not-on-assignment related application; and receiving confirmation from the in-field user; and subsequent times that the second status change is detected, automatically and without prompting the in-field user, swapping the foreground on-assignment related application with the not-previously-in-foreground not-on-assignment related application. 18. 
The method of claim 1, wherein: the on-assignment related status is a customer-service-assistance-event related status relative to a retail environment that the in-field user is currently responding to, and the not-on-assignment related status is a currently-available-to-assist-customers related status in which there is no current particular customer assistance event to which the in-field user is responding to. 19. The method of claim 18, further wherein the determining, by the electronic computing device, is that the first user speech is indicative of the first status change; and the method further comprising swapping a foreground currently-available-to-assist-customers related application comprising one of a mapping application providing an indoor department route for the in-field user to follow indoors to ensure that his or her department is covered and visible to customers, a PTT application for speaking to a talkgroup associated with all other employees or other users of a same department or store as the in-field user, a task list setting forth one or more tasks that the in-field user may choose to perform or accept, an incident list setting forth one or more current or past security, customer, or hazardous spill incidents associated with the in-field user or an organization to which the in-field user belongs, a status indicator application setting forth a status of the in-field user and/or other users in a same organization, a contact list setting forth identities of one or more other users or other employees of a same organization to which the in-field user belongs, and a general note taking application in which the in-field user may record notes relative to the indoor department route with a not-previously-in-foreground customer-service-assistance-event-related application comprising one of an indoor mapping application providing a route for the in-field user to follow to arrive at a location at which a customer has requested assistance, a PTT application 
for speaking to a talkgroup associated with a particularly assigned task associated with a retail incident, a task list setting forth one or more sub-tasks associated with a particularly assigned retail task, a status indicator application setting forth a status of the in-field user and/or the other users or other persons associated with a same assigned retail task, a contact list setting forth identities of one or more other users or other employees or persons associated with a same assigned retail task, and a task-specific note taking application in which the in-field user may record notes relative to the assigned task. 20. A computing device implementing an electronic digital assistant for natural language detection of a user status change and corresponding modification of a user interface, the electronic computing device comprising: a memory storing non-transitory computer-readable instructions; a transceiver; and one or more processors configured to, in response to executing the non-transitory computer-readable instructions, perform a first set of functions comprising: monitoring one of a private voice call and a talkgroup voice call associated with an in-field user; detect, over the one of the private voice call and the talkgroup voice call associated with the in-field user, first user speech from the in-field user; identify a current status of the in-field user of one of an on-assignment related status and a not-on-assignment related status; determine that the first user speech is indicative of one of (i) a first status change of the in-field user in which the current status of the in-field user is the not-on-assignment related status and the first user speech is indicative of a change to the on-assignment related status and (ii) a second status change of the in-field user in which the current status of the in-field user is the on-assignment related status and the first user speech is indicative of a change to the not-on-assignment related status; and when 
the determining is that the first user speech is indicative of the first status change, responsively: cause one of a mobile and a portable computing device associated with the in-field user to automatically and responsively swap a foreground not-on-assignment related application with a not-previously-in-foreground on-assignment related application; and when the determining is that the first user speech is indicative of the second status change, responsively: cause one of the mobile and the portable computing device associated with the in-field user to automatically and responsively swap a foreground on-assignment related application with a not-previously-in-foreground not-on-assignment related application.
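The core of claim 1 is a simple state machine: classify detected speech against the user's current status, and on a status change swap the foreground application on the user's device. A minimal sketch of that flow follows; the keyword lists, application names, and the `Device` class are illustrative assumptions, not part of the claimed system.

```python
from dataclasses import dataclass, field

# Hypothetical trigger phrases for each direction of status change.
ON_ASSIGNMENT_KEYWORDS = {"responding", "on scene", "en route"}
OFF_ASSIGNMENT_KEYWORDS = {"clear", "available", "back in service"}

@dataclass
class Device:
    foreground: str
    background: list = field(default_factory=list)

    def swap(self, new_foreground: str) -> None:
        # Send the current foreground app to the background and bring
        # the requested not-previously-in-foreground app forward.
        self.background.append(self.foreground)
        self.background.remove(new_foreground)
        self.foreground = new_foreground

def process_speech(speech: str, status: str, device: Device) -> str:
    """Return the (possibly updated) status after examining user speech."""
    words = speech.lower()
    if status == "not-on-assignment" and any(k in words for k in ON_ASSIGNMENT_KEYWORDS):
        device.swap("incident_map")       # an on-assignment related application
        return "on-assignment"
    if status == "on-assignment" and any(k in words for k in OFF_ASSIGNMENT_KEYWORDS):
        device.swap("patrol_route_map")   # a not-on-assignment related application
        return "not-on-assignment"
    return status  # no status change detected
```

For example, speech such as "Unit 12 responding to the alarm" while the user is not-on-assignment would bring the hypothetical `incident_map` application to the foreground.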
2,600
10,956
10,956
16,378,219
2,685
Methods, systems and apparatuses are described herein to provide adaptive severity functions for alerts, particularly security alerts. The adaptive severity functions may be aligned with an existing global security situation to upgrade or downgrade the severity of new and existing alerts. By taking into consideration the time factor along with other parameters, the alerts may be prioritized or reprioritized appropriately. The modification of the severity level for the alerts may be made based on rules and/or one or more triggering events or by using severity functions with or without the aid of artificial intelligence based on best-practice preferences.
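The time-based reprioritization described in the abstract can be illustrated with a small sketch in which an alert's severity decays toward zero over time unless a global-security-situation multiplier keeps it elevated. The half-life, the multiplier, and the 0–10 scale are assumptions for illustration, not values from the description.

```python
import math

def adjusted_severity(initial: float, hours_elapsed: float,
                      situation_multiplier: float = 1.0,
                      half_life_hours: float = 24.0) -> float:
    """One possible severity function: exponential decay over time,
    upgraded or downgraded by the current global security situation."""
    decayed = initial * math.exp(-math.log(2) * hours_elapsed / half_life_hours)
    # Clamp the result to an assumed 0-10 severity scale.
    return max(0.0, min(10.0, decayed * situation_multiplier))
```

With these assumptions, a severity-8 alert halves to 4 after one day, but a doubled situation multiplier restores it to 8, i.e. the same alert is reprioritized as the security situation changes.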
1. A method, comprising: receiving a security alert associated with an entity; determining a first severity level associated with the security alert; receiving data indicative of at least one environmental factor; generating, based at least on the received data, a second severity level associated with the security alert; and providing the security alert and associated second severity level to a user associated with the entity. 2. The method of claim 1, further comprising: updating the second severity level to a third severity level for the security alert based at least on a time factor. 3. The method of claim 1, further comprising: selecting a severity function template from a set of severity function templates based on a type of the security alert; and assigning the selected severity function template to the security alert. 4. The method of claim 3, further comprising: generating a severity function based on the severity function template and one or more environmental factors related to a security situation; and associating the severity function with the security alert. 5. The method of claim 4, wherein said generating, based at least on the received data, a second severity level associated with the security alert comprises: generating the second severity level based on the received data applied as input to the severity function. 6. The method of claim 5, wherein said generating a severity function based on the severity function template comprises: applying the one or more environmental factors to a machine learning algorithm to generate the severity function, the one or more environmental factors comprising at least one of an alert confidence measure, a resource importance indicator, a time factor, an alert type, other alerts and information thereof, a similarity of alerts measure, user information, or a similarity of users measure. 7. 
The method of claim 1, wherein said providing the security alert and associated second severity level to the user comprises at least one of: displaying the security alert and associated second severity level on a user interface; or notifying the user of the security alert and associated second severity level. 8. The method of claim 1, wherein said providing the security alert and associated second severity level to the user further comprises: providing explanatory information relating to a change from the first severity level to the second severity level of the security alert. 9. A system, comprising: one or more processing circuits; and one or more memory devices connected to the one or more processing circuits, the one or more memory devices storing program code that are executable by the one or more processing circuits, the program code comprising: a severity determiner configured to receive a security alert associated with an entity, and determine a first severity level associated with the security alert; a severity modifier configured to receive data indicative of at least one environmental factor, and generate, based at least on the received data, a second severity level associated with the security alert; and an alert manager configured to provide the security alert and associated second severity level to a user associated with the entity. 10. The system of claim 9, wherein the severity modifier is further configured to: update the second severity level to a third severity level for the security alert based at least on a time factor. 11. The system of claim 9, wherein the severity modifier is further configured to: select a severity function template from a set of severity function templates based on a type of the security alert; and assign the selected severity function template to the security alert. 12. 
The system of claim 11, wherein the severity modifier is further configured to: generate a severity function based on the severity function template and one or more environmental factors related to a security situation; and associate the severity function with the security alert. 13. The system of claim 12, wherein the severity modifier is further configured to generate the second severity level based on the received data applied as input to the severity function. 14. The system of claim 13, wherein the severity modifier is further configured to apply the one or more environmental factors to a machine learning algorithm to generate the severity function, the one or more environmental factors comprising at least one of an alert confidence measure, a resource importance indicator, a time factor, an alert type, other alerts and information thereof, a similarity of alerts measure, user information, or a similarity of users measure. 15. The system of claim 9, wherein the alert manager is further configured to provide the security alert and associated second severity level to the user by at least one of: displaying the security alert and associated second severity level on a user interface; or notifying the user of the security alert and associated second severity level. 16. The system of claim 9, wherein the alert manager is further configured to provide explanatory information relating to a change from the first severity level to the second severity level of the security alert. 17. 
A computer-readable memory device having program instructions recorded thereon that, when executed by at least one processing circuit, perform a method on a computing device for determining a severity level of a security alert, the method comprising: receiving a security alert associated with an entity; determining a first severity level associated with the security alert; receiving data indicative of at least one environmental factor; generating, based at least on the received data, a second severity level associated with the security alert; and providing the security alert and associated second severity level to a user associated with the entity. 18. The computer-readable memory device of claim 17, wherein the method further comprises: selecting a severity function template from a set of severity function templates based on a type of the security alert; and assigning the selected severity function template to the security alert. 19. The computer-readable memory device of claim 18, wherein the method further comprises: generating a severity function based on the severity function template and one or more environmental factors related to a security situation; and associating the severity function with the security alert. 20. The computer-readable memory device of claim 19, wherein the method further comprises: generating the second severity level based on the received data applied as input to the severity function.
Methods, systems and apparatuses are described herein to provide adaptive severity functions for alerts, particularly security alerts. The adaptive severity functions may be aligned with an existing global security situation to upgrade or downgrade the severity of new and existing alerts. By taking into consideration the time factor along with other parameters, the alerts may be prioritized or reprioritized appropriately. The modification of the severity level for the alerts may be made based on rules and/or one or more triggering events or by using severity functions with or without the aid of artificial intelligence based on best-practice preferences.1. A method, comprising: receiving a security alert associated with an entity; determining a first severity level associated with the security alert; receiving data indicative of at least one environmental factor; generating, based at least on the received data, a second severity level associated with the security alert; and providing the security alert and associated second severity level to a user associated with the entity. 2. The method of claim 1, further comprising: updating the second severity level to a third severity level for the security alert based at least on a time factor. 3. The method of claim 1, further comprising: selecting a severity function template from a set of severity function templates based on a type of the security alert; and assigning the selected severity function template to the security alert. 4. The method of claim 3, further comprising: generating a severity function based on the severity function template and one or more environmental factors related to a security situation; and associating the severity function with the security alert. 5. 
The method of claim 4, wherein said generating, based at least on the received data, a second severity level associated with the security alert comprises: generating the second severity level based on the received data applied as input to the severity function. 6. The method of claim 5, wherein said generating a severity function based on the severity function template comprises: applying the one or more environmental factors to a machine learning algorithm to generate the severity function, the one or more environmental factors comprising at least one of an alert confidence measure, a resource importance indicator, a time factor, an alert type, other alerts and information thereof, a similarity of alerts measure, user information, or a similarity of users measure. 7. The method of claim 1, wherein said providing the security alert and associated second severity level to the user comprises at least one of: displaying the security alert and associated second severity level on a user interface; or notifying the user of the security alert and associated second severity level. 8. The method of claim 1, wherein said providing the security alert and associated second severity level to the user further comprises: providing explanatory information relating to a change from the first severity level to the second severity level of the security alert. 9. 
A system, comprising: one or more processing circuits; and one or more memory devices connected to the one or more processing circuits, the one or more memory devices storing program code that is executable by the one or more processing circuits, the program code comprising: a severity determiner configured to receive a security alert associated with an entity, and determine a first severity level associated with the security alert; a severity modifier configured to receive data indicative of at least one environmental factor, and generate, based at least on the received data, a second severity level associated with the security alert; and an alert manager configured to provide the security alert and associated second severity level to a user associated with the entity. 10. The system of claim 9, wherein the severity modifier is further configured to: update the second severity level to a third severity level for the security alert based at least on a time factor. 11. The system of claim 9, wherein the severity modifier is further configured to: select a severity function template from a set of severity function templates based on a type of the security alert; and assign the selected severity function template to the security alert. 12. The system of claim 11, wherein the severity modifier is further configured to: generate a severity function based on the severity function template and one or more environmental factors related to a security situation; and associate the severity function with the security alert. 13. The system of claim 12, wherein the severity modifier is further configured to generate the second severity level based on the received data applied as input to the severity function. 14. 
The system of claim 13, wherein the severity modifier is further configured to apply the one or more environmental factors to a machine learning algorithm to generate the severity function, the one or more environmental factors comprising at least one of an alert confidence measure, a resource importance indicator, a time factor, an alert type, other alerts and information thereof, a similarity of alerts measure, user information, or a similarity of users measure. 15. The system of claim 9, wherein the alert manager is further configured to provide the security alert and associated second severity level to the user by at least one of: displaying the security alert and associated second severity level on a user interface; or notifying the user of the security alert and associated second severity level. 16. The system of claim 9, wherein the alert manager is further configured to provide explanatory information relating to a change from the first severity level to the second severity level of the security alert. 17. A computer-readable memory device having program instructions recorded thereon that, when executed by at least one processing circuit, perform a method on a computing device for determining a severity level of a security alert, the method comprising: receiving a security alert associated with an entity; determining a first severity level associated with the security alert; receiving data indicative of at least one environmental factor; generating, based at least on the received data, a second severity level associated with the security alert; and providing the security alert and associated second severity level to a user associated with the entity. 18. The computer-readable memory device of claim 17, wherein the method further comprises: selecting a severity function template from a set of severity function templates based on a type of the security alert; and assigning the selected severity function template to the security alert. 19. 
The computer-readable memory device of claim 18, wherein the method further comprises: generating a severity function based on the severity function template and one or more environmental factors related to a security situation; and associating the severity function with the security alert. 20. The computer-readable memory device of claim 19, wherein the method further comprises: generating the second severity level based on the received data applied as input to the severity function.
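The severity-function idea recited in claims 3-6 (a template instantiated with environmental factors, applied to produce a second severity level, with a time factor that can later upgrade or downgrade it) can be sketched in code. The following is a minimal illustrative sketch only: the weighting of the confidence and resource-importance factors, the exponential time decay, the 0-10 severity scale, and all names are assumptions, not details from the claims.

```python
from dataclasses import dataclass
import math


@dataclass
class SecurityAlert:
    alert_type: str
    base_severity: float        # first severity level (0.0 - 10.0)
    confidence: float           # alert confidence measure (0.0 - 1.0)
    resource_importance: float  # resource importance indicator (0.0 - 1.0)


def severity_function(alert: SecurityAlert, hours_elapsed: float) -> float:
    """Generate a second severity level from environmental factors.

    The weights and the exponential time decay are illustrative
    assumptions; a real system might learn them (claim 6 mentions a
    machine learning algorithm) or draw them from a template.
    """
    # Environmental factors can upgrade the initial severity...
    environmental_boost = 2.0 * alert.confidence * alert.resource_importance
    # ...while the time factor gradually downgrades stale alerts.
    time_decay = math.exp(-hours_elapsed / 24.0)
    score = (alert.base_severity + environmental_boost) * time_decay
    return max(0.0, min(10.0, score))  # clamp to the severity scale


alert = SecurityAlert("brute_force", base_severity=6.0,
                      confidence=0.9, resource_importance=1.0)
fresh = severity_function(alert, hours_elapsed=0.0)   # upgraded to 7.8
stale = severity_function(alert, hours_elapsed=48.0)  # decayed well below
```

Evaluating the same function again later with a larger `hours_elapsed` is one simple way to realize the claim-2 behavior of updating the second severity level to a third one based on the time factor.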
2,600
10,957
10,957
12,796,425
2,683
A lock system includes an access point including a portion movable between an open position and a closed position and a lock mechanism coupled to the access point and movable between a locked position in which the access point is maintained in the closed position and an unlocked position in which the access point is freely movable between the open position and the closed position. A wireless module is coupled to the lock mechanism and is operable to move the lock mechanism between the locked position and the unlocked position. The wireless module includes a receiver operable in a first mode to periodically listen for a signal and operable in a second mode in response to receipt of the signal to receive data, the first mode consuming a first amount of power that is less than a second amount of power consumed during operation in the second mode.
1. A lock system comprising: an access point including a portion movable between an open position and a closed position; a lock mechanism coupled to the access point and movable between a locked position in which the access point is maintained in the closed position and an unlocked position in which the access point is freely movable between the open position and the closed position; and a wireless module coupled to the lock mechanism and operable to move the lock mechanism between the locked position and the unlocked position, the wireless module including a receiver operable in a first mode to periodically listen for a signal and operable in a second mode in response to receipt of the signal to receive data, the first mode consuming a first amount of power that is less than a second amount of power consumed during operation in the second mode. 2. The lock system of claim 1, further comprising a credential reader coupled to the access point and operable to collect a user identifier from a user attempting to move the movable portion to the open position. 3. The lock system of claim 2, wherein the wireless module includes a processor and a memory storage device. 4. The lock system of claim 3, wherein the memory storage device includes a database including valid user identifiers, and wherein the processor is operable to compare the collected user identifier to the stored valid user identifiers to make an access decision at the access point. 5. The lock system of claim 4, wherein the only communication required to make an access decision is a communication between the credential reader and the wireless module. 6. The lock system of claim 2, wherein the credential reader includes at least one of a card reader, a keypad, a biometric reader, and a proximity detector. 7. 
The lock system of claim 1, wherein the received data is a global command and the lock mechanism is transitioned to and maintained in one of a locked and unlocked state in response to the receipt of the global command. 8. The lock system of claim 7, wherein the access point is one of a plurality of access points, the lock mechanism is one of a plurality of lock mechanisms each associated with one of the access points and the wireless module is one of a plurality of wireless modules, each wireless module associated with one of the lock mechanisms. 9. The lock system of claim 8, wherein each receiver listens for the signal at a period that is selected to assure that each of the lock mechanisms transitions to the locked or unlocked state less than five seconds after the transmission of the global command. 10. The lock system of claim 1, wherein the receiver listens for the signal at an interval between about 5 seconds and 15 seconds. 11. The lock system of claim 1, wherein the receiver includes a first receiver that operates at a first power level to listen for the signal and a second receiver separate from the first receiver and operable at a second power level greater than the first power level to receive the data. 12. The lock system of claim 1, wherein the receiver is a part of a transceiver. 13. 
A lock system comprising: an access point having a portion that is movable between an open position and a closed position; a lock mechanism coupled to the access point and movable between a locked position in which the access point is maintained in the closed position and an unlocked position in which the access point is freely movable between the open position and the closed position; a first receiver periodically operable at a first power level to detect a beacon; and a second receiver separate from the first receiver and operable at a second power level greater than the first power level to receive data, the second receiver operable in a sleep mode and an active mode, the second receiver transitioning from the sleep mode to the active mode in response to receipt of the beacon by the first receiver. 14. The lock system of claim 13, further comprising a credential reader coupled to the access point and operable to collect a user identifier from a user attempting to move the movable portion to the open position. 15. The lock system of claim 14, further comprising a processor and a memory storage device, wherein the memory storage device includes a database including valid user identifiers, and wherein the processor is operable to compare the collected user identifier to the stored valid user identifiers to make an access decision at the access point. 16. The lock system of claim 14, wherein the credential reader includes at least one of a card reader, a keypad, a biometric reader, and a proximity detector. 17. The lock system of claim 13, wherein the received data is a global command and the lock mechanism is transitioned to and maintained in one of a locked and unlocked state in response to the receipt of the global command. 18. 
The lock system of claim 17, wherein the access point is one of a plurality of access points, the lock mechanism is one of a plurality of lock mechanisms each associated with one of the access points, the first receiver is one of a plurality of first receivers, and the second receiver is one of a plurality of second receivers, each first receiver and second receiver associated with one of the lock mechanisms. 19. The lock system of claim 18, wherein each first receiver listens for the beacon at a period that is selected to assure that each of the lock mechanisms transitions to the locked or unlocked state less than five seconds after the transmission of the global command. 20. The lock system of claim 13, wherein the first receiver listens for the beacon at an interval between about 5 seconds and 15 seconds. 21. The lock system of claim 13, wherein the second receiver is a part of a transceiver. 22. A method of operating a wireless lock system, the method comprising: periodically operating a plurality of receivers at a low power level to detect a beacon, each receiver associated with a different lock mechanism; transitioning each receiver to a high power level in response to detection of the beacon; receiving data using each receiver while operating at the high power level; and transitioning each lock mechanism to one of a locked position and an unlocked position in response to the received data. 23. The method of claim 22, wherein the period at which the receivers attempt to detect the beacon is between about 5 seconds and 15 seconds.
A lock system includes an access point including a portion movable between an open position and a closed position and a lock mechanism coupled to the access point and movable between a locked position in which the access point is maintained in the closed position and an unlocked position in which the access point is freely movable between the open position and the closed position. A wireless module is coupled to the lock mechanism and is operable to move the lock mechanism between the locked position and the unlocked position. The wireless module includes a receiver operable in a first mode to periodically listen for a signal and operable in a second mode in response to receipt of the signal to receive data, the first mode consuming a first amount of power that is less than a second amount of power consumed during operation in the second mode.1. A lock system comprising: an access point including a portion movable between an open position and a closed position; a lock mechanism coupled to the access point and movable between a locked position in which the access point is maintained in the closed position and an unlocked position in which the access point is freely movable between the open position and the closed position; and a wireless module coupled to the lock mechanism and operable to move the lock mechanism between the locked position and the unlocked position, the wireless module including a receiver operable in a first mode to periodically listen for a signal and operable in a second mode in response to receipt of the signal to receive data, the first mode consuming a first amount of power that is less than a second amount of power consumed during operation in the second mode. 2. The lock system of claim 1, further comprising a credential reader coupled to the access point and operable to collect a user identifier from a user attempting to move the movable portion to the open position. 3. 
The lock system of claim 2, wherein the wireless module includes a processor and a memory storage device. 4. The lock system of claim 3, wherein the memory storage device includes a database including valid user identifiers, and wherein the processor is operable to compare the collected user identifier to the stored valid user identifiers to make an access decision at the access point. 5. The lock system of claim 4, wherein the only communication required to make an access decision is a communication between the credential reader and the wireless module. 6. The lock system of claim 2, wherein the credential reader includes at least one of a card reader, a keypad, a biometric reader, and a proximity detector. 7. The lock system of claim 1, wherein the received data is a global command and the lock mechanism is transitioned to and maintained in one of a locked and unlocked state in response to the receipt of the global command. 8. The lock system of claim 7, wherein the access point is one of a plurality of access points, the lock mechanism is one of a plurality of lock mechanisms each associated with one of the access points and the wireless module is one of a plurality of wireless modules, each wireless module associated with one of the lock mechanisms. 9. The lock system of claim 8, wherein each receiver listens for the signal at a period that is selected to assure that each of the lock mechanisms transitions to the locked or unlocked state less than five seconds after the transmission of the global command. 10. The lock system of claim 1, wherein the receiver listens for the signal at an interval between about 5 seconds and 15 seconds. 11. The lock system of claim 1, wherein the receiver includes a first receiver that operates at a first power level to listen for the signal and a second receiver separate from the first receiver and operable at a second power level greater than the first power level to receive the data. 12. 
The lock system of claim 1, wherein the receiver is a part of a transceiver. 13. A lock system comprising: an access point having a portion that is movable between an open position and a closed position; a lock mechanism coupled to the access point and movable between a locked position in which the access point is maintained in the closed position and an unlocked position in which the access point is freely movable between the open position and the closed position; a first receiver periodically operable at a first power level to detect a beacon; and a second receiver separate from the first receiver and operable at a second power level greater than the first power level to receive data, the second receiver operable in a sleep mode and an active mode, the second receiver transitioning from the sleep mode to the active mode in response to receipt of the beacon by the first receiver. 14. The lock system of claim 13, further comprising a credential reader coupled to the access point and operable to collect a user identifier from a user attempting to move the movable portion to the open position. 15. The lock system of claim 14, further comprising a processor and a memory storage device, wherein the memory storage device includes a database including valid user identifiers, and wherein the processor is operable to compare the collected user identifier to the stored valid user identifiers to make an access decision at the access point. 16. The lock system of claim 14, wherein the credential reader includes at least one of a card reader, a keypad, a biometric reader, and a proximity detector. 17. The lock system of claim 13, wherein the received data is a global command and the lock mechanism is transitioned to and maintained in one of a locked and unlocked state in response to the receipt of the global command. 18. 
The lock system of claim 17, wherein the access point is one of a plurality of access points, the lock mechanism is one of a plurality of lock mechanisms each associated with one of the access points, the first receiver is one of a plurality of first receivers, and the second receiver is one of a plurality of second receivers, each first receiver and second receiver associated with one of the lock mechanisms. 19. The lock system of claim 18, wherein each first receiver listens for the beacon at a period that is selected to assure that each of the lock mechanisms transitions to the locked or unlocked state less than five seconds after the transmission of the global command. 20. The lock system of claim 13, wherein the first receiver listens for the beacon at an interval between about 5 seconds and 15 seconds. 21. The lock system of claim 13, wherein the second receiver is a part of a transceiver. 22. A method of operating a wireless lock system, the method comprising: periodically operating a plurality of receivers at a low power level to detect a beacon, each receiver associated with a different lock mechanism; transitioning each receiver to a high power level in response to detection of the beacon; receiving data using each receiver while operating at the high power level; and transitioning each lock mechanism to one of a locked position and an unlocked position in response to the received data. 23. The method of claim 22, wherein the period at which the receivers attempt to detect the beacon is between about 5 seconds and 15 seconds.
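The two-receiver power scheme in these claims (a low-power first receiver periodically sampling for a beacon, and a higher-power second receiver that sleeps until the beacon is detected and then receives a global lock/unlock command) maps naturally to a small state machine. The sketch below is a behavioral illustration under stated assumptions: the class and method names, the 10-second listen interval, and the `GLOBAL_LOCK`/`GLOBAL_UNLOCK` command strings are all hypothetical.

```python
from enum import Enum


class Mode(Enum):
    SLEEP = "sleep"
    ACTIVE = "active"


class WirelessLockModule:
    """Behavioral sketch of the dual-receiver wake-on-beacon scheme.

    The first receiver's periodic poll is cheap; the second receiver
    only draws its higher power while ACTIVE, matching the claim that
    the first mode consumes less power than the second.
    """

    LISTEN_INTERVAL_S = 10  # illustrative; within the claimed ~5-15 s range

    def __init__(self) -> None:
        self.second_receiver = Mode.SLEEP
        self.locked = True

    def poll_beacon(self, beacon_present: bool) -> None:
        # First receiver: low-power periodic check for the beacon.
        if beacon_present and self.second_receiver is Mode.SLEEP:
            self.second_receiver = Mode.ACTIVE  # wake high-power receiver

    def receive(self, command: str) -> None:
        # Second receiver: data is only consumed while active.
        if self.second_receiver is Mode.ACTIVE:
            if command == "GLOBAL_LOCK":
                self.locked = True
            elif command == "GLOBAL_UNLOCK":
                self.locked = False
            self.second_receiver = Mode.SLEEP  # return to low power


lock = WirelessLockModule()
lock.receive("GLOBAL_UNLOCK")          # ignored: second receiver asleep
lock.poll_beacon(beacon_present=True)  # beacon wakes the second receiver
lock.receive("GLOBAL_UNLOCK")          # now processed; lock opens
```

Broadcasting the beacon for slightly longer than the listen interval guarantees every module's first receiver samples it at least once, which is why the claims can bound the lock-state transition to under five seconds of a global command.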
2,600
10,958
10,958
16,376,905
2,616
Structural modifications to a person's face in a reference image are captured and automatically applied to the person's face in another image. The reference image is processed to compute landmark information on the person's face and apply a mesh to the reference image. When structural modifications are made to the person's face in the reference image, the mesh is modified, and the modified mesh is stored in association with the landmark information. Another image is analyzed to compute landmark information on the person's face in that image and apply a mesh to the image. A transformation matrix is computed using the landmark information from the reference image and current image, and the modified mesh from the reference image is transformed using the transformation matrix. The mesh in the current image is modified using the transformed mesh, thereby applying the structural modification to the person's face in the current image.
1. One or more computer storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform operations, the operations comprising: capturing a structural modification made to a person's face in a first image relative to landmark information on the person's face in the first image; and automatically applying a structural modification to the person's face in a second image using the captured structural modification from the first image and landmark information on the person's face in the second image. 2. The one or more computer storage media of claim 1, wherein capturing the structural modification made to the person's face in the first image comprises: analyzing the first image to detect the face and compute the landmark information on the person's face in the first image. 3. The one or more computer storage media of claim 2, wherein the landmark information on the person's face in the first image comprises a vector of points on features of the person's face in the first image, the features comprising one or more selected from the following: eye, ear, nose, lip, eyebrow, hairline, and jawline. 4. The one or more computer storage media of claim 2, wherein capturing the structural modification made to the person's face in the first image comprises: generating a unique facial ID that comprises the landmark information from the first image; and storing the unique facial ID. 5. The one or more computer storage media of claim 1, wherein capturing the structural modification made to the person's face in the first image comprises: applying a mesh to the first image, the mesh dividing the first image into a plurality of blocks; receiving user input making the structural modification to the person's face in the first image; and modifying a portion of the blocks in the mesh in the first image based on the user input to provide a modified mesh. 6. 
The one or more computer storage media of claim 5, wherein each block in the mesh is a rectangle of 8 by 8 pixels. 7. The one or more computer storage media of claim 5, wherein capturing the structural modification made to the person's face in the first image further comprises: storing the modified mesh in association with the landmark information from the first image. 8. The one or more computer storage media of claim 7, wherein automatically applying the structural modification to the person's face in the second image comprises: applying a mesh to the second image, the mesh dividing the second image into a plurality of blocks; generating a transformation matrix using the landmark information from the first image and the landmark information from the second image; applying the transformation matrix to the modified mesh from the first image to generate a transformed mesh; and modifying the mesh in the second image using the transformed mesh to apply the structural modification to the person's face in the second image. 9. The one or more computer storage media of claim 1, wherein automatically applying the structural modification to the person's face in the second image comprises: analyzing the second image to detect the face and compute the landmark information on the person's face in the second image. 10. The one or more computer storage media of claim 9, wherein automatically applying the structural modification to the person's face in the second image comprises: performing facial recognition to identify the person's face in the second image as corresponding to the person's face in the first image. 11. The one or more computer storage media of claim 1, wherein the first image also includes a second person's face and the operations further comprise: capturing a structural modification made to the second person's face in the first image relative to landmark information on the second person's face in the first image. 12. 
The one or more computer storage media of claim 1, wherein the second image also includes a second person's face and the operations further comprise: automatically applying a structural modification to the second person's face in the second image using a captured structural modification to the second person's face in another image and landmark information on the second person's face in the second image. 13. A computerized method for image editing, the method comprising: analyzing a first image to detect a face in the first image and compute landmark information on the face in the first image; applying a mesh to the first image, the mesh dividing the first image into a plurality of blocks; modifying a portion of the blocks in the mesh in the first image to provide a modified mesh based on user input making a structural modification to the face in the first image; analyzing a second image to detect the face in the second image and compute landmark information on the face in the second image; applying a mesh to the second image, the mesh dividing the second image into a plurality of blocks; generating a transformation matrix using the landmark information from the first image and the landmark information from the second image; applying the transformation matrix to the modified mesh from the first image to generate a transformed mesh; and modifying the mesh in the second image using the transformed mesh to apply a structural modification to the face in the second image. 14. The computerized method of claim 13, the method further comprising: generating a unique facial ID that comprises the landmark information from the first image; and storing the unique facial ID. 15. The computerized method of claim 14, the method further comprising: storing at least a portion of the modified mesh from the first image in association with the unique facial ID. 16. 
The computerized method of claim 13, the method further comprising: performing facial recognition to identify the face in the first image as corresponding to the face in the second image. 17. The computerized method of claim 13, wherein each block from the plurality of blocks of the mesh in the first image and the plurality of blocks of the mesh in the second image comprises a rectangle of 8 by 8 pixels. 18. The computerized method of claim 13, the method further comprising: detecting a second face in the first image and computing landmark information on the second face in the first image; and modifying a second portion of the blocks in the mesh in the first image based on further user input making a structural modification to the second face in the first image. 19. The computerized method of claim 13, the method further comprising: detecting a second face in the second image and computing landmark information on the second face in the second image; generating a second transformation matrix using landmark information on the second face from a further image and the landmark information on the second face in the second image; applying the second transformation matrix to a second modified mesh from the further image to generate a second transformed mesh; and modifying the mesh in the second image using the second transformed mesh to apply a structural modification to the second face in the second image. 20. 
A computer system comprising: means for capturing a structural modification made to a face in a first image relative to landmark information on the face in the first image; a storage component storing information regarding the captured structural modification to the face in the first image relative to the landmark information on the face in the first image; means for automatically applying a structural modification to the face in a second image relative to landmark information on the face in the second image using the captured structural modification to the face in the first image.
Structural modifications to a person's face in a reference image are captured and automatically applied to the person's face in another image. The reference image is processed to compute landmark information on the person's face and apply a mesh to the reference image. When structural modifications are made to the person's face in the reference image, the mesh is modified, and the modified mesh is stored in association with the landmark information. Another image is analyzed to compute landmark information on the person's face in that image and apply a mesh to the image. A transformation matrix is computed using the landmark information from the reference image and current image, and the modified mesh from the reference image is transformed using the transformation matrix. The mesh in the current image is modified using the transformed mesh, thereby applying the structural modification to the person's face in the current image.1. One or more computer storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform operations, the operations comprising: capturing a structural modification made to a person's face in a first image relative to landmark information on the person's face in the first image; and automatically applying a structural modification to the person's face in a second image using the captured structural modification from the first image and landmark information on the person's face in the second image. 2. The one or more computer storage media of claim 1, wherein capturing the structural modification made to the person's face in the first image comprises: analyzing the first image to detect the face and compute the landmark information on the person's face in the first image. 3. 
The one or more computer storage media of claim 2, wherein the landmark information on the person's face in the first image comprises a vector of points on features of the person's face in the first image, the features comprising one or more selected from the following: eye, ear, nose, lip, eyebrow, hairline, and jawline. 4. The one or more computer storage media of claim 2, wherein capturing the structural modification made to the person's face in the first image comprises: generating a unique facial ID that comprises the landmark information from the first image; and storing the unique facial ID. 5. The one or more computer storage media of claim 1, wherein capturing the structural modification made to the person's face in the first image comprises: applying a mesh to the first image, the mesh dividing the first image into a plurality of blocks; receiving user input making the structural modification to the person's face in the first image; and modifying a portion of the blocks in the mesh in the first image based on the user input to provide a modified mesh. 6. The one or more computer storage media of claim 5, wherein each block in the mesh is a rectangle of 8 by 8 pixels. 7. The one or more computer storage media of claim 5, wherein capturing the structural modification made to the person's face in the first image further comprises: storing the modified mesh in association with the landmark information from the first image. 8. 
The one or more computer storage media of claim 7, wherein automatically applying the structural modification to the person's face in the second image comprises: applying a mesh to the second image, the mesh dividing the second image into a plurality of blocks; generating a transformation matrix using the landmark information from the first image and the landmark information from the second image; applying the transformation matrix to the modified mesh from the first image to generate a transformed mesh; and modifying the mesh in the second image using the transformed mesh to apply the structural modification to the person's face in the second image. 9. The one or more computer storage media of claim 1, wherein automatically applying the structural modification to the person's face in the second image comprises: analyzing the second image to detect the face and compute the landmark information on the person's face in the second image. 10. The one or more computer storage media of claim 9, wherein automatically applying the structural modification to the person's face in the second image comprises: performing facial recognition to identify the person's face in the second image as corresponding to the person's face in the first image. 11. The one or more computer storage media of claim 1, wherein the first image also includes a second person's face and the operations further comprise: capturing a structural modification made to the second person's face in the first image relative to landmark information on the second person's face in the first image. 12. The one or more computer storage media of claim 1, wherein the second image also includes a second person's face and the operations further comprise: automatically applying a structural modification to the second person's face in the second image using a captured structural modification to the second person's face in another image and landmark information on the second person's face in the second image. 13. 
A computerized method for image editing, the method comprising: analyzing a first image to detect a face in the first image and compute landmark information on the face in the first image; applying a mesh to the first image, the mesh dividing the first image into a plurality of blocks; modifying a portion of the blocks in the mesh in the first image to provide a modified mesh based on user input making a structural modification to the face in the first image; analyzing a second image to detect the face in the second image and compute landmark information on the face in the second image; applying a mesh to the second image, the mesh dividing the second image into a plurality of blocks; generating a transformation matrix using the landmark information from the first image and the landmark information from the second image; applying the transformation matrix to the modified mesh from the first image to generate a transformed mesh; and modifying the mesh in the second image using the transformed mesh to apply a structural modification to the face in the second image. 14. The computerized method of claim 13, the method further comprising: generating a unique facial ID that comprises the landmark information from the first image; and storing the unique facial ID. 15. The computerized method of claim 14, the method further comprising: storing at least a portion of the modified mesh from the first image in association with the unique facial ID. 16. The computerized method of claim 13, the method further comprising: performing facial recognition to identify the face in the first image as corresponding to the face in the second image. 17. The computerized method of claim 13, wherein each block from the plurality of blocks of the mesh in the first image and the plurality of blocks of the mesh in the second image comprises a rectangle of 8 by 8 pixels. 18. 
The computerized method of claim 13, the method further comprising: detecting a second face in the first image and computing landmark information on the second face in the first image; and modifying a second portion of the blocks in the mesh in the first image based on further user input making a structural modification to the second face in the first image; 19. The computerized method of claim 13, the method further comprising: detecting a second face in the second image and computing landmark information on the second face in the second image; generating a second transformation matrix using landmark information on the second face from a further image and the landmark information on the second face in the second image; applying the second transformation matrix to a second modified mesh from the further image to generate a second transformed mesh; and modifying the mesh in the second image using the second transformed mesh to apply a structural modification to the second face in the second image. 20. A computer system comprising: means for capturing a structural modification made to a face in a first image relative to landmark information on the face in the first image; a storage component storing information regarding the captured structural modification to the face in the first image relative to the landmark information on the face in the first image; means for automatically applying a structural modification to the face in a second image relative to landmark information on the face in the second image using the captured structural modification to the face in the first image.
2,600
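The face-editing record above transfers a structural modification between images by computing a transformation matrix from facial landmarks and applying it to the stored modified mesh. A minimal sketch of that step, assuming a 2D affine model fitted by least squares (the patent does not specify the transform family, and the function names here are illustrative):

```python
import numpy as np

def affine_from_landmarks(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix mapping landmarks in the first
    (reference) image onto landmarks on the same face in the second image."""
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])       # homogeneous n x 3
    # Solve A @ X = dst_pts for X (3 x 2), then transpose to 2 x 3.
    X, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return X.T

def transfer_mesh(modified_mesh, M):
    """Apply the transformation matrix to every vertex of the stored
    modified mesh, yielding the transformed mesh for the second image."""
    n = len(modified_mesh)
    homog = np.hstack([modified_mesh, np.ones((n, 1))])
    return homog @ M.T

# Hypothetical landmark vectors for the same face in two images:
src = np.array([[10.0, 10.0], [50.0, 12.0], [30.0, 40.0]])
dst = src * 1.5 + np.array([5.0, -3.0])             # face scaled and shifted
M = affine_from_landmarks(src, dst)
warped = transfer_mesh(src, M)                      # lands on dst exactly
```

In the claimed method the warped vertices would then drive the per-block mesh deformation in the second image; this sketch only covers the landmark-to-matrix-to-mesh pipeline.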
10,959
10,959
16,884,663
2,646
A battery in a mobile phone cover may be selectively charged according to a user-selectable parameter. When charged, the battery in the mobile phone cover may be used to charge a battery in a mobile phone.
1. A mobile phone, comprising: one or more processors configured to selectively couple a battery of a mobile phone cover to a charging interface according to a passthrough charging parameter. 2. The mobile phone according to claim 1, wherein the mobile phone cover comprises the charging interface. 3. The mobile phone according to claim 1, wherein the charging interface is configured to be operably coupled to an external power source. 4. The mobile phone according to claim 1, wherein the charging interface is configured to be operably coupled to a battery of the mobile phone, and wherein the charging interface is external to the mobile phone. 5. The mobile phone according to claim 1, wherein a battery of the mobile phone is operably coupled to the battery of the mobile phone cover. 6. The mobile phone according to claim 1, wherein the one or more processors are configured to provide a user interface associated with the passthrough charging parameter. 7. The mobile phone according to claim 1, wherein when the passthrough charging parameter is selected, a battery of the mobile phone is operably coupled to the charging interface and the battery of the mobile phone cover is decoupled from the charging interface. 8. The mobile phone according to claim 1, wherein when the passthrough charging parameter is deselected, a battery of the mobile phone is operably coupled to the charging interface and the battery of the mobile phone cover is operably coupled to the charging interface. 9. The mobile phone according to claim 1, wherein the one or more processors are configured to generate a warning if a flow of charge to the charging interface is below a threshold. 10. The mobile phone according to claim 1, wherein when a flow of charge to the charging interface is below a threshold, the one or more processors are configured to prioritize one of charging the battery of the mobile phone cover and charging a battery of the mobile phone, wherein the prioritization is selectable. 11. 
A mobile phone cover, comprising: a cover charging circuit; and a cover battery, wherein: the cover charging circuit is configured to selectively couple the cover battery to an external power source according to a passthrough charging parameter. 12. The mobile phone cover according to claim 11, wherein the mobile phone cover is positioned around a mobile phone. 13. The mobile phone cover according to claim 11, wherein the cover charging circuit is operably coupled to a charging interface of a mobile phone. 14. The mobile phone cover according to claim 11, wherein the cover charging circuit is configured to operably couple a mobile phone battery to the external power source. 15. The mobile phone cover according to claim 11, wherein the cover charging circuit is configured to operably couple the cover battery to a mobile phone battery. 16. The mobile phone cover according to claim 11, wherein the passthrough charging parameter is selected via a mobile phone. 17. The mobile phone cover according to claim 11, wherein when the passthrough charging parameter is selected, a mobile phone battery is operably coupled to the external power source and the cover battery is decoupled from the external power source. 18. The mobile phone cover according to claim 11, wherein when the passthrough charging parameter is deselected, a mobile phone battery is operably coupled to the external power source and the cover battery is operably coupled to the external power source. 19. The mobile phone cover according to claim 11, wherein the cover charging circuit is configured to indicate when a flow of charge from the external power source is below a threshold. 20. The mobile phone cover according to claim 11, wherein when a flow of charge from the external power source is below a threshold, one of the cover battery and a mobile phone battery is selectively charged. 21. The mobile phone according to claim 1, wherein the passthrough charging parameter is selected via the mobile phone cover. 
22. The mobile phone cover according to claim 11, wherein the passthrough charging parameter is selected via the mobile phone cover. 23. A mobile application, comprising: a user interface configured to selectively couple a battery of a mobile phone cover to a charging interface according to a passthrough charging parameter. 24. The mobile application according to claim 23, wherein the mobile phone cover comprises the charging interface. 25. The mobile application according to claim 23, wherein the charging interface is configured to be operably coupled to an external power source. 26. The mobile application according to claim 23, wherein the charging interface is configured to be operably coupled to a battery of the mobile phone, and wherein the charging interface is external to the mobile phone. 27. The mobile application according to claim 23, wherein a battery of the mobile phone is operably coupled to the battery of the mobile phone cover. 28. The mobile application according to claim 23, wherein when the passthrough charging parameter is selected, a battery of the mobile phone is operably coupled to the charging interface and the battery of the mobile phone cover is decoupled from the charging interface. 29. The mobile application according to claim 23, wherein when the passthrough charging parameter is deselected, a battery of the mobile phone is operably coupled to the charging interface and the battery of the mobile phone cover is operably coupled to the charging interface. 30. The mobile application according to claim 23, wherein the mobile application is configured to generate a warning if a flow of charge to the charging interface is below a threshold. 31. 
The mobile application according to claim 23, wherein when a flow of charge to the charging interface is below a threshold, the mobile application is configured to prioritize one of charging the battery of the mobile phone cover and charging a battery of the mobile phone, wherein the prioritization is selectable. 32. The mobile phone cover according to claim 23, wherein the passthrough charging parameter is selected via the mobile phone cover.
A battery in a mobile phone cover may be selectively charged according to a user-selectable parameter. When charged, the battery in the mobile phone cover may be used to charge a battery in a mobile phone.1. A mobile phone, comprising: one or more processors configured to selectively couple a battery of a mobile phone cover to a charging interface according to a passthrough charging parameter. 2. The mobile phone according to claim 1, wherein the mobile phone cover comprises the charging interface. 3. The mobile phone according to claim 1, wherein the charging interface is configured to be operably coupled to an external power source. 4. The mobile phone according to claim 1, wherein the charging interface is configured to be operably coupled to a battery of the mobile phone, and wherein the charging interface is external to the mobile phone. 5. The mobile phone according to claim 1, wherein a battery of the mobile phone is operably coupled to the battery of the mobile phone cover. 6. The mobile phone according to claim 1, wherein the one or more processors are configured to provide a user interface associated with the passthrough charging parameter. 7. The mobile phone according to claim 1, wherein when the passthrough charging parameter is selected, a battery of the mobile phone is operably coupled to the charging interface and the battery of the mobile phone cover is decoupled from the charging interface. 8. The mobile phone according to claim 1, wherein when the passthrough charging parameter is deselected, a battery of the mobile phone is operably coupled to the charging interface and the battery of the mobile phone cover is operably coupled to the charging interface. 9. The mobile phone according to claim 1, wherein the one or more processors are configured to generate a warning if a flow of charge to the charging interface is below a threshold. 10. 
The mobile phone according to claim 1, wherein when a flow of charge to the charging interface is below a threshold, the one or more processors are configured to prioritize one of charging the battery of the mobile phone cover and charging a battery of the mobile phone, wherein the prioritization is selectable. 11. A mobile phone cover, comprising: a cover charging circuit; and a cover battery, wherein: the cover charging circuit is configured to selectively couple the cover battery to an external power source according to a passthrough charging parameter. 12. The mobile phone cover according to claim 11, wherein the mobile phone cover is positioned around a mobile phone. 13. The mobile phone cover according to claim 11, wherein the cover charging circuit is operably coupled to a charging interface of a mobile phone. 14. The mobile phone cover according to claim 11, wherein the cover charging circuit is configured to operably couple a mobile phone battery to the external power source. 15. The mobile phone cover according to claim 11, wherein the cover charging circuit is configured to operably couple the cover battery to a mobile phone battery. 16. The mobile phone cover according to claim 11, wherein the passthrough charging parameter is selected via a mobile phone. 17. The mobile phone cover according to claim 11, wherein when the passthrough charging parameter is selected, a mobile phone battery is operably coupled to the external power source and the cover battery is decoupled from the external power source. 18. The mobile phone cover according to claim 11, wherein when the passthrough charging parameter is deselected, a mobile phone battery is operably coupled to the external power source and the cover battery is operably coupled to the external power source. 19. The mobile phone cover according to claim 11, wherein the cover charging circuit is configured to indicate when a flow of charge from the external power source is below a threshold. 20. 
The mobile phone cover according to claim 11, wherein when a flow of charge from the external power source is below a threshold, one of the cover battery and a mobile phone battery is selectively charged. 21. The mobile phone according to claim 1, wherein the passthrough charging parameter is selected via the mobile phone cover. 22. The mobile phone cover according to claim 11, wherein the passthrough charging parameter is selected via the mobile phone cover. 23. A mobile application, comprising: a user interface configured to selectively couple a battery of a mobile phone cover to a charging interface according to a passthrough charging parameter. 24. The mobile application according to claim 23, wherein the mobile phone cover comprises the charging interface. 25. The mobile application according to claim 23, wherein the charging interface is configured to be operably coupled to an external power source. 26. The mobile application according to claim 23, wherein the charging interface is configured to be operably coupled to a battery of the mobile phone, and wherein the charging interface is external to the mobile phone. 27. The mobile application according to claim 23, wherein a battery of the mobile phone is operably coupled to the battery of the mobile phone cover. 28. The mobile application according to claim 23, wherein when the passthrough charging parameter is selected, a battery of the mobile phone is operably coupled to the charging interface and the battery of the mobile phone cover is decoupled from the charging interface. 29. The mobile application according to claim 23, wherein when the passthrough charging parameter is deselected, a battery of the mobile phone is operably coupled to the charging interface and the battery of the mobile phone cover is operably coupled to the charging interface. 30. 
The mobile application according to claim 23, wherein the mobile application is configured to generate a warning if a flow of charge to the charging interface is below a threshold. 31. The mobile application according to claim 23, wherein when a flow of charge to the charging interface is below a threshold, the mobile application is configured to prioritize one of charging the battery of the mobile phone cover and charging a battery of the mobile phone, wherein the prioritization is selectable. 32. The mobile phone cover according to claim 23, wherein the passthrough charging parameter is selected via the mobile phone cover.
2,600
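The passthrough-charging claims above (claims 7, 8, and 10) reduce to a small routing decision: passthrough selected charges only the phone battery, deselected charges both, and a below-threshold supply charges only the user-prioritized battery. A minimal sketch of that logic, with a hypothetical function name and return convention:

```python
def route_charge(passthrough: bool, supply_ok: bool,
                 prioritize_phone: bool = True) -> set:
    """Return which batteries are coupled to the charging interface.

    passthrough=True  -> phone battery only; cover battery decoupled (claim 7).
    passthrough=False -> phone and cover batteries both coupled (claim 8).
    supply_ok=False   -> flow of charge is below threshold, so only the
                         selectably prioritized battery is charged (claim 10).
    """
    if passthrough:
        return {"phone"}
    if not supply_ok:
        return {"phone"} if prioritize_phone else {"cover"}
    return {"phone", "cover"}
```

The claims also call for a warning when the flow of charge drops below the threshold (claim 9); that would hang off the same `supply_ok` condition.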
10,960
10,960
16,141,444
2,645
A system is provided for characterizing a device under test (DUT) including an integrated antenna array. The system includes an optical subsystem having first and second focal planes, where the integrated antenna array is positioned in a beam overlap region extending from the first focal plane of the optical subsystem. The system further includes a measurement array having multiple array elements positioned substantially on the second focal plane of the optical subsystem, the measurement array being configured to receive signals from the DUT, and/or to transmit substantially collimated beams to the DUT, via the optical subsystem. Far-field characteristics of the DUT are measured, as well as angular dependence of each of the far-field characteristics.
1. A system for characterizing a device under test (DUT) comprising an integrated antenna array, the system comprising: an optical subsystem having first and second focal planes, wherein the integrated antenna array is positioned in a beam overlap region extending from the first focal plane of the optical subsystem; and a measurement array comprising a plurality of array elements positioned substantially on the second focal plane of the optical subsystem, the measurement array being configured to transmit substantially collimated beams to the DUT, and/or to receive signals from the DUT, via the optical subsystem, wherein the measurement array enables measurement of at least one far-field characteristic of the DUT and/or angular dependence of each of the at least one far-field characteristic. 2. The system of claim 1, wherein the at least one DUT far-field characteristic comprises at least one of an antenna profile, an effective isotropic radiated power (EIRP), a total radiated power of the integrated antenna array, an error-vector-magnitude (EVM), and an adjacent channel leakage ratio (ACLR). 3. The system of claim 1, further comprising: an anechoic chamber housing the DUT, the optical subsystem, and the measurement array. 4. The system of claim 1, wherein the plurality of array elements in the measurement array comprises a plurality of detectors. 5. The system of claim 4, wherein the plurality of detectors comprises a plurality of power sensing diodes configured to perform the measurements of the at least one DUT far-field characteristic. 6. The system of claim 1, wherein the beam overlap region extends from the first focal plane to a furthest distance at which the substantially collimated beams sufficiently overlap a surface area of the integrated antenna array. 7. 
The system of claim 6, further comprising: a switch; at least one receiver selectively connectable to each of the plurality of array elements via the switch; and a communication analyzer configured to perform the measurements of the at least one DUT far-field characteristic. 8. The system of claim 7, further comprising: a memory configured to store at least a portion of the measurements; and a display configured to display at least a portion of the measurements. 9. The system of claim 1, wherein the optical subsystem comprises a curved mirror. 10. The system of claim 1, wherein the optical subsystem comprises a lens. 11. A system for characterizing a device under test (DUT) comprising an integrated antenna array, the system comprising: a curved mirror having a first focal plane and a second focal plane, wherein the integrated antenna array is positioned in a beam overlap region extending from the first focal plane of the curved mirror; and a measurement array comprising a plurality of array elements positioned substantially on the second focal plane of the curved mirror, the plurality of array elements being configured to receive signals from the DUT and reflected by the curved mirror, and/or to transmit substantially collimated beams to the DUT and reflected by the curved mirror, wherein at least one far-field DUT characteristic is measured at the measurement array, enabling determination of the at least one DUT characteristic as a function of angle by the plurality of array elements. 12. The system of claim 11, wherein the beam overlap region extends from the first focal plane to a furthest distance at which the substantially collimated beams sufficiently overlap a surface area of the integrated antenna array. 13. The system of claim 11, further comprising: an anechoic chamber housing the DUT, the curved mirror and the measurement array. 14. The system of claim 11, wherein the curved mirror comprises a parabolic mirror. 15. 
A system for characterizing a device under test (DUT) comprising an integrated antenna array, the system comprising: a lens having a first focal plane on one side of the lens and a second focal plane on an opposite side of the lens, wherein the integrated antenna array is positioned in a beam overlap region extending from the first focal plane of the lens; and a measurement array comprising a plurality of array elements substantially positioned on the second focal plane of the lens, the plurality of array elements being configured to receive signals from the DUT through the lens, and/or to transmit substantially collimated beams to the DUT through the lens, wherein at least one far-field DUT characteristic is measured at the measurement array, enabling determination of the at least one DUT characteristic as a function of angle by the plurality of array elements. 16. The system of claim 15, wherein the beam overlap region extends from the first focal plane to a furthest distance at which the substantially collimated beams sufficiently overlap a surface area of the integrated antenna array. 17. The system of claim 15, further comprising: an attenuator positioned between the DUT and the one side of the lens, the attenuator being configured to mitigate reflections by the lens of the substantially collimated beams. 18. The system of claim 17, wherein the integrated antenna array comprises an M×N array of antennae, where M and N are positive integers, respectively, separated from one another by λ/2, wherein λ is a wavelength of the substantially collimated beams.
A system is provided for characterizing a device under test (DUT) including an integrated antenna array. The system includes an optical subsystem having first and second focal planes, where the integrated antenna array is positioned in a beam overlap region extending from the first focal plane of the optical subsystem. The system further includes a measurement array having multiple array elements positioned substantially on the second focal plane of the optical subsystem, the measurement array being configured to receive signals from the DUT, and/or to transmit substantially collimated beams to the DUT, via the optical subsystem. Far-field characteristics of the DUT are measured, as well as angular dependence of each of the far-field characteristics.1. A system for characterizing a device under test (DUT) comprising an integrated antenna array, the system comprising: an optical subsystem having first and second focal planes, wherein the integrated antenna array is positioned in a beam overlap region extending from the first focal plane of the optical subsystem; and a measurement array comprising a plurality of array elements positioned substantially on the second focal plane of the optical subsystem, the measurement array being configured to transmit substantially collimated beams to the DUT, and/or to receive signals from the DUT, via the optical subsystem, wherein the measurement array enables measurement of at least one far-field characteristic of the DUT and/or angular dependence of each of the at least one far-field characteristic. 2. The system of claim 1, wherein the at least one DUT far-field characteristic comprises at least one of an antenna profile, an effective isotropic radiated power (EIRP), a total radiated power of the integrated antenna array, an error-vector-magnitude (EVM), and an adjacent channel leakage ratio (ACLR). 3. 
The system of claim 1, further comprising: an anechoic chamber housing the DUT, the optical subsystem, and the measurement array. 4. The system of claim 1, wherein the plurality of array elements in the measurement array comprises a plurality of detectors. 5. The system of claim 4, wherein the plurality of detectors comprises a plurality of power sensing diodes configured to perform the measurements of the at least one DUT far-field characteristic. 6. The system of claim 1, wherein the beam overlap region extends from the first focal plane to a furthest distance at which the substantially collimated beams sufficiently overlap a surface area of the integrated antenna array. 7. The system of claim 6, further comprising: a switch; at least one receiver selectively connectable to each of the plurality of array elements via the switch; and a communication analyzer configured to perform the measurements of the at least one DUT far-field characteristic. 8. The system of claim 7, further comprising: a memory configured to store at least a portion of the measurements; and a display configured to display at least a portion of the measurements. 9. The system of claim 1, wherein the optical subsystem comprises a curved mirror. 10. The system of claim 1, wherein the optical subsystem comprises a lens. 11. 
A system for characterizing a device under test (DUT) comprising an integrated antenna array, the system comprising: a curved mirror having a first focal plane and a second focal plane, wherein the integrated antenna array is positioned in a beam overlap region extending from the first focal plane of the curved mirror; and a measurement array comprising a plurality of array elements positioned substantially on the second focal plane of the curved mirror, the plurality of array elements being configured to receive signals from the DUT and reflected by the curved mirror, and/or to transmit substantially collimated beams to the DUT and reflected by the curved mirror, wherein at least one far-field DUT characteristic is measured at the measurement array, enabling determination of the at least one DUT characteristic as a function of angle by the plurality of array elements. 12. The system of claim 11, wherein the beam overlap region extends from the first focal plane to a furthest distance at which the substantially collimated beams sufficiently overlap a surface area of the integrated antenna array. 13. The system of claim 11, further comprising: an anechoic chamber housing the DUT, the curved mirror and the measurement array. 14. The system of claim 11, wherein the curved mirror comprises a parabolic mirror. 15. 
A system for characterizing a device under test (DUT) comprising an integrated antenna array, the system comprising: a lens having a first focal plane on one side of the lens and a second focal plane on an opposite side of the lens, wherein the integrated antenna array is positioned in a beam overlap region extending from the first focal plane of the lens; and a measurement array comprising a plurality of array elements substantially positioned on the second focal plane of the lens, the plurality of array elements being configured to receive signals from the DUT through the lens, and/or to transmit substantially collimated beams to the DUT through the lens, wherein at least one far-field DUT characteristic is measured at the measurement array, enabling determination of the at least one DUT characteristic as a function of angle by the plurality of array elements. 16. The system of claim 15, wherein the beam overlap region extends from the first focal plane to a furthest distance at which the substantially collimated beams sufficiently overlap a surface area of the integrated antenna array. 17. The system of claim 15, further comprising: an attenuator positioned between the DUT and the one side of the lens, the attenuator being configured to mitigate reflections by the lens of the substantially collimated beams. 18. The system of claim 17, wherein the integrated antenna array comprises an M×N array of antennae, where M and N are positive integers, respectively, separated from one another by λ/2, wherein λ is a wavelength of the substantially collimated beams.
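The lens/mirror systems above measure a far-field DUT characteristic "as a function of angle" because each measurement-array element sits at a different offset on the focal plane, and an offset on the focal plane of a collimating optic corresponds to a plane-wave arrival angle. A minimal sketch of that mapping (the 10 mm pitch and 150 mm focal length are illustrative values, not from the claims):

```python
import math

def element_angle_deg(offset_m, focal_length_m):
    # An element offset from the optical axis on the focal plane of a lens
    # or parabolic mirror samples a plane wave arriving at this angle.
    return math.degrees(math.atan2(offset_m, focal_length_m))

# Illustrative 5-element measurement array: 10 mm pitch, 150 mm focal length.
angles = [element_angle_deg((i - 2) * 0.010, 0.150) for i in range(5)]
```

The on-axis element sees the boresight direction (0 degrees), and elements placed symmetrically about the axis sample symmetric angles, which is how the array recovers an angular cut of the DUT pattern in a single measurement.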
2,600
10,961
10,961
16,267,966
2,621
An actuating user interface for a media player or other electronic device is disclosed. According to one aspect, the user interface is a display device that can both display visual information and serve as a mechanical actuator to generate input signals. The display device, which displays visual information such as text, characters and graphics, may also act like a push or clickable button(s), a sliding toggle button or switch, a rotating dial or knob, a motion controlling device such as a joystick or navigation pad, and/or the like. According to another aspect, the user interface is an input device that includes a movable touch pad and is capable of detecting the movements of the movable touch pad so as to generate one or more distinct control signals. The control signals are used to perform actions in an electronic device operatively coupled to the input device.
1. A device comprising: a display capable of presenting a graphical user interface (GUI); a frame, a circular input element configured for rotational input relative to the frame, the rotational input generating first signals for scrolling the GUI presented on the display, and one or more sensors generating signals for zooming the GUI presented on the display. 2. The device of claim 1, wherein the circular input element is configured to be depressed relative to the frame in response to a force applied to the circular input element to thereby generate a second signal. 3. The device of claim 1, further comprising a processor configured to receive a first signal generated by the one or more sensors indicating a position of an input, receive a second signal generated by the circular input element indicating that the circular input element has been rotated, and generate a command in response to the first and second signals. 4. The device of claim 1, wherein the circular input element is configured to tilt relative to the frame in response to a force applied to a side of the circular input element. 5. The device of claim 4, wherein the tilt of the circular input element relative to the frame enables the circular input element to move in multiple degrees of freedom relative to the frame, one or more of the multiple degrees of freedom being associated with a function of the device, the tilting action of the circular input element relative to the frame enabling a user of the device to make a selection. 6. The device of claim 1, wherein the device comprises a media player. 7. The device of claim 1, wherein the circular input element comprises multiple spatially distinct input zones, each of the input zones having a corresponding indicator for generating a distinct user input signal when the circular input element is depressed in a region of one of the input zones. 8. The device of claim 7, wherein one of the input zones corresponds to selection of a media file. 9. 
The device of claim 1, wherein the circular input element comprises at least four spatially distinct input zones. 10. The device of claim 1, wherein the circular input element generates signals based on a polar coordinate system. 11. The device of claim 1, wherein the circular input element is configured to pivot about a first contact between the circular input element and the frame when a force is applied to the circular input element in a first zone located on a side of the circular input element opposite the first contact, and wherein the circular input element is configured to pivot about a second contact between the circular input element and the frame when a force is applied to the circular input element in a second zone located on a side of the circular input element opposite the second contact. 12. A method of controlling a display of a device having a circular input element and one or more sensors, comprising: rotating the circular input element relative to a frame of the device to scroll a displayed GUI; and sensing one or more touch inputs to zoom the displayed GUI. 13. The method of claim 12, further comprising depressing the circular input element relative to the frame in response to a force applied to the circular input element to thereby generate a button signal. 14. The method of claim 12, further comprising: receiving a first signal generated by the one or more sensors indicating a position of an input, receiving a second signal generated by the circular input element indicating that the circular input element has been rotated, and generating a command in response to the first and second signals. 15. The method of claim 12, further comprising: applying a force to a side of the circular input element to tilt the circular input element relative to the frame, and generating an input signal from the applied force. 16. 
The method of claim 15, further comprising tilting the circular input element to move the circular input element in one or more of multiple degrees of freedom relative to the frame, one or more of the multiple degrees of freedom being associated with a function of the device, the tilting of the circular input element relative to the frame enabling a user of the device to make a selection. 17. The method of claim 12, further comprising depressing the circular input element in a region of one of a plurality of spatially distinct input zones on the circular input element and generating a distinct user input signal when the circular input element is depressed in the region of one of the plurality of input zones. 18. The method of claim 17, further comprising selecting a media file upon depressing the touch sensitive surface at one of the input zones. 19. The method of claim 12, further comprising generating signals at the circular input element based on a polar coordinate system. 20. The method of claim 12, further comprising pivoting the circular input element about a first contact between the circular input element and the frame when a force is applied to the circular input element in a first zone located on a side of the circular input element opposite the first contact, and pivoting the circular input element about a second contact between the circular input element and the frame when a force is applied to the circular input element in a second zone located on a side of the circular input element opposite the second contact.
An actuating user interface for a media player or other electronic device is disclosed. According to one aspect, the user interface is a display device that can both display visual information and serve as a mechanical actuator to generate input signals. The display device, which displays visual information such as text, characters and graphics, may also act like a push or clickable button(s), a sliding toggle button or switch, a rotating dial or knob, a motion controlling device such as a joystick or navigation pad, and/or the like. According to another aspect, the user interface is an input device that includes a movable touch pad and is capable of detecting the movements of the movable touch pad so as to generate one or more distinct control signals. The control signals are used to perform actions in an electronic device operatively coupled to the input device.1. A device comprising: a display capable of presenting a graphical user interface (GUI); a frame, a circular input element configured for rotational input relative to the frame, the rotational input generating first signals for scrolling the GUI presented on the display, and one or more sensors generating signals for zooming the GUI presented on the display. 2. The device of claim 1, wherein the circular input element is configured to be depressed relative to the frame in response to a force applied to the circular input element to thereby generate a second signal. 3. The device of claim 1, further comprising a processor configured to receive a first signal generated by the one or more sensors indicating a position of an input, receive a second signal generated by the circular input element indicating that the circular input element has been rotated, and generate a command in response to the first and second signals. 4. 
The device of claim 1, wherein the circular input element is configured to tilt relative to the frame in response to a force applied to a side of the circular input element. 5. The device of claim 4, wherein the tilt of the circular input element relative to the frame enables the circular input element to move in multiple degrees of freedom relative to the frame, one or more of the multiple degrees of freedom being associated with a function of the device, the tilting action of the circular input element relative to the frame enabling a user of the device to make a selection. 6. The device of claim 1, wherein the device comprises a media player. 7. The device of claim 1, wherein the circular input element comprises multiple spatially distinct input zones, each of the input zones having a corresponding indicator for generating a distinct user input signal when the circular input element is depressed in a region of one of the input zones. 8. The device of claim 7, wherein one of the input zones corresponds to selection of a media file. 9. The device of claim 1, wherein the circular input element comprises at least four spatially distinct input zones. 10. The device of claim 1, wherein the circular input element generates signals based on a polar coordinate system. 11. The device of claim 1, wherein the circular input element is configured to pivot about a first contact between the circular input element and the frame when a force is applied to the circular input element in a first zone located on a side of the circular input element opposite the first contact, and wherein the circular input element is configured to pivot about a second contact between the circular input element and the frame when a force is applied to the circular input element in a second zone located on a side of the circular input element opposite the second contact. 12. 
A method of controlling a display of a device having a circular input element and one or more sensors, comprising: rotating the circular input element relative to a frame of the device to scroll a displayed GUI; and sensing one or more touch inputs to zoom the displayed GUI. 13. The method of claim 12, further comprising depressing the circular input element relative to the frame in response to a force applied to the circular input element to thereby generate a button signal. 14. The method of claim 12, further comprising: receiving a first signal generated by the one or more sensors indicating a position of an input, receiving a second signal generated by the circular input element indicating that the circular input element has been rotated, and generating a command in response to the first and second signals. 15. The method of claim 12, further comprising: applying a force to a side of the circular input element to tilt the circular input element relative to the frame, and generating an input signal from the applied force. 16. The method of claim 15, further comprising tilting the circular input element to move the circular input element in one or more of multiple degrees of freedom relative to the frame, one or more of the multiple degrees of freedom being associated with a function of the device, the tilting of the circular input element relative to the frame enabling a user of the device to make a selection. 17. The method of claim 12, further comprising depressing the circular input element in a region of one of a plurality of spatially distinct input zones on the circular input element and generating a distinct user input signal when the circular input element is depressed in the region of one of the plurality of input zones. 18. The method of claim 17, further comprising selecting a media file upon depressing the touch sensitive surface at one of the input zones. 19. 
The method of claim 12, further comprising generating signals at the circular input element based on a polar coordinate system. 20. The method of claim 12, further comprising pivoting the circular input element about a first contact between the circular input element and the frame when a force is applied to the circular input element in a first zone located on a side of the circular input element opposite the first contact, and pivoting the circular input element about a second contact between the circular input element and the frame when a force is applied to the circular input element in a second zone located on a side of the circular input element opposite the second contact.
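Claims 10 and 19 above have the circular input element generate signals "based on a polar coordinate system": successive touch positions on the ring are reduced to an angular delta, which drives scrolling. A minimal sketch of that reduction (the function name and the wrap-around handling are illustrative, not from the claims):

```python
import math

def scroll_delta(prev_xy, cur_xy):
    # Convert two successive touch positions on the circular input element
    # into a rotation delta (radians) via polar coordinates.
    prev_theta = math.atan2(prev_xy[1], prev_xy[0])
    cur_theta = math.atan2(cur_xy[1], cur_xy[0])
    delta = cur_theta - prev_theta
    # Wrap into (-pi, pi] so a touch crossing the -x axis does not jump.
    while delta <= -math.pi:
        delta += 2 * math.pi
    while delta > math.pi:
        delta -= 2 * math.pi
    return delta
```

A quarter-turn counterclockwise from (1, 0) to (0, 1) yields +pi/2; the sign of the delta distinguishes scroll-up from scroll-down.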
2,600
10,962
10,962
16,367,447
2,645
Event data recording for a road vehicle to support event reconstruction after a trigger event includes a snapshot of the wireless environment within which the vehicle resides. Radio frequency (RF) signals are received from external sources in a vicinity of the vehicle via a plurality of antennas connected to a plurality of receivers. Wireless metadata is compiled based on the detected RF signals. The compiled metadata is timestamped, and then the timestamped metadata is queued in a buffer memory. A trigger event is detected indicating occurrence of an event for subsequent reconstruction. Respective metadata is transferred from the buffer memory to a non-volatile memory upon occurrence of the trigger event. The wireless metadata may preferably include identifiers or signal strength of WiFi or Bluetooth devices, active cellular telephone channels, detected DSRC messages, and spectrum data of EMI sources.
1. Vehicular apparatus in a road vehicle comprising: a plurality of wireless receivers detecting respective radio frequency (RF) signals transmitted by external sources, wherein the plurality of wireless receivers includes a wideband receiver for monitoring wideband electromagnetic interference (EMI); a central controller for compiling wireless metadata based on the detected RF signals to characterize an RF wireless environment around the vehicle, wherein the compiled metadata includes a spectrum of the monitored EMI; a buffer memory for queueing newly compiled metadata together with respective timestamps identifying times of collection of the metadata; and a non-volatile memory for longer term storage of respective metadata in the buffer memory upon occurrence of a trigger event. 2. The apparatus of claim 1 wherein the plurality of wireless receivers includes a WiFi receiver, and wherein the compiled metadata includes a network identifier of an external source. 3. The apparatus of claim 2 wherein the WiFi receiver scans a plurality of frequency channels for active WiFi devices. 4. The apparatus of claim 1 wherein the plurality of wireless receivers includes a cellular receiver, and wherein the compiled metadata includes identification of at least one cellular channel having active transmission. 5. The apparatus of claim 1 wherein the plurality of wireless receivers includes a DSRC receiver, and wherein the compiled metadata includes contents of a detected DSRC message and a source identifier. 6. (canceled) 7. The apparatus of claim 1 wherein the plurality of wireless receivers includes a Bluetooth transceiver, wherein the Bluetooth transceiver periodically sends a presence request, and wherein the compiled metadata includes a listing of detected Bluetooth devices. 8. The apparatus of claim 1 wherein the compiled metadata includes a measured signal strength for an RF signal detected by a receiver. 9. 
The apparatus of claim 1 wherein one of the receivers detects a respective RF signal via a plurality of antennas, and wherein the compiled metadata includes a respective measured signal strength of the respective RF signal derived from each respective antenna. 10. The apparatus of claim 1 wherein the compiled metadata includes a detected location of a respective source of RF signals. 11. The apparatus of claim 1 wherein the buffer memory is comprised of a ring buffer. 12. A method of data recording in a road vehicle, comprising the steps of: receiving radio frequency (RF) signals transmitted by external sources in a vicinity of the vehicle via a plurality of antennas connected to a plurality of receivers wherein the plurality of wireless receivers includes a wideband receiver for monitoring wideband electromagnetic interference (EMI); compiling wireless metadata based on the detected RF signals to characterize an RF wireless environment around the vehicle, wherein the compiled metadata includes a spectrum of the monitored EMI; timestamping the compiled metadata; queueing the timestamped metadata in a buffer memory; detecting a trigger event indicating occurrence of an event for subsequent reconstruction; and transferring respective metadata from the buffer memory to a non-volatile memory upon occurrence of the trigger event. 13. The method of claim 12 wherein the compiled metadata includes a measured signal strength for a respective RF signal detected by a receiver. 14. The method of claim 12 wherein a respective RF signal is detected via a plurality of antennas, and wherein the compiled metadata includes a respective measured signal strength of the respective RF signal derived from each respective antenna. 15. The method of claim 12 wherein the compiled metadata includes a detected location of a respective source of RF signals. 16. 
The method of claim 12 wherein the plurality of receivers includes a WiFi receiver, and wherein the compiled metadata includes a network identifier of an external source. 17. The method of claim 12 wherein the plurality of receivers includes a cellular receiver, and wherein the compiled metadata includes identification of at least one cellular channel having active transmission. 18. The method of claim 12 wherein the plurality of receivers includes a DSRC receiver, and wherein the compiled metadata includes contents of a detected DSRC message and a source identifier. 19. (canceled) 20. The method of claim 12 wherein the plurality of receivers includes a Bluetooth transceiver, wherein the Bluetooth transceiver periodically sends a presence request, and wherein the compiled metadata includes a listing of detected Bluetooth devices.
Event data recording for a road vehicle to support event reconstruction after a trigger event includes a snapshot of the wireless environment within which the vehicle resides. Radio frequency (RF) signals are received from external sources in a vicinity of the vehicle via a plurality of antennas connected to a plurality of receivers. Wireless metadata is compiled based on the detected RF signals. The compiled metadata is timestamped, and then the timestamped metadata is queued in a buffer memory. A trigger event is detected indicating occurrence of an event for subsequent reconstruction. Respective metadata is transferred from the buffer memory to a non-volatile memory upon occurrence of the trigger event. The wireless metadata may preferably include identifiers or signal strength of WiFi or Bluetooth devices, active cellular telephone channels, detected DSRC messages, and spectrum data of EMI sources.1. Vehicular apparatus in a road vehicle comprising: a plurality of wireless receivers detecting respective radio frequency (RF) signals transmitted by external sources, wherein the plurality of wireless receivers includes a wideband receiver for monitoring wideband electromagnetic interference (EMI); a central controller for compiling wireless metadata based on the detected RF signals to characterize an RF wireless environment around the vehicle, wherein the compiled metadata includes a spectrum of the monitored EMI; a buffer memory for queueing newly compiled metadata together with respective timestamps identifying times of collection of the metadata; and a non-volatile memory for longer term storage of respective metadata in the buffer memory upon occurrence of a trigger event. 2. The apparatus of claim 1 wherein the plurality of wireless receivers includes a WiFi receiver, and wherein the compiled metadata includes a network identifier of an external source. 3. 
The apparatus of claim 2 wherein the WiFi receiver scans a plurality of frequency channels for active WiFi devices. 4. The apparatus of claim 1 wherein the plurality of wireless receivers includes a cellular receiver, and wherein the compiled metadata includes identification of at least one cellular channel having active transmission. 5. The apparatus of claim 1 wherein the plurality of wireless receivers includes a DSRC receiver, and wherein the compiled metadata includes contents of a detected DSRC message and a source identifier. 6. (canceled) 7. The apparatus of claim 1 wherein the plurality of wireless receivers includes a Bluetooth transceiver, wherein the Bluetooth transceiver periodically sends a presence request, and wherein the compiled metadata includes a listing of detected Bluetooth devices. 8. The apparatus of claim 1 wherein the compiled metadata includes a measured signal strength for an RF signal detected by a receiver. 9. The apparatus of claim 1 wherein one of the receivers detects a respective RF signal via a plurality of antennas, and wherein the compiled metadata includes a respective measured signal strength of the respective RF signal derived from each respective antenna. 10. The apparatus of claim 1 wherein the compiled metadata includes a detected location of a respective source of RF signals. 11. The apparatus of claim 1 wherein the buffer memory is comprised of a ring buffer. 12. 
A method of data recording in a road vehicle, comprising the steps of: receiving radio frequency (RF) signals transmitted by external sources in a vicinity of the vehicle via a plurality of antennas connected to a plurality of receivers wherein the plurality of wireless receivers includes a wideband receiver for monitoring wideband electromagnetic interference (EMI); compiling wireless metadata based on the detected RF signals to characterize an RF wireless environment around the vehicle, wherein the compiled metadata includes a spectrum of the monitored EMI; timestamping the compiled metadata; queueing the timestamped metadata in a buffer memory; detecting a trigger event indicating occurrence of an event for subsequent reconstruction; and transferring respective metadata from the buffer memory to a non-volatile memory upon occurrence of the trigger event. 13. The method of claim 12 wherein the compiled metadata includes a measured signal strength for a respective RF signal detected by a receiver. 14. The method of claim 12 wherein a respective RF signal is detected via a plurality of antennas, and wherein the compiled metadata includes a respective measured signal strength of the respective RF signal derived from each respective antenna. 15. The method of claim 12 wherein the compiled metadata includes a detected location of a respective source of RF signals. 16. The method of claim 12 wherein the plurality of receivers includes a WiFi receiver, and wherein the compiled metadata includes a network identifier of an external source. 17. The method of claim 12 wherein the plurality of receivers includes a cellular receiver, and wherein the compiled metadata includes identification of at least one cellular channel having active transmission. 18. The method of claim 12 wherein the plurality of receivers includes a DSRC receiver, and wherein the compiled metadata includes contents of a detected DSRC message and a source identifier. 19. (canceled) 20. 
The method of claim 12 wherein the plurality of receivers includes a Bluetooth transceiver, wherein the Bluetooth transceiver periodically sends a presence request, and wherein the compiled metadata includes a listing of detected Bluetooth devices.
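The recording pipeline claimed above (timestamp, queue in a ring buffer, flush to non-volatile memory on a trigger event) can be sketched in a few lines; the class and member names are illustrative, and a Python list stands in for flash storage:

```python
from collections import deque
import time

class WirelessEDR:
    """Ring buffer of timestamped wireless metadata; a trigger event
    copies the buffered snapshot into longer-term storage."""

    def __init__(self, capacity=100):
        self.buffer = deque(maxlen=capacity)  # ring buffer (claim 11)
        self.nonvolatile = []                 # stands in for flash memory

    def record(self, metadata, ts=None):
        # Timestamp compiled metadata and queue it; the deque silently
        # drops the oldest entry once capacity is reached.
        self.buffer.append((ts if ts is not None else time.time(), metadata))

    def trigger(self):
        # On a trigger event, transfer the buffered snapshot to
        # non-volatile storage for subsequent event reconstruction.
        self.nonvolatile.extend(self.buffer)
        self.buffer.clear()
```

Because the deque is bounded, only the most recent window of the wireless environment survives a trigger, which matches the "snapshot" framing of the abstract.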
2,600
10,963
10,963
15,853,780
2,655
Natural language grammars interpret expressions at the conversational human-machine interfaces of devices. Under conditions favoring engagement, as specified in a unit of conversational code, the device initiates a discussion using one or more of TTS, images, video, audio, and animation depending on the device capabilities of screen and audio output. Conversational code units specify conditions based on conversation state, mood, and privacy. Grammars provide intents that cause calls to system functions. Units can provide scripts for guiding the conversation. The device, or supporting server system, can provide feedback to creators of the conversational code units for analysis and machine learning.
1. A method of delivering interactive experiences through a human-machine interface, the method comprising: delivering an introductory message defined in an interactive experience unit; receiving a natural language expression from a user after starting delivery of the introductory message; interpreting the natural language expression according to a natural language grammar defined within the interactive experience unit; and responsive to the natural language expression matching the grammar, determining an intent defined in the natural language grammar. 2. The method of claim 1 wherein: the introductory message is verbal; and the natural language expression is received as audible speech. 3. The method of claim 1 wherein the mode of user interaction is screenless. 4. The method of claim 1 wherein the introductory message comprises text with metadata for speech synthesis. 5. The method of claim 1 wherein the introductory message comprises conditional content that is conditioned by conversation state. 6. The method of claim 5 wherein the condition is a range. 7. The method of claim 1 further comprising detecting a mood, wherein delivering the introductory message is conditioned by a mood. 8. The method of claim 1 further comprising choosing an interactive experience unit with the highest message rate value, wherein message rate values are conditioned by conversation state. 9. The method of claim 1 further comprising taking an action indicated by the intent, wherein the action comprises a script with a plurality of conditional conversation paths. 10. The method of claim 1 further comprising determining an interface privacy level, wherein delivering the introductory message is conditioned by the privacy level. 11. 
A method of defining interactive experiences, the method comprising: specifying an introductory message; defining a natural language grammar comprising at least one intent; and associating the natural language grammar with an interactive experience unit, wherein, if a message server system interprets an expression, from a user, as matching the grammar, after starting delivery of the introductory message, the message server system determines at least one of the intents. 12. The method of claim 11 wherein the interactive experience is an audio experience. 13. The method of claim 11 further comprising executing an action indicated by the intent. 14. A system comprising: a natural language processor enabled to interpret natural language expressions according to a natural language grammar; means to deliver introductory messages of an experience unit having an associated natural language grammar, the means to deliver being in communication with the natural language processor; means to receive natural language expressions after delivering at least one of the introductory messages, the means to receive being in communication with the natural language processor; and means for storing definitions of interactive experience units, the means for storing being in communication with the natural language processor, and the interactive experience units comprising: introductory messages; and natural language grammars comprising intents that indicate actions. 15. The system of claim 14 further comprising: means to execute the actions, the means to execute being in communication with the natural language processor and the execution being conditioned by a natural language expression matching a grammar. 16. 
A method comprising: maintaining, for a conversational human-machine interface, a current conversation state variable; and analyzing an offer for a message in relation to the conversational human-machine interface, wherein: the offer indicates an interesting value of the conversation state variable; and the offer is influenced by the current conversation state variable having the interesting conversation state variable value. 17. The method of claim 16 wherein the current conversation state variable represents one or more keywords or related words. 18. The method of claim 16 wherein the current conversation state variable represents a domain. 19. The method of claim 16 wherein the interesting value is the result of a programmatic equation. 20. The method of claim 16 further comprising detecting a current mood value, wherein the offer is influenced by the current mood value. 21. The method of claim 16 further comprising receiving a current mood value, wherein the offer is influenced by the current mood value. 22. The method of claim 16 further comprising determining whether the human-machine interface is in a private listening environment, wherein the offer is conditioned on whether the human-machine interface is in a private listening environment. 23. The method of claim 22 wherein determining whether the human-machine interface is in a private listening environment is by means of detecting presence of people. 24. The method of claim 16, further comprising: performing voice activity detection; and delivering an introductory message of an associated experience unit, wherein delivering the introductory message is conditioned on no voice activity being detected. 25. 
A method of identifying a message opportunity comprising: monitoring natural language interactions at a human-machine interface; interpreting a natural language expression as a query that identifies a zero moment of truth intent; identifying a type of product or service referenced in the query; and alerting a message offer. 26. The method of claim 25 further comprising delivering an experience unit identified by the message offer. 27. The method of claim 25 further comprising delivering an introductory message associated with the message offer. 28. The method of claim 27, further comprising receiving a follow-up that matches a grammar.
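The core loop claimed for this application (deliver an introductory message, then interpret the user's expression against the experience unit's grammar to determine an intent) can be sketched with a toy unit; the unit contents, pattern set, and intent names below are hypothetical illustrations, not from the filing:

```python
import re

# Toy interactive experience unit: an introductory message plus a natural
# language grammar mapping matched expressions to intents (names invented).
EXPERIENCE_UNIT = {
    "intro": "Want to hear today's top story?",
    "grammar": [
        (re.compile(r"\b(yes|sure|ok(ay)?)\b", re.I), "PLAY_STORY"),
        (re.compile(r"\b(no|not now|later)\b", re.I), "DECLINE"),
    ],
}

def interpret(expression):
    # Return the first intent whose grammar pattern matches the expression,
    # or None when the expression matches no part of the unit's grammar.
    for pattern, intent in EXPERIENCE_UNIT["grammar"]:
        if pattern.search(expression):
            return intent
    return None
```

A real implementation would use a full natural-language grammar rather than regular expressions, but the contract is the same: a matched expression yields an intent that drives a system action.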
Natural language grammars interpret expressions at the conversational human-machine interfaces of devices. Under conditions favoring engagement, as specified in a unit of conversational code, the device initiates a discussion using one or more of TTS, images, video, audio, and animation depending on the device capabilities of screen and audio output. Conversational code units specify conditions based on conversation state, mood, and privacy. Grammars provide intents that cause calls to system functions. Units can provide scripts for guiding the conversation. The device, or supporting server system, can provide feedback to creators of the conversational code units for analysis and machine learning.1. A method of delivering interactive experiences through a human-machine interface, the method comprising: delivering an introductory message defined in an interactive experience unit; receiving a natural language expression from a user after starting delivery of the introductory message; interpreting the natural language expression according to a natural language grammar defined within the interactive experience unit; and responsive to the natural language expression matching the grammar, determining an intent defined in the natural language grammar. 2. The method of claim 1 wherein: the introductory message is verbal; and the natural language expression is received as audible speech. 3. The method of claim 1 wherein the mode of user interaction is screenless. 4. The method of claim 1 wherein the introductory message comprises text with metadata for speech synthesis. 5. The method of claim 1 wherein the introductory message comprises conditional content that is conditioned by conversation state. 6. The method of claim 5 wherein the condition is a range. 7. The method of claim 1 further comprising detecting a mood, wherein delivering the introductory message is conditioned by a mood. 8. 
The method of claim 1 further comprising choosing an interactive experience unit with the highest message rate value, wherein message rate values are conditioned by conversation state. 9. The method of claim 1 further comprising taking an action indicated by the intent, wherein the action comprises a script with a plurality of conditional conversation paths. 10. The method of claim 1 further comprising determining an interface privacy level, wherein delivering the introductory message is conditioned by the privacy level. 11. A method of defining interactive experiences, the method comprising: specifying an introductory message; defining a natural language grammar comprising at least one intent; and associating the natural language grammar with an interactive experience unit, wherein, if a message server system interprets an expression, from a user, as matching the grammar, after starting delivery of the introductory message, the message server system determines at least one of the intents. 12. The method of claim 11 wherein the interactive experience is an audio experience. 13. The method of claim 11 further comprising executing an action indicated by the intent. 14. A system comprising: a natural language processor enabled to interpret natural language expressions according to a natural language grammar; means to deliver introductory messages of an experience unit having an associated natural language grammar, the means to deliver being in communication with the natural language processor; means to receive natural language expressions after delivering at least one of the introductory messages, the means to deliver being in communication with the natural language processor; and means for storing definitions of interactive experience units, the means for storing being in communication with the natural language processor, and the interactive experience units comprising: introductory messages; and natural language grammars comprising intents that indicate actions. 15. 
The system of claim 14 further comprising: means to execute the actions, the means to execute being in communication with the natural language processor and the execution being conditioned by a natural language expression matching a grammar. 16. A method comprising: maintaining, for a conversational human-machine interface, a current conversation state variable; and analyzing an offer for a message in relation to the conversational human-machine interface, wherein: the offer indicates an interesting value of the conversation state variable; and the offer is influenced by the current conversation state variable having the interesting conversation state variable value. 17. The method of claim 16 wherein the current conversation state variable represents one or more keywords or related words. 18. The method of claim 16 wherein the current conversation state variable represents a domain. 19. The method of claim 16 wherein the interesting value is the result of a programmatic equation. 20. The method of claim 16 further comprising detecting a current mood value, wherein the offer is influenced by the current mood value. 21. The method of claim 16 further comprising receiving a current mood value, wherein the offer is influenced by the current mood value. 22. The method of claim 16 further comprising determining whether the human-machine interface is in a private listening environment, wherein the offer is conditioned on whether the human-machine interface is in a private listening environment. 23. The method of claim 22 wherein determining whether the human-machine interface is in a private listening environment is by means of detecting presence of people. 24. The method of claim 16, further comprising: performing voice activity detection; and delivering an introductory message of an associated experience unit, wherein delivering the introductory message is conditioned on no voice activity being detected. 25. 
A method of identifying a message opportunity comprising: monitoring natural language interactions at a human-machine interface; interpreting a natural language expression as a query that identifies a zero moment of truth intent; identifying a type of product or service referenced in the query; and alerting a message offer. 26. The method of claim 25 further comprising delivering an experience unit identified by the message offer. 27. The method of claim 25 further comprising delivering an introductory message associated with the message offer. 28. The method of claim 27, further comprising receiving a follow-up that matches a grammar.
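The claims above describe matching a user's natural language expression against a grammar defined in an interactive experience unit and, on a match, determining an intent. A minimal sketch of that matching step, with regex patterns standing in for the claimed grammar (the unit contents, patterns, and intent names are all hypothetical, not from the source):

```python
import re

# Hypothetical experience unit: an introductory message plus a "grammar"
# mapping regex patterns to intents (a simplification of the claimed
# natural language grammar comprising intents).
experience_unit = {
    "introductory_message": "Would you like to hear about today's specials?",
    "grammar": {
        r"\b(yes|sure|okay)\b": "ACCEPT",
        r"\b(no|not now)\b": "DECLINE",
    },
}

def interpret(expression, unit):
    """Return the intent whose pattern matches the expression, else None."""
    for pattern, intent in unit["grammar"].items():
        if re.search(pattern, expression.lower()):
            return intent
    return None

print(interpret("Yes, sure!", experience_unit))  # ACCEPT
```

A production grammar would be far richer than regexes, but the shape is the same: deliver the introductory message, receive the follow-up expression, and only take an action when the expression matches the unit's grammar.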
2,600
10,964
10,964
16,396,368
2,699
Methods and systems for managing a premises are described. A premises or devices at a premises may be associated with one or more premises zones. The one or more premises zones may be associated with corresponding content. If data is received from a device associated with a particular premises zone, then the content may be output. The content may be used to notify a user of an event, state change, or other indication associated with the particular premises zone.
1. (canceled) 2. A method comprising: receiving event data indicative of an event associated with a premises device located at a premises; determining, based on receiving the event data, that one or more of the event or the premises device is associated with a first premises zone of a plurality of premises zones; determining an audible indication, of a plurality of audible indications, based on an association between the first premises zone and the audible indication; and causing output of the determined audible indication. 3. The method of claim 2, wherein each of the plurality of premises zones is associated with one or more audible indications of the plurality of audible indications. 4. The method of claim 2, wherein each of the plurality of premises zones is associated with one or more corresponding zone types of a plurality of zone types, and wherein determining the audible indication comprises determining, based on a first zone type associated with the first premises zone, the audible indication. 5. The method of claim 4, wherein one or more of the first premises zone or the first zone type is indicative of at least one of a type of premises zone, a location within the premises, or a type of device. 6. The method of claim 2, further comprising receiving, via a network, from a computing device external to the premises, and based on user input indicating a selection of the audible indication, the audible indication. 7. The method of claim 2, wherein the event comprises one or more of a change of a state of the premises device, detection of an entry into the first premises zone, detection of an exit from the first premises zone, a sensor event, a door event, a window event, a motion detection event, or detection of a substance. 8. 
The method of claim 2, wherein causing output of the determined audible indication comprises causing one or more of: output, via a device located at the premises, of the audible indication or transmission, via a network, of data indicative of the audible indication. 9. The method of claim 2, wherein determining the audible indication comprises determining, based on at least one of the premises device, type information associated with the premises device, or the event, the audible indication. 10. The method of claim 2, wherein the audible indication comprises at least one of an audio tone, a sound, an audio alert, an audio file, an audio signal, or an audio item. 11. The method of claim 2, wherein the audible indication is one of one or more audible indications, of the plurality of audible indications, that are associated with the first premises zone, and wherein the audible indication is determined from among the one or more audible indications based on at least one of: an association of the event with the audible indication, a type of the event, whether the event is an open event or a close event, or whether the event is indicative of a specific activity or non-activity. 12. A method comprising: receiving event data indicative of an event associated with a premises device located at a premises; determining, based on receiving the event data, a zone type associated with the premises device; determining an audible indication, of a plurality of audible indications, based on an association between the zone type and the audible indication; and causing output of the determined audible indication. 13. The method of claim 12, wherein the zone type is one of a plurality of zone types, and wherein each of the plurality of zone types is associated with one or more audible indications of the plurality of audible indications. 14. 
The method of claim 12, wherein determining the zone type associated with the premises device comprises determining, based on the event data, a zone associated with the premises device and determining, based on the zone, the zone type. 15. The method of claim 12, wherein the zone type is indicative of at least one of a type of premises zone, a location within the premises, or a type of device. 16. The method of claim 12, further comprising receiving, via a network, from a computing device external to the premises, and based on user input indicating a selection of the audible indication, the audible indication. 17. The method of claim 12, wherein the event comprises one or more of a change of a state of the premises device, detection of an entry into a premises zone associated with the zone type, detection of an exit from the premises zone, a sensor event, a door event, a window event, a motion detection event, or detection of a substance. 18. The method of claim 12, wherein causing output of the determined audible indication comprises causing one or more of: output, via a device located at the premises, of the audible indication or transmission, via a network, of data indicative of the audible indication. 19. The method of claim 12, wherein determining the audible indication comprises determining, based on at least one of the premises device or the event, the audible indication. 20. The method of claim 12, wherein the audible indication comprises at least one of an audio tone, a sound, an audio alert, an audio file, an audio signal, or an audio item. 21. 
The method of claim 12, wherein the audible indication is one of one or more audible indications, of the plurality of audible indications, that are associated with the zone type, and wherein the audible indication is determined from among the one or more audible indications based on at least one of: an association of the event with the audible indication, a type of the event, whether the event is an open event or a close event, or whether the event is indicative of a specific activity or non-activity. 22. A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: receive event data indicative of an event associated with a premises device located at a premises; determine, based on the event data, that one or more of the event or the premises device is associated with a first premises zone of a plurality of premises zones; determine an audible indication, of a plurality of audible indications, based on an association between the first premises zone and the audible indication; and cause output of the determined audible indication. 23. The device of claim 22, wherein each of the plurality of premises zones is associated with one or more audible indications of the plurality of audible indications. 24. The device of claim 22, wherein each of the plurality of premises zones is associated with one or more corresponding zone types of a plurality of zone types, and wherein the instructions that, when executed by the one or more processors, cause the device to determine the audible indication comprises instructions that, when executed by the one or more processors, cause the device to determine, based on a first zone type associated with the first premises zone, the audible indication. 25. 
The device of claim 24, wherein one or more of the first premises zone or the first zone type is indicative of at least one of a type of premises zone, a location within the premises, or a type of device. 26. The device of claim 22, wherein the instructions, when executed by the one or more processors, further cause the device to receive, via a network, from a computing device external to the premises, and based on user input indicating a selection of the audible indication, the audible indication. 27. The device of claim 22, wherein the event comprises one or more of a change of a state of the premises device, detection of an entry into the first premises zone, detection of an exit from the first premises zone, a sensor event, a door event, a window event, a motion detection event, or detection of a substance. 28. The device of claim 22, wherein the instructions that, when executed by the one or more processors, cause the device to determine the audible indication comprises instructions that, when executed by the one or more processors, cause the device to determine, based on at least one of the premises device, zone type information associated with the premises device, or the event, the audible indication. 29. The device of claim 22, wherein the audible indication comprises at least one of an audio tone, a sound, an audio alert, an audio file, an audio signal, or an audio item. 30. The device of claim 22, wherein the audible indication is one of one or more audible indications, of the plurality of audible indications, that are associated with the first premises zone, and wherein the audible indication is determined from among the one or more audible indications based on at least one of: an association of the event with the audible indication, a type of the event, whether the event is an open event or a close event, or whether the event is indicative of a specific activity or non-activity. 31. 
A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: receive event data indicative of an event associated with a premises device located at a premises; determine, based on the event data, a zone type associated with the premises device; determine an audible indication, of a plurality of audible indications, based on an association of the zone type and the audible indication; and cause output of the determined audible indication. 32. The device of claim 31, wherein the zone type is one of a plurality of zone types, and wherein each of the plurality of zone types is associated with one or more audible indications of the plurality of audible indications. 33. The device of claim 31, wherein the instructions that, when executed by the one or more processors, cause the device to determine the zone type associated with the premises device comprises instructions that, when executed by the one or more processors, cause the device to determine, based on the event data, a first zone of a plurality of zones associated with the premises device and determine, based on the zone, the zone type. 34. The device of claim 31, wherein the zone type is indicative of at least one of a type of premises zone, a location within the premises, or a type of device. 35. The device of claim 31, wherein the instructions, when executed by the one or more processors, further cause the device to receive, via a network, from a computing device external to the premises, and based on user input indicating a selection of the audible indication, the audible indication. 36. The device of claim 31, wherein the event comprises one or more of a change of a state of the premises device, a detection of an entry into a premises zone associated with the zone type, a detection of an exit from the premises zone, a sensor event, a door event, a window event, a motion detection event, or detection of a substance. 37. 
The device of claim 31, wherein the instructions that, when executed by the one or more processors, cause the device to determine the audible indication comprises instructions that, when executed by the one or more processors, cause the device to determine, based on at least one of the premises device or the event, the audible indication. 38. The device of claim 31, wherein the audible indication comprises at least one of an audio tone, a sound, an audio alert, an audio file, an audio signal, or an audio item. 39. The device of claim 31, wherein the audible indication is one of one or more audible indications, of the plurality of audible indications, that are associated with the zone type, and wherein the audible indication is determined from among the one or more audible indications based on at least one of: an association of the event with the audible indication, a type of the event, whether the event is an open event or a close event, or whether the event is indicative of a specific activity or non-activity.
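The premises-management claims repeatedly turn on one association: an event arrives from a device in a zone, and the zone (or zone type) plus the event type selects which audible indication to output. A minimal sketch of that lookup, with zone names, event types, and sound files all hypothetical:

```python
# Hypothetical mapping of premises zones to audible indications, sketching
# the claimed "association between the first premises zone and the audible
# indication". Zone names and sound files are illustrative only.
ZONE_AUDIBLE_INDICATIONS = {
    "front_door": {"open": "chime.wav", "close": "click.wav"},
    "garage": {"open": "beep.wav", "close": "beep.wav"},
}

def audible_indication_for(event):
    """Select the audible indication for an event based on its zone and type."""
    zone = ZONE_AUDIBLE_INDICATIONS.get(event["zone"])
    if zone is None:
        return None  # no indication associated with this zone
    return zone.get(event["type"])

print(audible_indication_for({"zone": "front_door", "type": "open"}))  # chime.wav
```

The zone-type variant of the claims works the same way, with an extra step: resolve the device's zone to a zone type first, then index the indication table by zone type instead of by individual zone.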
Methods and systems for managing a premises are described. A premises or devices at a premises may be associated with one or more premises zones. The one or more premises zones may be associated with corresponding content. If data is received from a device associated with a particular premises zone, then the content may be output. The content may be used to notify a user of an event, state change, or other indication associated with the particular premises zone.1. (canceled) 2. A method comprising: receiving event data indicative of an event associated with a premises device located at a premises; determining, based on receiving the event data, that one or more of the event or the premises device is associated with a first premises zone of a plurality of premises zones; determining an audible indication, of a plurality of audible indications, based on an association between the first premises zone and the audible indication; and causing output of the determined audible indication. 3. The method of claim 2, wherein each of the plurality of premises zones is associated with one or more audible indications of the plurality of audible indications. 4. The method of claim 2, wherein each of the plurality of premises zones is associated with one or more corresponding zone types of a plurality of zone types, and wherein determining the audible indication comprises determining, based on a first zone type associated with the first premises zone, the audible indication. 5. The method of claim 4, wherein one or more of the first premises zone or the first zone type is indicative of at least one of a type of premises zone, a location within the premises, or a type of device. 6. The method of claim 2, further comprising receiving, via a network, from a computing device external to the premises, and based on user input indicating a selection of the audible indication, the audible indication. 7. 
The method of claim 2, wherein the event comprises one or more of a change of a state of the premises device, detection of an entry into the first premises zone, detection of an exit from the first premises zone, a sensor event, a door event, a window event, a motion detection event, or detection of a substance. 8. The method of claim 2, wherein causing output of the determined audible indication comprises causing one or more of: output, via a device located at the premises, of the audible indication or transmission, via a network, of data indicative of the audible indication. 9. The method of claim 2, wherein determining the audible indication comprises determining, based on at least one of the premises device, type information associated with the premises device, or the event, the audible indication. 10. The method of claim 2, wherein the audible indication comprises at least one of an audio tone, a sound, an audio alert, an audio file, an audio signal, or an audio item. 11. The method of claim 2, wherein the audible indication is one of one or more audible indications, of the plurality of audible indications, that are associated with the first premises zone, and wherein the audible indication is determined from among the one or more audible indications based on at least one of: an association of the event with the audible indication, a type of the event, whether the event is an open event or a close event, or whether the event is indicative of a specific activity or non-activity. 12. A method comprising: receiving event data indicative of an event associated with a premises device located at a premises; determining, based on receiving the event data, a zone type associated with the premises device; determining an audible indication, of a plurality of audible indications, based on an association between the zone type and the audible indication; and causing output of the determined audible indication. 13. 
The method of claim 12, wherein the zone type is one of a plurality of zone types, and wherein each of the plurality of zone types is associated with one or more audible indications of the plurality of audible indications. 14. The method of claim 12, wherein determining the zone type associated with the premises device comprises determining, based on the event data, a zone associated with the premises device and determining, based on the zone, the zone type. 15. The method of claim 12, wherein the zone type is indicative of at least one of a type of premises zone, a location within the premises, or a type of device. 16. The method of claim 12, further comprising receiving, via a network, from a computing device external to the premises, and based on user input indicating a selection of the audible indication, the audible indication. 17. The method of claim 12, wherein the event comprises one or more of a change of a state of the premises device, detection of an entry into a premises zone associated with the zone type, detection of an exit from the premises zone, a sensor event, a door event, a window event, a motion detection event, or detection of a substance. 18. The method of claim 12, wherein causing output of the determined audible indication comprises causing one or more of: output, via a device located at the premises, of the audible indication or transmission, via a network, of data indicative of the audible indication. 19. The method of claim 12, wherein determining the audible indication comprises determining, based on at least one of the premises device or the event, the audible indication. 20. The method of claim 12, wherein the audible indication comprises at least one of an audio tone, a sound, an audio alert, an audio file, an audio signal, or an audio item. 21. 
The method of claim 12, wherein the audible indication is one of one or more audible indications, of the plurality of audible indications, that are associated with the zone type, and wherein the audible indication is determined from among the one or more audible indications based on at least one of: an association of the event with the audible indication, a type of the event, whether the event is an open event or a close event, or whether the event is indicative of a specific activity or non-activity. 22. A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: receive event data indicative of an event associated with a premises device located at a premises; determine, based on the event data, that one or more of the event or the premises device is associated with a first premises zone of a plurality of premises zones; determine an audible indication, of a plurality of audible indications, based on an association between the first premises zone and the audible indication; and cause output of the determined audible indication. 23. The device of claim 22, wherein each of the plurality of premises zones is associated with one or more audible indications of the plurality of audible indications. 24. The device of claim 22, wherein each of the plurality of premises zones is associated with one or more corresponding zone types of a plurality of zone types, and wherein the instructions that, when executed by the one or more processors, cause the device to determine the audible indication comprises instructions that, when executed by the one or more processors, cause the device to determine, based on a first zone type associated with the first premises zone, the audible indication. 25. 
The device of claim 24, wherein one or more of the first premises zone or the first zone type is indicative of at least one of a type of premises zone, a location within the premises, or a type of device. 26. The device of claim 22, wherein the instructions, when executed by the one or more processors, further cause the device to receive, via a network, from a computing device external to the premises, and based on user input indicating a selection of the audible indication, the audible indication. 27. The device of claim 22, wherein the event comprises one or more of a change of a state of the premises device, detection of an entry into the first premises zone, detection of an exit from the first premises zone, a sensor event, a door event, a window event, a motion detection event, or detection of a substance. 28. The device of claim 22, wherein the instructions that, when executed by the one or more processors, cause the device to determine the audible indication comprises instructions that, when executed by the one or more processors, cause the device to determine, based on at least one of the premises device, zone type information associated with the premises device, or the event, the audible indication. 29. The device of claim 22, wherein the audible indication comprises at least one of an audio tone, a sound, an audio alert, an audio file, an audio signal, or an audio item. 30. The device of claim 22, wherein the audible indication is one of one or more audible indications, of the plurality of audible indications, that are associated with the first premises zone, and wherein the audible indication is determined from among the one or more audible indications based on at least one of: an association of the event with the audible indication, a type of the event, whether the event is an open event or a close event, or whether the event is indicative of a specific activity or non-activity. 31. 
A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: receive event data indicative of an event associated with a premises device located at a premises; determine, based on the event data, a zone type associated with the premises device; determine an audible indication, of a plurality of audible indications, based on an association of the zone type and the audible indication; and cause output of the determined audible indication. 32. The device of claim 31, wherein the zone type is one of a plurality of zone types, and wherein each of the plurality of zone types is associated with one or more audible indications of the plurality of audible indications. 33. The device of claim 31, wherein the instructions that, when executed by the one or more processors, cause the device to determine the zone type associated with the premises device comprises instructions that, when executed by the one or more processors, cause the device to determine, based on the event data, a first zone of a plurality of zones associated with the premises device and determine, based on the zone, the zone type. 34. The device of claim 31, wherein the zone type is indicative of at least one of a type of premises zone, a location within the premises, or a type of device. 35. The device of claim 31, wherein the instructions, when executed by the one or more processors, further cause the device to receive, via a network, from a computing device external to the premises, and based on user input indicating a selection of the audible indication, the audible indication. 36. The device of claim 31, wherein the event comprises one or more of a change of a state of the premises device, a detection of an entry into a premises zone associated with the zone type, a detection of an exit from the premises zone, a sensor event, a door event, a window event, a motion detection event, or detection of a substance. 37. 
The device of claim 31, wherein the instructions that, when executed by the one or more processors, cause the device to determine the audible indication comprises instructions that, when executed by the one or more processors, cause the device to determine, based on at least one of the premises device or the event, the audible indication. 38. The device of claim 31, wherein the audible indication comprises at least one of an audio tone, a sound, an audio alert, an audio file, an audio signal, or an audio item. 39. The device of claim 31, wherein the audible indication is one of one or more audible indications, of the plurality of audible indications, that are associated with the zone type, and wherein the audible indication is determined from among the one or more audible indications based on at least one of: an association of the event with the audible indication, a type of the event, whether the event is an open event or a close event, or whether the event is indicative of a specific activity or non-activity.
2,600
10,965
10,965
16,595,974
2,687
A universal remote control (URC) is programmed to control a particular type and make of electronic consumer device using a graphical user interface. A plurality of images is displayed on the user-interface. Each image of the plurality of images is a digital photograph of an electronic consumer device or a remote control device usable to control the corresponding electronic consumer device. A user selects the digital photograph of the particular type and make of electronic consumer device or its corresponding remote control device. Codeset information associated with the selected device is transmitted to the URC such that the URC is programmed to control the selected device. If the codeset information is a codeset identifier, then it is displayed on the user interface. The user enters the codeset identifier into the URC such that the URC is programmed to control the selected device.
1. A home entertainment device, comprising: a receiver, a transmitter, a processing device in communication with the receiver and the transmitter, and a memory having stored thereon a set of instructions which, when executed by the processing device, cause the home entertainment device to use a data received via the receiver that functions to identify a consumer electronic device to retrieve from a database having a plurality of device descriptions a one of the plurality of device descriptions for the consumer electronic device and to transmit to a controlling device via use of the transmitter a communication having the one of the plurality of device descriptions for use in configuring the controlling device to transmit command communications to the consumer electronic device. 2. The home entertainment device as recited in claim 1, wherein the one of the plurality of device descriptions comprises data indicative of at least a brand and a model of the consumer electronic device. 3. The home entertainment device as recited in claim 2, wherein the one of the plurality of device descriptions further comprises data indicative of an operational behavior of the consumer electronic device. 4. The home entertainment device as recited in claim 3, wherein the data indicative of the operational behavior of the consumer electronic device comprises data indicative of how long it will take the consumer electronic device to complete a power on operation following receipt of a “power” signal. 5. The home entertainment device as recited in claim 3, wherein the data indicative of the operational behavior of the consumer electronic device comprises data indicative of a need to operate an “enter” key of the controlling device following operations of digit keys of the controlling device when the controlling device is operated for channel selection purposes. 6. 
The home entertainment device as recited in claim 1, wherein the data received via the receiver that functions to identify the consumer electronic device comprises menu selection command data received from a device operable to communicate with the home entertainment device via the receiver. 7. The home entertainment device as recited in claim 6, wherein the menu selection command data received from the device operable to communicate with the home entertainment device via the receiver comprises data that functions to select an image from a plurality of images that are caused to be presented by the home entertainment device in a display associated with the home entertainment device. 8. The home entertainment device as recited in claim 1, wherein the database is stored in the memory. 9. The home entertainment device as recited in claim 1, wherein the database is stored in a further memory device associated with a remotely located server device that is accessible by the home entertainment device.
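The claimed flow — receive data identifying a consumer electronic device, retrieve its device description (brand, model, operational behavior) from a database, and transmit that description to a controlling device — can be pictured with a minimal sketch. This is an illustrative sketch only: the database contents, the `acme-tv-100` identifier, and the function names are hypothetical assumptions, not part of the patent.

```python
# Hypothetical database of device descriptions, keyed by an identifier
# received via the home entertainment device's receiver.
DEVICE_DB = {
    "acme-tv-100": {
        "brand": "Acme",
        "model": "TV-100",
        "power_on_delay_s": 3.0,           # time to complete a power-on operation
        "needs_enter_after_digits": True,  # channel-selection behavior
    },
}

def configure_controlling_device(device_id, db=DEVICE_DB):
    """Retrieve the description for device_id and return the payload a
    home entertainment device might transmit to a controlling device."""
    description = db.get(device_id)
    if description is None:
        raise KeyError(f"no description for {device_id!r}")
    return {"target": device_id, "description": description}

payload = configure_controlling_device("acme-tv-100")
```

The controlling device would then use the `description` fields (e.g. the power-on delay and the enter-key behavior of claims 4 and 5) when building its command communications.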
TechCenter: 2600
Row: 10966 · ApplicationNumber: 14651272 · ArtUnit: 2616
At least one aspect of the present disclosure describes a computer-implemented system for optimization of animated content. The system includes a rule management module, a content generation module, and a content evaluation module. The content generation module is operative to generate an animated content configuration in accordance with a set of rules on content generation. The animated content configuration includes an initial configuration and a transition and is designed with a particular optimization objective. The content evaluation module is operative to evaluate content performance on reaching the particular optimization objective based on data acquired when a piece of animated content assembled from the animated content configuration is displayed. The rule management module is operative to amend the set of rules based on the evaluated content performance.
1. A computer-implemented system for facilitating automatic optimization of animated content to be rendered on an electronically addressable display, the system comprising: a content generation module operative to generate an animated content configuration in accordance with a set of rules on content generation, the animated content configuration comprising an initial configuration and a transition, the animated content configuration designed with a particular optimization objective; a content evaluation module operative to evaluate content performance on reaching the particular optimization objective based on data acquired when a piece of animated content assembled from the animated content configuration is displayed; and a rule management module operative to amend the set of rules based on the evaluated content performance, wherein the initial configuration comprises a plurality of content elements and one or more relationships among the plurality of content elements, and the transition defines image transformations of one of the plurality of content elements. 2. The computer-implemented system of claim 1, wherein the set of rules on content generation are initially developed according to historical data on effects of particular categorical relationships and metric adjustments. 3. The computer-implemented system of claim 1, wherein the rule management module is further operative to amend the set of rules by at least one of the steps of adding a rule on probability factor and modifying a rule on probability factor, the rule on probability factor specifying probability of content configurations including a configuration element that has a particular attribute value. 4. The computer-implemented system of claim 1, wherein the transition defines changes over time on at least one aspect of relationships, velocity, size, opacity, degree of curvature, color, brightness, hue, and contrast. 5. 
The computer-implemented system of claim 3, wherein the configuration element comprises at least one of a content element, a size adjustment, a position adjustment, a relationship, and a transition. 6. The computer-implemented system of claim 1, wherein the rule management module is further operative to amend the set of rules by at least one of the steps of adding a rule on a content element and modifying a rule on a content element. 7. The computer-implemented system of claim 1, wherein the rule management module is further operative to amend the set of rules by at least one of the steps of adding a rule on a relationship and modifying a rule on a relationship. 8. The computer-implemented system of claim 7, wherein the rule management module is further operative to amend the set of rules by at least one of the steps of adding a rule on probability factor and modifying a rule on probability factor, the rule on probability factor specifying a statistical probability that a piece of content expresses a particular value of a metric adjustment within the relationship. 9. The computer-implemented system of claim 1, wherein the rule management module is further operative to amend the set of rules by at least one of the steps of adding a rule on a transition and modifying a rule on a transition. 10. 
A method for optimizing animated content, comprising: generating, by a processing unit, two animated content configurations in accordance with a set of rules on content generation; assembling, by a processing unit, the two animated content configurations into two pieces of animated content; conducting an experiment to obtain effectiveness data of the two pieces of animated content on reaching an optimization objective; determining relative effectiveness of the two animated content configurations based on the effectiveness data; and amending the set of rules on content generation based on the relative effectiveness of the two animated content configurations, wherein each animated content configuration comprises an initial configuration and a transition, the initial configuration comprises a plurality of content elements and one or more relationships among the plurality of content elements, and the transition defines image transformations of one of the plurality of content elements. 11. The method of claim 10, wherein the transition defines changes over time on at least one aspect of relationships, velocity, size, opacity, degree of curvature, color, brightness, hue, and contrast. 12. The method of claim 10, wherein the effectiveness data comprises at least one of data indicative of activities at a location where the piece of content is displayed, data indicative of view behavior, and a result from a visual attention model. 13. The method of claim 10, wherein the amending step comprises amending the set of rules by at least one of the steps of adding a rule on visual perception based on the particular optimization objective and modifying a rule on visual perception based on the particular optimization objective. 14. 
The method of claim 10, further comprising: applying a visual attention model (VAM) on one of the two assembled pieces of content to generate a VAM output and verify, based on the VAM output, whether the one of the two assembled pieces of content satisfies the set of rules. 15. The method of claim 10, wherein the amending step comprises amending the set of rules by at least one of the steps of adding a rule on a relationship, modifying a rule on a relationship, adding a rule on a transition, modifying a rule on a transition, adding a rule on a content element, and modifying a rule on a content element.
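The loop in claim 10 — generate two configurations from a rule set, compare their effectiveness, and amend the rules based on the winner — can be illustrated with a minimal sketch. The rule representation (per-attribute weights), the weight-bump update, and every name below are assumptions made for illustration, not the claimed implementation; real effectiveness data would come from the experiment described in the claim.

```python
import random

def generate_configuration(rules, rng):
    """Sample one value per configuration attribute (e.g. transition type,
    color) according to the current rule weights."""
    return {attr: rng.choices(list(weights), weights=list(weights.values()))[0]
            for attr, weights in rules.items()}

def amend_rules(rules, winner, step=0.1):
    """Shift probability mass toward the attribute values used by the
    more effective configuration (in place)."""
    for attr, value in winner.items():
        rules[attr][value] += step
    return rules

# Two attributes, each starting with uniform weights.
rules = {"transition": {"fade": 1.0, "slide": 1.0},
         "color": {"red": 1.0, "blue": 1.0}}
rng = random.Random(0)
a = generate_configuration(rules, rng)
b = generate_configuration(rules, rng)
# Effectiveness data would come from displaying both pieces of content;
# here we simply treat configuration `a` as the winner.
rules = amend_rules(rules, a)
```

Over repeated experiments, the weights drift toward attribute values that have performed better, which is one plausible reading of "amending the set of rules based on the relative effectiveness."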
TechCenter: 2600
Row: 10967 · ApplicationNumber: 16066225 · ArtUnit: 2613
In some examples, an electronic device receives selection of a target image relating to an augmented reality presentation, displays, in a display screen of the electronic device, captured visual data of an environment acquired by the electronic device, and displays, in the display screen, guidance information relating to the target image to assist a user in finding a physical target, corresponding to the target image, in the captured visual data of the environment.
1. A non-transitory storage medium storing instructions that upon execution cause an electronic device to: receive a selection of a target image relating to an augmented reality presentation; display, in a display screen of the electronic device, captured visual data of an environment acquired by the electronic device; and display, in the display screen, guidance information relating to the target image to assist a user in finding a physical target, corresponding to the target image, in the captured visual data of the environment. 2. The non-transitory storage medium of claim 1, wherein the guidance information is displayed concurrently with the captured visual data of the environment. 3. The non-transitory storage medium of claim 2, wherein the guidance information is displayed in a first portion of the display screen, and the concurrently displayed captured visual data of the environment is displayed in a second, different portion of the display screen. 4. The non-transitory storage medium of claim 2, wherein the displayed guidance information is displayed as an overlay over the captured visual data of the environment in the display screen. 5. The non-transitory storage medium of claim 1, wherein the displayed guidance information comprises the target image. 6. The non-transitory storage medium of claim 1, wherein the displayed guidance information comprises position information to assist the user in re-positioning the electronic device towards the physical target. 7. The non-transitory storage medium of claim 6, wherein the instructions upon execution cause the electronic device to: determine where the physical target is in the captured visual data based on object recognition of the physical target that includes receiving a score that indicates a likelihood of the physical target matching the target image. 8. 
The non-transitory storage medium of claim 1, wherein the instructions upon execution cause the electronic device to: receive a scanned visual data of the physical target after the user has re-positioned the electronic device to focus on the physical target based on the guidance information; and responsive to the scanned visual data of the physical target, trigger display of the augmented reality presentation in the display screen of the electronic device. 9. The non-transitory storage medium of claim 1, wherein receiving the selection of the target image comprises receiving the selection: in response to user selection of a target image from among a plurality of target images presented in the display screen, or in response to opening an application in the electronic device. 10. An electronic device comprising: a camera to capture visual data of an environment; a display device comprising a display screen; and a processor to: receive selection of a target image relating to an augmented reality presentation; cause display, in the display screen, of the captured visual data of an environment; cause display, in the display screen, guidance information relating to the target image to assist a user in finding a physical target, corresponding to the target image, in the captured visual data of the environment; and responsive to a capture of visual data of the physical target, trigger presentation of the augmented reality presentation. 11. The electronic device of claim 10, wherein the displayed guidance information includes the target image. 12. The electronic device of claim 10, wherein the displayed guidance information includes position information indicating a location of the physical target. 13. 
The electronic device of claim 10, wherein the processor is to further determine one or both of a position and an orientation of the electronic device, and to trigger display of the guidance information based on one or both of the position and the orientation of the electronic device. 14. A method comprising: receiving, by an electronic device, selection of a target image; displaying, in a display screen of the electronic device, captured visual data of an environment obtained by a camera of the electronic device; and displaying, in the display screen, the target image concurrently with the captured visual data of the environment, to assist a user in finding a physical target, corresponding to the target image, in the captured visual data of the environment. 15. The method of claim 14, further comprising: displaying, in the display screen, position information that indicates a location of the target image relative to the electronic device.
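One way to picture the guidance logic described above — use an object-recognition score (the likelihood that the physical target matches the target image, per claim 7) and the target's position in the frame to either trigger the AR presentation or display repositioning hints — is the following sketch. The threshold value, the normalized-offset convention, and the function name are hypothetical assumptions, not taken from the patent.

```python
def guidance_for_frame(match_score, target_offset, threshold=0.8):
    """Decide what to show over the camera feed for one captured frame.

    match_score   -- likelihood (0..1) that the physical target matches
                     the target image, from object recognition.
    target_offset -- (dx, dy) of the target relative to frame center,
                     normalized to [-1, 1]; positive dx is right,
                     positive dy is down.
    """
    if match_score >= threshold:
        # Target found: trigger the augmented reality presentation.
        return {"action": "trigger_ar_presentation"}
    dx, dy = target_offset
    hints = []
    if abs(dx) > 0.1:
        hints.append("pan right" if dx > 0 else "pan left")
    if abs(dy) > 0.1:
        hints.append("tilt down" if dy > 0 else "tilt up")
    # Guidance is displayed alongside (or overlaid on) the captured
    # visual data of the environment.
    return {"action": "show_guidance", "hints": hints or ["hold steady"]}
```

A device would run this per frame, switching from guidance display to the AR presentation once the scanned visual data of the physical target scores above the threshold.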
TechCenter: 2600
Row: 10968 · ApplicationNumber: 15429953 · ArtUnit: 2616
This disclosure relates to systems and methods for using augmented reality with the internet of things. An augmented reality experience may be provided based on an operation of an object. Operation status information of a detected object may be obtained, and a visual effect may be determined based on the operation status information. An object may be controlled using augmented reality. Operation status information of a detected object may be obtained, and a control option may be determined based on the operation status information. A visual effect may be determined based on the control option, and a user input regarding the control option may be obtained. Control information configured to effectuate a change in the operation of the object may be transmitted to the object.
1. A system for providing augmented reality experience based on an operation of an object, the system comprising: a display configured to display an overlay image; a first image sensor configured to generate visual output signals conveying visual information within a field of view of the first image sensor; one or more processors configured by machine readable instructions to: detect the object based on the visual output signals; determine a position and/or an orientation of the object based on the visual output signals; obtain operation status information of the object; determine a first visual effect based on the operation status information; determine an overlay position and/or an overlay orientation for the first visual effect based on the position and/or the orientation of the object; determine the overlay image comprising the first visual effect, wherein the first visual effect is placed within the overlay image according to the overlay position and/or the overlay orientation for the first visual effect; and effectuate displaying of the overlay image on the display. 2. The system of claim 1, wherein the one or more processors are further configured by machine readable instructions to: determine a change in the operation status information of the object; and determine the first visual effect further based on the change in the operation status information. 3. The system of claim 2, wherein the change in the operation status information of the object includes transitional operation status information of the object. 4. 
The system of claim 1, wherein the one or more processors are further configured by machine readable instruction to: determine a change in the operation status information of the object; determine a second visual effect based on the change in the operation status information; determine an overlay position and/or an overlay orientation for the second visual effect based on the position and/or the orientation of the object; and add the second visual effect to the overlay image, wherein the second visual effect is placed within the overlay image according to the overlay position and/or the overlay orientation for the second visual effect. 5. The system of claim 4, wherein adding the second visual effect in the overlay image includes removing the first visual effect from the overlay image. 6. The system of claim 1, wherein the object is detected further based on a gaze direction of an eye of a user. 7. The system of claim 6, wherein the object within the field of view of the first image sensor is detected when the gaze direction of the eye of the user is pointed towards the object. 8. The system of claim 6, further comprising a second image sensor configured to track a position of the eye of the user. 9. The system of claim 1, wherein the first visual effect is determined further based on a recreational presentation conveyed to a user through one or more of visual, audio, and/or haptic simulation. 10. 
A method for providing augmented reality experience based on an operation of an object, the method comprising: generating visual output signals conveying visual information within a field of view of a first image sensor; detecting the object based on the visual output signals; determining a position and/or an orientation of the object based on the visual output signals; obtaining operation status information of the object; determining a first visual effect based on the operation status information; determining an overlay position and/or an overlay orientation for the first visual effect based on the position and/or the orientation of the object; determining an overlay image comprising the first visual effect, wherein the first visual effect is placed within the overlay image according to the overlay position and/or the overlay orientation for the first visual effect; and effectuating displaying of the overlay image on a display. 11. The method of claim 10, further comprising: determining a change in the operation status information of the object; and determining the first visual effect further based on the change in the operation status information. 12. The method of claim 11, wherein the change in the operation status information of the object includes transitional operation status information of the object. 13. The method of claim 10, further comprising: determining a change in the operation status information of the object; determining a second visual effect based on the change in the operation status information; determining an overlay position and/or an overlay orientation for the second visual effect based on the position and/or the orientation of the object; and adding the second visual effect to the overlay image, wherein the second visual effect is placed within the overlay image according to the overlay position and/or the overlay orientation for the second visual effect. 14. 
The method of claim 13, wherein adding the second visual effect in the overlay image includes removing the first visual effect from the overlay image. 15. The method of claim 10, wherein the object is detected further based on a gaze direction of an eye of a user. 16. The method of claim 15, wherein the object within the field of view of the first image sensor is detected when the gaze direction of the eye of the user is pointed towards the object. 17. The method of claim 15, wherein a position of the eye of the user is tracked by a second image sensor. 18. The method of claim 10, wherein the first visual effect is determined further based on a recreational presentation conveyed to a user through one or more of visual, audio, and/or haptic simulation. 19. A system for providing augmented reality experience based on an operation of an object, the system comprising: a display configured to display an overlay image; a first image sensor configured to generate visual output signals conveying visual information within a field of view of the first image sensor; a second image sensor configured to track a position of an eye of a user; one or more processors configured by machine readable instructions to: determine a gaze direction of the eye of the user based on the position of the eye of the user; detect the object based on the visual output signals and based on the gaze direction of the eye of the user, wherein the object within the field of view of the first image sensor is detected when the gaze direction of the eye of the user is pointed towards the object; determine a position and/or an orientation of the object based on the visual output signals; obtain operation status information of the object; determine a first visual effect based on the operation status information; determine an overlay position and/or an overlay orientation for the first visual effect based on the position and/or the orientation of the object; determine the overlay image comprising the first 
visual effect, wherein the first visual effect is placed within the overlay image according to the overlay position and/or the overlay orientation for the first visual effect; and effectuate displaying of the overlay image on the display. 20. The system of claim 19, wherein the one or more processors are further configured by machine readable instructions to: determine a change in the operation status information of the object; and determine the first visual effect further based on the change in the operation status information.
This disclosure relates to systems and methods for using augmented reality with the internet of things. An augmented reality experience may be provided based on an operation of an object. Operation status information of a detected object may be obtained and a visual effect may be determined based on the operation status information. An object may be controlled using augmented reality. Operation status information of a detected object may be obtained and a control option may be determined based on the operation status information. A visual effect may be determined based on the control option and a user input regarding the control option may be obtained. A control information configured to effectuate a change in the operation of the object may be transmitted to the object.1. A system for providing augmented reality experience based on an operation of an object, the system comprising: a display configured to display an overlay image; a first image sensor configured to generate visual output signals conveying visual information within a field of view of the first image sensor; one or more processors configured by machine readable instructions to: detect the object based on the visual output signals; determine a position and/or an orientation of the object based on the visual output signals; obtain operation status information of the object; determine a first visual effect based on the operation status information; determine an overlay position and/or an overlay orientation for the first visual effect based on the position and/or the orientation of the object; determine the overlay image comprising the first visual effect, wherein the first visual effect is placed within the overlay image according to the overlay position and/or the overlay orientation for the first visual effect; and effectuate displaying of the overlay image on the display. 2. 
The system of claim 1, wherein the one or more processors are further configured by machine readable instructions to: determine a change in the operation status information of the object; and determine the first visual effect further based on the change in the operation status information. 3. The system of claim 2, wherein the change in the operation status information of the object includes transitional operation status information of the object. 4. The system of claim 1, wherein the one or more processors are further configured by machine readable instructions to: determine a change in the operation status information of the object; determine a second visual effect based on the change in the operation status information; determine an overlay position and/or an overlay orientation for the second visual effect based on the position and/or the orientation of the object; and add the second visual effect to the overlay image, wherein the second visual effect is placed within the overlay image according to the overlay position and/or the overlay orientation for the second visual effect. 5. The system of claim 4, wherein adding the second visual effect in the overlay image includes removing the first visual effect from the overlay image. 6. The system of claim 1, wherein the object is detected further based on a gaze direction of an eye of a user. 7. The system of claim 6, wherein the object within the field of view of the first image sensor is detected when the gaze direction of the eye of the user is pointed towards the object. 8. The system of claim 6, further comprising a second image sensor configured to track a position of the eye of the user. 9. The system of claim 1, wherein the first visual effect is determined further based on a recreational presentation conveyed to a user through one or more of visual, audio, and/or haptic simulation. 10. 
A method for providing augmented reality experience based on an operation of an object, the method comprising: generating visual output signals conveying visual information within a field of view of a first image sensor; detecting the object based on the visual output signals; determining a position and/or an orientation of the object based on the visual output signals; obtaining operation status information of the object; determining a first visual effect based on the operation status information; determining an overlay position and/or an overlay orientation for the first visual effect based on the position and/or the orientation of the object; determining an overlay image comprising the first visual effect, wherein the first visual effect is placed within the overlay image according to the overlay position and/or the overlay orientation for the first visual effect; and effectuating displaying of the overlay image on a display. 11. The method of claim 10, further comprising: determining a change in the operation status information of the object; and determining the first visual effect further based on the change in the operation status information. 12. The method of claim 11, wherein the change in the operation status information of the object includes transitional operation status information of the object. 13. The method of claim 10, further comprising: determining a change in the operation status information of the object; determining a second visual effect based on the change in the operation status information; determining an overlay position and/or an overlay orientation for the second visual effect based on the position and/or the orientation of the object; and adding the second visual effect to the overlay image, wherein the second visual effect is placed within the overlay image according to the overlay position and/or the overlay orientation for the second visual effect. 14. 
The method of claim 13, wherein adding the second visual effect in the overlay image includes removing the first visual effect from the overlay image. 15. The method of claim 10, wherein the object is detected further based on a gaze direction of an eye of a user. 16. The method of claim 15, wherein the object within the field of view of the first image sensor is detected when the gaze direction of the eye of the user is pointed towards the object. 17. The method of claim 15, wherein a position of the eye of the user is tracked by a second image sensor. 18. The method of claim 10, wherein the first visual effect is determined further based on a recreational presentation conveyed to a user through one or more of visual, audio, and/or haptic simulation. 19. A system for providing augmented reality experience based on an operation of an object, the system comprising: a display configured to display an overlay image; a first image sensor configured to generate visual output signals conveying visual information within a field of view of the first image sensor; a second image sensor configured to track a position of an eye of a user; one or more processors configured by machine readable instructions to: determine a gaze direction of the eye of the user based on the position of the eye of the user; detect the object based on the visual output signals and based on the gaze direction of the eye of the user, wherein the object within the field of view of the first image sensor is detected when the gaze direction of the eye of the user is pointed towards the object; determine a position and/or an orientation of the object based on the visual output signals; obtain operation status information of the object; determine a first visual effect based on the operation status information; determine an overlay position and/or an overlay orientation for the first visual effect based on the position and/or the orientation of the object; determine the overlay image comprising the first 
visual effect, wherein the first visual effect is placed within the overlay image according to the overlay position and/or the overlay orientation for the first visual effect; and effectuate displaying of the overlay image on the display. 20. The system of claim 19, wherein the one or more processors are further configured by machine readable instructions to: determine a change in the operation status information of the object; and determine the first visual effect further based on the change in the operation status information.
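The overlay pipeline recited in claim 1 (detect the object, obtain its operation status, choose a visual effect, and place that effect at the object's position and orientation in the overlay image) can be sketched as follows. All names, status values, and effect labels here are illustrative assumptions, not part of the claimed system.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    object_id: str
    position: tuple     # (x, y) in overlay-image coordinates
    orientation: float  # degrees

# Hypothetical mapping from operation status to a "first visual effect".
STATUS_EFFECTS = {
    "on": "green_glow",
    "off": "grey_outline",
    "error": "red_flash",
}

def build_overlay(obj: DetectedObject, status: str) -> dict:
    """Return one overlay entry placing the status effect at the object."""
    effect = STATUS_EFFECTS.get(status, "grey_outline")
    return {
        "object_id": obj.object_id,
        "effect": effect,
        "overlay_position": obj.position,       # follows the object's position
        "overlay_orientation": obj.orientation, # follows the object's orientation
    }

overlay = build_overlay(DetectedObject("lamp-1", (120, 80), 0.0), "on")
print(overlay["effect"])  # green_glow
```

A change in operation status (claims 2 and 4) would simply be handled by calling `build_overlay` again with the new status, replacing the previous entry.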
2,600
10,969
10,969
16,388,243
2,651
A phone appliance and method of use are provided where the phone appliance can be used to make VoIP communications calls. In a preferred embodiment, the phone appliance includes an RF connection for connecting to a computer or other computing device for facilitating the placement of the VoIP communications calls. The phone appliance further includes a display or portal for depicting advertisements provided by various advertisers. The advertisements provided can be used to defray all or part of the cost associated with making VoIP communications calls. The portal can also be used to communicate with businesses for ordering products, such as ordering a pizza, and to perform various services, such as purchasing stocks. In an exemplary system, the phone appliance is used to transmit to a control center information related to the user of the phone appliance, such as interests and buying habits, and queries for receiving additional information for various advertised products and services. The control center transmits the queries to the appropriate vendors for providing the user with additional information. Other functions and features are provided to the phone appliance, such as being able to download e-mail messages stored within or received by the computer.
1. A recording system for recording voice communications during a voice communication call, comprising: at least one phone appliance for transmitting or receiving voice communications; a converter configured to convert at least one voice communication from analog to digital format; and a computing device configured to facilitate recording the at least one voice communication in digital format on at least one computer memory of the computing device or a disk of the computing device, wherein the at least one voice communication is recorded during a voice communication call. 2. The phone appliance according to claim 1, wherein the at least one phone appliance includes a key for enabling initiating recording of the at least one voice communication. 3. The phone appliance according to claim 1, wherein the at least one voice communication in digital format is compressed. 4. The phone appliance according to claim 1, wherein the stored at least one voice communication can be retrieved for use. 5. The phone appliance according to claim 1, wherein the at least one phone appliance includes at least one transceiver for transmitting or receiving the at least one voice communication. 6. The phone appliance according to claim 5, wherein the at least one phone appliance performs data communications including Internet access for viewing and interacting with Internet content. 7. 
A system for recording voice communications during a voice communication call, comprising: a computing device in operative communication with a phone appliance for enabling initiating recording of at least one voice communication, wherein the at least one voice communication is an analog or digital voice communication transmitted or received by at least one phone appliance; the computing device receiving and recording the at least one voice communication, wherein the at least one voice communication is recorded during a voice communication call; and the computing device is enabled to facilitate retrieving the recorded at least one voice communication. 8. The system according to claim 7, wherein the at least one voice communication is compressed. 9. The system according to claim 7, wherein the computing device or a control center records at least one voice communication on at least one memory or a disk. 10. The system according to claim 7, wherein the system includes at least a converter to convert at least one voice communication from analog to digital format. 11. The system according to claim 7, wherein the at least one phone appliance includes at least one transceiver for transmitting or receiving the at least one voice communication for recording. 12. The system according to claim 7, wherein the at least one phone appliance is configured for data communications including Internet access for viewing and accessing Internet content. 13. The system according to claim 7, wherein the computing device determines a fee for storing the recorded at least one voice communication. 14. 
A method for recording voice communications during a voice communication call, comprising: establishing a voice communication call with a phone appliance, wherein the voice communication call is an analog or digital communication call; converting at least one voice communication from analog to digital format; compressing the digital at least one voice communication; initiating recording the at least one voice communication by the phone appliance, wherein the at least one voice communication is recorded during the voice communication call; and facilitating recording or storing of the at least one voice communication by a computing device. 15. A method according to claim 14, further comprising enabling recording the at least one voice communication via a key. 16. A method according to claim 14, further comprising retrieving the recorded or the stored at least one voice communication. 17. A method according to claim 14, further comprising transmitting or receiving the at least one voice communication for recording via a transceiver. 18. A method according to claim 17, further comprising storing the at least one voice communication for recording in at least one memory or at least one disk of a computing device or a control center. 19. A method according to claim 14, further comprising performing data communications including Internet access for viewing and interacting with Internet content. 20. A method according to claim 14, further comprising determining a fee for storing the recorded at least one voice communication. 21. 
A method for recording voice communications during a voice communication call, comprising: receiving, by a computing device, a communication for enabling recording at least one voice communication, wherein the at least one voice communication is an analog or data communication call; recording the at least one voice communication, wherein the at least one voice communication is recorded during a voice communication call; and retrieving the recorded at least one voice communication by a computing device.
A phone appliance and method of use are provided where the phone appliance can be used to make VoIP communications calls. In a preferred embodiment, the phone appliance includes an RF connection for connecting to a computer or other computing device for facilitating the placement of the VoIP communications calls. The phone appliance further includes a display or portal for depicting advertisements provided by various advertisers. The advertisements provided can be used to defray all or part of the cost associated with making VoIP communications calls. The portal can also be used to communicate with businesses for ordering products, such as ordering a pizza, and to perform various services, such as purchasing stocks. In an exemplary system, the phone appliance is used to transmit to a control center information related to the user of the phone appliance, such as interests and buying habits, and queries for receiving additional information for various advertised products and services. The control center transmits the queries to the appropriate vendors for providing the user with additional information. Other functions and features are provided to the phone appliance, such as being able to download e-mail messages stored within or received by the computer.1. A recording system for recording voice communications during a voice communication call, comprising: at least one phone appliance for transmitting or receiving voice communications; a converter configured to convert at least one voice communication from analog to digital format; and a computing device configured to facilitate recording the at least one voice communication in digital format on at least one computer memory of the computing device or a disk of the computing device, wherein the at least one voice communication is recorded during a voice communication call. 2. 
The phone appliance according to claim 1, wherein the at least one phone appliance includes a key for enabling initiating recording of the at least one voice communication. 3. The phone appliance according to claim 1, wherein the at least one voice communication in digital format is compressed. 4. The phone appliance according to claim 1, wherein the stored at least one voice communication can be retrieved for use. 5. The phone appliance according to claim 1, wherein the at least one phone appliance includes at least one transceiver for transmitting or receiving the at least one voice communication. 6. The phone appliance according to claim 5, wherein the at least one phone appliance performs data communications including Internet access for viewing and interacting with Internet content. 7. A system for recording voice communications during a voice communication call, comprising: a computing device in operative communication with a phone appliance for enabling initiating recording of at least one voice communication, wherein the at least one voice communication is an analog or digital voice communication transmitted or received by at least one phone appliance; the computing device receiving and recording the at least one voice communication, wherein the at least one voice communication is recorded during a voice communication call; and the computing device is enabled to facilitate retrieving the recorded at least one voice communication. 8. The system according to claim 7, wherein the at least one voice communication is compressed. 9. The system according to claim 7, wherein the computing device or a control center records at least one voice communication on at least one memory or a disk. 10. The system according to claim 7, wherein the system includes at least a converter to convert at least one voice communication from analog to digital format. 11. 
The system according to claim 7, wherein the at least one phone appliance includes at least one transceiver for transmitting or receiving the at least one voice communication for recording. 12. The system according to claim 7, wherein the at least one phone appliance is configured for data communications including Internet access for viewing and accessing Internet content. 13. The system according to claim 7, wherein the computing device determines a fee for storing the recorded at least one voice communication. 14. A method for recording voice communications during a voice communication call, comprising: establishing a voice communication call with a phone appliance, wherein the voice communication call is an analog or digital communication call; converting at least one voice communication from analog to digital format; compressing the digital at least one voice communication; initiating recording the at least one voice communication by the phone appliance, wherein the at least one voice communication is recorded during the voice communication call; and facilitating recording or storing of the at least one voice communication by a computing device. 15. A method according to claim 14, further comprising enabling recording the at least one voice communication via a key. 16. A method according to claim 14, further comprising retrieving the recorded or the stored at least one voice communication. 17. A method according to claim 14, further comprising transmitting or receiving the at least one voice communication for recording via a transceiver. 18. A method according to claim 17, further comprising storing the at least one voice communication for recording in at least one memory or at least one disk of a computing device or a control center. 19. A method according to claim 14, further comprising performing data communications including Internet access for viewing and interacting with Internet content. 20. 
A method according to claim 14, further comprising determining a fee for storing the recorded at least one voice communication. 21. A method for recording voice communications during a voice communication call, comprising: receiving, by a computing device, a communication for enabling recording at least one voice communication, wherein the at least one voice communication is an analog or data communication call; recording the at least one voice communication, wherein the at least one voice communication is recorded during a voice communication call; and retrieving the recorded at least one voice communication by a computing device.
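The recording flow of claim 14 (establish a call, convert the analog voice communication to digital format, compress it, record it via a computing device, and later retrieve it) can be illustrated with a minimal sketch. The 8-bit quantizer and zlib compression below are assumptions made for illustration; the claims do not specify any particular codec or storage medium.

```python
import zlib

def digitize(analog_samples, levels=256):
    """Quantize analog samples in [-1.0, 1.0] to 8-bit digital values."""
    return bytes(int((s + 1.0) / 2.0 * (levels - 1)) for s in analog_samples)

def record_call(analog_samples, storage: dict, call_id: str) -> None:
    """Convert, compress, and record one voice communication during a call."""
    digital = digitize(analog_samples)
    storage[call_id] = zlib.compress(digital)  # compress before storing

def retrieve(storage: dict, call_id: str) -> bytes:
    """Retrieve a previously recorded voice communication for use."""
    return zlib.decompress(storage[call_id])

store = {}
record_call([0.0, 0.5, -0.5, 1.0], store, "call-001")
assert retrieve(store, "call-001") == digitize([0.0, 0.5, -0.5, 1.0])
```

The `storage` dict stands in for the claimed "at least one memory or disk of a computing device or a control center"; a fee calculation (claim 20) could be derived from `len(storage[call_id])`.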
2,600
10,970
10,970
14,980,723
2,675
An image processing apparatus is provided that is capable of conducting short-range wireless communication with a terminal having a first display unit. The image processing apparatus includes an operation panel including a second display unit, and a sensor configured to sense a radio wave from the terminal for short-range wireless communication. The operation panel is capable of changing a position thereof relative to a main body of the image processing apparatus. The operation panel includes a touch area arranged on the operation panel, wherein the touch area is an area where the terminal is sensed by the sensor.
1. An image processing apparatus capable of conducting short-range wireless communication with a terminal having a first display unit, comprising: an operation panel including a second display unit; and a sensor configured to sense a radio wave from the terminal for short-range wireless communication, the operation panel capable of changing a position thereof relative to a main body of the image processing apparatus, and the operation panel including a touch area arranged on the operation panel, the touch area being an area where the terminal is sensed by the sensor. 2. The image processing apparatus according to claim 1, wherein execution of the short-range wireless communication involves waving of the terminal over the touch area. 3. The image processing apparatus according to claim 1, wherein the short-range wireless communication is near field communication (NFC). 4. The image processing apparatus according to claim 1, wherein the sensor includes an antenna disposed on a rear side of the touch area. 5. The image processing apparatus according to claim 1, wherein the second display unit is a touch panel. 6. The image processing apparatus according to claim 1, wherein one end of the operation panel is fixed to the main body of the image processing apparatus, and the operation panel is configured to change the position thereof by rotating about the end. 7. The image processing apparatus according to claim 1, wherein the operation panel is configured to be removable from the main body of the image processing apparatus. 8. The image processing apparatus according to claim 1, wherein the touch area is arranged adjacent to the second display unit. 9. The image processing apparatus according to claim 8, wherein the touch area is arranged adjacent to the second display unit in a portion other than an upper side of the second display unit when information is being displayed in an upright state on the second display unit. 10. 
The image processing apparatus according to claim 8, wherein the operation panel includes an operation button for inputting information, and the second display unit is adjacent to the touch area on a side surface different from a side surface adjacent to the operation button. 11. The image processing apparatus according to claim 10, wherein the operation button includes a button for instructing the image processing apparatus to start formation of an image. 12. The image processing apparatus according to claim 10, wherein the operation button includes a button for instructing the image processing apparatus to perform an irreversible operation. 13. The image processing apparatus according to claim 10, wherein the touch area is arranged so as not to overlap with the operation button. 14. The image processing apparatus according to claim 1, wherein the operation panel includes an input button for inputting reversible control information to the image processing apparatus, and the input button is arranged near the touch area. 15. The image processing apparatus according to claim 1, wherein the touch area is arranged adjacent to a right-hand side of the second display unit when information is being displayed in an upright state on the second display unit. 16. An image processing apparatus capable of conducting short-range wireless communication with a terminal comprising: an operation panel including a display unit and removable from a main body of the image processing apparatus; and a sensor configured to sense a radio wave from the terminal for short-range wireless communication, the operation panel capable of changing a position thereof relative to a main body of the image processing apparatus, and the operation panel including a touch area arranged on the operation panel, the touch area being an area where the terminal is sensed by the sensor. 17. 
The image processing apparatus according to claim 16, wherein execution of the short-range wireless communication involves waving of the terminal over the touch area. 18. The image processing apparatus according to claim 16, wherein the short-range wireless communication is near field communication (NFC). 19. The image processing apparatus according to claim 16, wherein one end of the operation panel is fixed to the main body of the image processing apparatus, and the operation panel is configured to change the position thereof by rotating about the end. 20. The image processing apparatus according to claim 16, wherein the touch area is arranged adjacent to the second display unit. 21. The image processing apparatus according to claim 20, wherein the operation panel includes an operation button for inputting information, and the second display unit is adjacent to the touch area on a side surface different from a side surface adjacent to the operation button. 22. The image processing apparatus according to claim 21, wherein the touch area is arranged so as not to overlap with the operation button.
An image processing apparatus is provided that is capable of conducting short-range wireless communication with a terminal having a first display unit. The image processing apparatus includes an operation panel including a second display unit, and a sensor configured to sense a radio wave from the terminal for short-range wireless communication. The operation panel is capable of changing a position thereof relative to a main body of the image processing apparatus. The operation panel includes a touch area arranged on the operation panel, wherein the touch area is an area where the terminal is sensed by the sensor.1. An image processing apparatus capable of conducting short-range wireless communication with a terminal having a first display unit, comprising: an operation panel including a second display unit; and a sensor configured to sense a radio wave from the terminal for short-range wireless communication, the operation panel capable of changing a position thereof relative to a main body of the image processing apparatus, and the operation panel including a touch area arranged on the operation panel, the touch area being an area where the terminal is sensed by the sensor. 2. The image processing apparatus according to claim 1, wherein execution of the short-range wireless communication involves waving of the terminal over the touch area. 3. The image processing apparatus according to claim 1, wherein the short-range wireless communication is near field communication (NFC). 4. The image processing apparatus according to claim 1, wherein the sensor includes an antenna disposed on a rear side of the touch area. 5. The image processing apparatus according to claim 1, wherein the second display unit is a touch panel. 6. The image processing apparatus according to claim 1, wherein one end of the operation panel is fixed to the main body of the image processing apparatus, and the operation panel is configured to change the position thereof by rotating about the end. 
7. The image processing apparatus according to claim 1, wherein the operation panel is configured to be removable from the main body of the image processing apparatus. 8. The image processing apparatus according to claim 1, wherein the touch area is arranged adjacent to the second display unit. 9. The image processing apparatus according to claim 8, wherein the touch area is arranged adjacent to the second display unit in a portion other than an upper side of the second display unit when information is being displayed in an upright state on the second display unit. 10. The image processing apparatus according to claim 8, wherein the operation panel includes an operation button for inputting information, and the second display unit is adjacent to the touch area on a side surface different from a side surface adjacent to the operation button. 11. The image processing apparatus according to claim 10, wherein the operation button includes a button for instructing the image processing apparatus to start formation of an image. 12. The image processing apparatus according to claim 10, wherein the operation button includes a button for instructing the image processing apparatus to perform an irreversible operation. 13. The image processing apparatus according to claim 10, wherein the touch area is arranged so as not to overlap with the operation button. 14. The image processing apparatus according to claim 1, wherein the operation panel includes an input button for inputting reversible control information to the image processing apparatus, and the input button is arranged near the touch area. 15. The image processing apparatus according to claim 1, wherein the touch area is arranged adjacent to a right-hand side of the second display unit when information is being displayed in an upright state on the second display unit. 16. 
An image processing apparatus capable of conducting short-range wireless communication with a terminal, comprising: an operation panel including a display unit and removable from a main body of the image processing apparatus; and a sensor configured to sense a radio wave from the terminal for short-range wireless communication, the operation panel capable of changing a position thereof relative to a main body of the image processing apparatus, and the operation panel including a touch area arranged on the operation panel, the touch area being an area where the terminal is sensed by the sensor. 17. The image processing apparatus according to claim 16, wherein execution of the short-range wireless communication involves waving of the terminal over the touch area. 18. The image processing apparatus according to claim 16, wherein the short-range wireless communication is near field communication (NFC). 19. The image processing apparatus according to claim 16, wherein one end of the operation panel is fixed to the main body of the image processing apparatus, and the operation panel is configured to change the position thereof by rotating about the end. 20. The image processing apparatus according to claim 16, wherein the touch area is arranged adjacent to the display unit. 21. The image processing apparatus according to claim 20, wherein the operation panel includes an operation button for inputting information, and the display unit is adjacent to the touch area on a side surface different from a side surface adjacent to the operation button. 22. The image processing apparatus according to claim 21, wherein the touch area is arranged so as not to overlap with the operation button.
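The sensing behavior these claims describe can be sketched in a few lines: a sensor (antenna) behind the touch area watches for the radio wave of a nearby terminal and flags when a terminal has been waved over the area. This is a minimal illustrative sketch, not the patent's implementation; the function name, sample representation, and threshold value are all assumptions.

```python
# Hypothetical sketch of the touch-area sensing described in the claims:
# the antenna behind the touch area reports field-strength samples, and a
# terminal is "sensed" once any sample crosses a detection threshold.
# THRESHOLD and the normalized sample scale are made-up example values.

THRESHOLD = 0.5  # normalized field strength above which a terminal is sensed

def terminal_sensed(field_samples, threshold=THRESHOLD):
    """Return True once any sample from the touch-area antenna exceeds
    the detection threshold, i.e. a terminal was waved over the area."""
    return any(s >= threshold for s in field_samples)

# Waving a terminal over the touch area produces a rising burst of samples.
samples_idle = [0.01, 0.02, 0.01]
samples_wave = [0.02, 0.31, 0.74, 0.66]

assert not terminal_sensed(samples_idle)
assert terminal_sensed(samples_wave)
```

In a real NFC device the threshold crossing would trigger the handshake (claim 2's "waving of the terminal over the touch area"); here it simply returns a boolean.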
2,600
10,971
10,971
16,130,829
2,693
Systems, devices, and methods for reducing bulk and balancing weight in wearable heads-up displays are described. Bulk can be reduced in a wearable heads-up display by positioning the electronics in a first arm of the wearable heads-up display and a battery in a second arm, thus reducing the amount of extraneous housing that would otherwise be required to house multiple batteries or electronic components in both arms. Weight of a wearable heads-up display can be balanced by selecting appropriately sized and weighted electronics for the first arm, and by adjusting the size, and therefore the weight, of the battery in the second arm. Densely filling the first arm with electronics can result in the first arm and the second arm having similar weight.
1. A wearable heads-up display (“WHUD”) comprising: a support structure to be worn on a head of a user, the support structure comprising a first arm to be positioned on a first side of the head of the user, a second arm to be positioned on a second side of the head of the user opposite the first side of the head of the user, and a front frame to be positioned on a front side of the head of the user, the front frame physically coupled to the first arm and the second arm; an optical combiner carried by the front frame to be positioned within a field of view of an eye of the user; a light engine carried by the first arm, the light engine positioned and oriented to output display light to the optical combiner; a battery carried by the second arm; and at least one connector which electrically couples the battery to the light engine, wherein the optical combiner is positioned and oriented to direct the display light towards the eye of the user. 2. The WHUD of claim 1, further comprising at least one processor carried by the first arm and a non-transitory processor readable medium carried by the first arm, wherein the at least one processor is communicatively coupled to the non-transitory processor readable medium and the light engine, and the at least one connector electrically couples the battery to the at least one processor and to the non-transitory processor readable storage medium. 3. The WHUD of claim 2, further comprising a power supply circuit carried by the first arm, wherein the at least one connector directly electrically couples the battery to the power supply circuit, and the power supply circuit is electrically coupled to the at least one processor, the non-transitory processor readable medium, and the light engine. 4. The WHUD of claim 2, wherein the at least one connector directly electrically couples the battery to the at least one processor, the non-transitory processor readable medium, and the light engine. 5. 
The WHUD of claim 1, further comprising a wireless communication module operable to provide wireless communications with one or more other electronic devices, wherein at least a portion of the wireless communication module is positioned on the support structure relative to the light engine. 6. The WHUD of claim 5 wherein the wireless communication module comprises a wireless receiver communicatively coupled to the light engine, wherein the at least one connector electrically couples the battery to the wireless receiver. 7. The WHUD of claim 1, the first arm to be positioned on a right side of the head of the user and the second arm to be positioned on a left side of the head of a user. 8. The WHUD of claim 1, the first arm to be positioned on a left side of the head of the user and the second arm to be positioned on a right side of the head of the user. 9. The WHUD of claim 1 wherein the light engine comprises a projector, a scanning laser projector, a microdisplay, or a white-light source. 10. The WHUD of claim 1 wherein the optical combiner comprises: a lightguide, at least one hologram, at least one prism, a diffraction grating, at least one light reflector, or at least one light refractor positioned and oriented to redirect the display light towards the eye of the user. 11. The WHUD of claim 1, wherein the optical combiner is carried by a lens carried by the front frame of the support structure. 12. The WHUD of claim 1 wherein the light engine includes a scanning laser projector and the optical combiner includes at least one hologram, wherein the scanning laser projector is positioned and oriented to project laser light onto the at least one hologram, and the at least one hologram is positioned and oriented to redirect the laser light towards an eye of the user. 13. 
The WHUD of claim 1, further comprising a light redirector, wherein: the light redirector is positioned and oriented to receive the display light output by the light engine and to redirect the display light into a periphery of the optical combiner; and the optical combiner comprises a lightguide and an out-coupler, wherein the lightguide is positioned and oriented to receive the display light from the light redirector and direct the display light to the out-coupler, and the out-coupler is positioned and oriented to redirect the display light towards the eye of the user. 14. The WHUD of claim 1, wherein the at least one connector is carried by the front frame. 15. The WHUD of claim 1 the at least one connector to be positioned behind a head of the user. 16. The WHUD of claim 1, wherein the front frame is directly physically coupled to the first arm and the second arm. 17. The WHUD of claim 1, wherein the front frame is indirectly physically coupled to the first arm via a first intermediary coupler, and the front frame is indirectly physically coupled to the second arm via a second intermediary coupler. 18. 
A wearable heads-up display (“WHUD”) comprising: a support structure to be worn on a head of a user, the support structure comprising a first arm to be positioned on a first side of the head of the user, a second arm to be positioned on a second side of the head of the user opposite the first side of the head of the user, and a front frame to be positioned on a front side of the head of the user, the front frame physically coupled to the first arm and the second arm; an optical combiner carried by the front frame to be positioned within a field of view of an eye of the user; a light engine carried by the front frame, the light engine positioned and oriented to output display light into a periphery of the optical combiner; a non-transitory processor-readable medium carried by the first arm; at least one processor carried by the first arm, the at least one processor communicatively coupled to the non-transitory processor readable medium and the light engine; a battery carried by the second arm; and at least one connector which electrically couples the battery to the light engine, the non-transitory processor-readable medium, and the at least one processor, wherein the optical combiner is positioned and oriented to direct the display light towards the eye of the user. 19. The WHUD of claim 18, further comprising a power supply circuit carried by the first arm, wherein the at least one connector directly electrically couples the battery to the power supply circuit, and the power supply circuit is electrically coupled to the at least one processor, the non-transitory processor readable medium, and the light engine. 20. The WHUD of claim 18, wherein the at least one connector directly electrically couples the battery to the at least one processor, the non-transitory processor readable medium, and the light engine. 21. 
The WHUD of claim 18, further comprising a wireless communication module operable to provide wireless communications with one or more other electronic devices, wherein at least a portion of the wireless communication module is positioned on the support structure relative to the light engine. 22. The WHUD of claim 21 wherein the wireless communication module comprises a wireless receiver communicatively coupled to the light engine, wherein the at least one connector electrically couples the battery to the wireless receiver. 23. The WHUD of claim 18, the first arm to be positioned on a right side of the head of the user and the second arm to be positioned on a left side of the head of a user. 24. The WHUD of claim 18, the first arm to be positioned on a left side of the head of the user and the second arm to be positioned on a right side of the head of the user. 25. The WHUD of claim 18 wherein the light engine comprises a projector, a scanning laser projector, a microdisplay, or a white-light source. 26. The WHUD of claim 18 wherein the optical combiner comprises a lightguide and an out-coupler, wherein the lightguide is positioned and oriented to receive the display light from the light engine and direct the display light to the out-coupler, and the out-coupler is positioned and oriented to redirect the display light towards the eye of the user. 27. The WHUD of claim 18 wherein the optical combiner is carried by a lens carried by the front frame of the support structure. 28. The WHUD of claim 18, wherein the at least one connector is carried by the front frame. 29. The WHUD of claim 18, the at least one connector to be positioned behind a head of the user. 30. The WHUD of claim 18, wherein the front frame is directly physically coupled to the first arm and the second arm. 31. 
The WHUD of claim 18, wherein the front frame is indirectly physically coupled to the first arm via a first intermediary coupler, and the front frame is indirectly physically coupled to the second arm via a second intermediary coupler.
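The weight-balancing idea in the abstract and claims (electronics fill one arm, the battery in the other arm is sized to match) reduces to simple arithmetic. The sketch below is purely illustrative; the component names and gram values are invented examples, not figures from the patent.

```python
# Illustrative arithmetic for the weight-balancing idea: electronics
# (light engine, processor, memory, ...) fill the first arm, and the
# battery in the second arm is sized so both arms weigh about the same.
# All masses below are made-up example values in grams.

first_arm_electronics_g = {"light_engine": 9.0, "processor": 3.5,
                           "memory": 1.0, "misc_pcb": 4.5}

def balancing_battery_mass(electronics_g):
    """Battery mass (g) for the second arm that matches the total
    electronics mass in the first arm (arm housings assumed identical,
    so their masses cancel out of the balance)."""
    return sum(electronics_g.values())

battery_g = balancing_battery_mass(first_arm_electronics_g)
assert battery_g == 18.0  # 9.0 + 3.5 + 1.0 + 4.5
```

A larger light engine or processor would call for a proportionally larger (heavier) battery cell in the opposite arm, which is the adjustment the abstract describes.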
Systems, devices, and methods for reducing bulk and balancing weight in wearable heads-up displays are described. Bulk can be reduced in a wearable heads-up display by positioning the electronics in a first arm of the wearable heads-up display and a battery in a second arm, thus reducing the amount of extraneous housing that would otherwise be required to house multiple batteries or electronic components in both arms. Weight of a wearable heads-up display can be balanced by selecting appropriately sized and weighted electronics for the first arm, and by adjusting the size, and therefore the weight, of the battery in the second arm. Densely filling the first arm with electronics can result in the first arm and the second arm having similar weight.1. A wearable heads-up display (“WHUD”) comprising: a support structure to be worn on a head of a user, the support structure comprising a first arm to be positioned on a first side of the head of the user, a second arm to be positioned on a second side of the head of the user opposite the first side of the head of the user, and a front frame to be positioned on a front side of the head of the user, the front frame physically coupled to the first arm and the second arm; an optical combiner carried by the front frame to be positioned within a field of view of an eye of the user; a light engine carried by the first arm, the light engine positioned and oriented to output display light to the optical combiner; a battery carried by the second arm; and at least one connector which electrically couples the battery to the light engine, wherein the optical combiner is positioned and oriented to direct the display light towards the eye of the user. 2. 
The WHUD of claim 1, further comprising at least one processor carried by the first arm and a non-transitory processor readable medium carried by the first arm, wherein the at least one processor is communicatively coupled to the non-transitory processor readable medium and the light engine, and the at least one connector electrically couples the battery to the at least one processor and to the non-transitory processor readable storage medium. 3. The WHUD of claim 2, further comprising a power supply circuit carried by the first arm, wherein the at least one connector directly electrically couples the battery to the power supply circuit, and the power supply circuit is electrically coupled to the at least one processor, the non-transitory processor readable medium, and the light engine. 4. The WHUD of claim 2, wherein the at least one connector directly electrically couples the battery to the at least one processor, the non-transitory processor readable medium, and the light engine. 5. The WHUD of claim 1, further comprising a wireless communication module operable to provide wireless communications with one or more other electronic devices, wherein at least a portion of the wireless communication module is positioned on the support structure relative to the light engine. 6. The WHUD of claim 5 wherein the wireless communication module comprises a wireless receiver communicatively coupled to the light engine, wherein the at least one connector electrically couples the battery to the wireless receiver. 7. The WHUD of claim 1, the first arm to be positioned on a right side of the head of the user and the second arm to be positioned on a left side of the head of a user. 8. The WHUD of claim 1, the first arm to be positioned on a left side of the head of the user and the second arm to be positioned on a right side of the head of the user. 9. 
The WHUD of claim 1 wherein the light engine comprises a projector, a scanning laser projector, a microdisplay, or a white-light source. 10. The WHUD of claim 1 wherein the optical combiner comprises: a lightguide, at least one hologram, at least one prism, a diffraction grating, at least one light reflector, or at least one light refractor positioned and oriented to redirect the display light towards the eye of the user. 11. The WHUD of claim 1, wherein the optical combiner is carried by a lens carried by the front frame of the support structure. 12. The WHUD of claim 1 wherein the light engine includes a scanning laser projector and the optical combiner includes at least one hologram, wherein the scanning laser projector is positioned and oriented to project laser light onto the at least one hologram, and the at least one hologram is positioned and oriented to redirect the laser light towards an eye of the user. 13. The WHUD of claim 1, further comprising a light redirector, wherein: the light redirector is positioned and oriented to receive the display light output by the light engine and to redirect the display light into a periphery of the optical combiner; and the optical combiner comprises a lightguide and an out-coupler, wherein the lightguide is positioned and oriented to receive the display light from the light redirector and direct the display light to the out-coupler, and the out-coupler is positioned and oriented to redirect the display light towards the eye of the user. 14. The WHUD of claim 1, wherein the at least one connector is carried by the front frame. 15. The WHUD of claim 1 the at least one connector to be positioned behind a head of the user. 16. The WHUD of claim 1, wherein the front frame is directly physically coupled to the first arm and the second arm. 17. 
The WHUD of claim 1, wherein the front frame is indirectly physically coupled to the first arm via a first intermediary coupler, and the front frame is indirectly physically coupled to the second arm via a second intermediary coupler. 18. A wearable heads-up display (“WHUD”) comprising: a support structure to be worn on a head of a user, the support structure comprising a first arm to be positioned on a first side of the head of the user, a second arm to be positioned on a second side of the head of the user opposite the first side of the head of the user, and a front frame to be positioned on a front side of the head of the user, the front frame physically coupled to the first arm and the second arm; an optical combiner carried by the front frame to be positioned within a field of view of an eye of the user; a light engine carried by the front frame, the light engine positioned and oriented to output display light into a periphery of the optical combiner; a non-transitory processor-readable medium carried by the first arm; at least one processor carried by the first arm, the at least one processor communicatively coupled to the non-transitory processor readable medium and the light engine; a battery carried by the second arm; and at least one connector which electrically couples the battery to the light engine, the non-transitory processor-readable medium, and the at least one processor, wherein the optical combiner is positioned and oriented to direct the display light towards the eye of the user. 19. The WHUD of claim 18, further comprising a power supply circuit carried by the first arm, wherein the at least one connector directly electrically couples the battery to the power supply circuit, and the power supply circuit is electrically coupled to the at least one processor, the non-transitory processor readable medium, and the light engine. 20. 
The WHUD of claim 18, wherein the at least one connector directly electrically couples the battery to the at least one processor, the non-transitory processor readable medium, and the light engine. 21. The WHUD of claim 18, further comprising a wireless communication module operable to provide wireless communications with one or more other electronic devices, wherein at least a portion of the wireless communication module is positioned on the support structure relative to the light engine. 22. The WHUD of claim 21 wherein the wireless communication module comprises a wireless receiver communicatively coupled to the light engine, wherein the at least one connector electrically couples the battery to the wireless receiver. 23. The WHUD of claim 18, the first arm to be positioned on a right side of the head of the user and the second arm to be positioned on a left side of the head of a user. 24. The WHUD of claim 18, the first arm to be positioned on a left side of the head of the user and the second arm to be positioned on a right side of the head of the user. 25. The WHUD of claim 18 wherein the light engine comprises a projector, a scanning laser projector, a microdisplay, or a white-light source. 26. The WHUD of claim 18 wherein the optical combiner comprises a lightguide and an out-coupler, wherein the lightguide is positioned and oriented to receive the display light from the light engine and direct the display light to the out-coupler, and the out-coupler is positioned and oriented to redirect the display light towards the eye of the user. 27. The WHUD of claim 18 wherein the optical combiner is carried by a lens carried by the front frame of the support structure. 28. The WHUD of claim 18, wherein the at least one connector is carried by the front frame. 29. The WHUD of claim 18, the at least one connector to be positioned behind a head of the user. 30. 
The WHUD of claim 18, wherein the front frame is directly physically coupled to the first arm and the second arm. 31. The WHUD of claim 18, wherein the front frame is indirectly physically coupled to the first arm via a first intermediary coupler, and the front frame is indirectly physically coupled to the second arm via a second intermediary coupler.
2,600
10,972
10,972
14,426,131
2,622
An upper-arm computer pointing apparatus, comprising: at least one orientation measurer, deployable on at least one area of an upper arm of a user, configured to measure orientation of the upper arm, at least one pressure meter, deployable on at least one area of the upper arm, configured to measure pressure applied by muscle of the upper arm, a computer processor, associated with the orientation measurer and pressure meter, configured to derive control data from the measured orientation and pressure, and a data transmitter, associated with the computer processor, configured to transmit the control data to a computing device.
1-63. (canceled) 64. An upper-arm computer pointing apparatus, comprising: at least one orientation measurer, deployable on at least one area of an upper arm of a user, configured to measure orientation of the upper arm; a computer processor, associated with said orientation measurer, configured to derive control data from the measured orientation; and a data transmitter, associated with said computer processor, configured to transmit the control data to a computing device. 65. The apparatus of claim 64, further comprising at least one pressure meter, deployable on at least one area of an upper arm of the user, in communication with said computer processor, configured to measure pressure applied by muscle of the upper arm, wherein said computer processor is configured to derive the control data from the measured orientation and the measured pressure. 66. The apparatus of claim 64, wherein said data transmitter is further configured to transmit the control data to the computing device over a wireless connection. 67. The apparatus of claim 64, further comprising a data convertor, implemented on said computing device, configured to receive the transmitted control data and convert the transmitted control data into mouse protocol compliant control data. 68. The apparatus of claim 64, wherein said computer processor is further configured to derive the control data as mouse protocol compliant control data. 69. The apparatus of claim 65, wherein said pressure meters comprise at least two pressure meters deployable on the upper arm, over opposite sides of the muscle, and said computer processor is further configured to compare a measurement of a first one of said pressure meters with a measurement of a second one of said pressure meters, for deriving the control data. 70. 
The apparatus of claim 64, wherein said orientation measurers comprise at least two orientation measurers deployable on the upper arm, and said computer processor is further configured to compare a measurement of a first one of said orientation measurers with a measurement of a second one of said orientation measurers, for deriving the control data. 71. The apparatus of claim 64, wherein at least one of said orientation measurers comprises a GPS (Global Positioning System) receiver. 72. The apparatus of claim 64, wherein at least one of said orientation measurers comprises an IMU (Inertial Measurement Unit). 73. The apparatus of claim 65, wherein at least one of said pressure meters comprises an FSR (Force Sensing Resistor). 74. The apparatus of claim 65, where said computer processor is further configured to translate a pressure change measured by at least one of said pressure meters into clicking operation data included in the derived control data. 75. The apparatus of claim 64, where said computer processor is further configured to translate an angular orientation change measured by at least one of said orientation measurers into clicking operation data included in the derived control data. 76. The apparatus of claim 64, where said computer processor is further configured to translate a movement in a predefined direction, measured by at least one of said orientation measurers, into clicking operation data included in the derived control data. 77. The apparatus of claim 64, where said computer processor is further configured to translate an angular orientation change measured by at least one of said orientation measurers, into mouse speed change data included in the derived control data. 78. The apparatus of claim 64, wherein at least one of said orientation measurers is further configured to measure angular orientation of the upper arm. 79. 
The apparatus of claim 64, wherein at least one of said orientation measurers is further configured to measure bi-dimensional positional orientation of the upper arm. 80. The apparatus of claim 64, wherein at least one of said orientation measurers is further configured to measure tri-dimensional positional orientation of the upper arm. 81. An upper-arm computer pointing apparatus, comprising: at least one pressure meter, deployable on at least one area of an upper arm of a user, configured to measure pressure applied by muscle of the upper arm; a computer processor, associated with said pressure meter, configured to derive control data from the measured pressure; and a data transmitter, associated with said computer processor, configured to transmit the control data to a computing device. 82. A method for upper-arm computer pointing, comprising: measuring orientation of an upper arm of a user, on at least one area of the upper arm; deriving control data from said measured orientation; and transmitting the control data to a computing device. 83. A method for upper-arm computer pointing, comprising: measuring pressure applied by muscle of the upper arm, on at least one area of the upper arm; deriving control data from said measured pressure; and transmitting the control data to a computing device.
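Claims 74-77 describe translating measured orientation changes into pointer movement and pressure changes into clicking operations. A hedged sketch of that derivation step, assuming a simple linear gain from arm rotation to pointer pixels and a pressure-jump threshold for clicks (all gains, thresholds, and names are illustrative, not from the application):

```python
# Sketch of the control-data derivation in claims 64-77: angular
# orientation changes of the upper arm map to pointer movement (dx, dy),
# and a pressure jump measured over the muscle maps to a click.
# GAIN and CLICK_DELTA are invented example values.

GAIN = 40.0        # pixels per degree of arm rotation (example value)
CLICK_DELTA = 0.3  # normalized pressure jump treated as a click

def derive_control_data(prev_angles, angles, prev_pressure, pressure):
    """Map (yaw, pitch) change to (dx, dy) and pressure change to click."""
    dx = GAIN * (angles[0] - prev_angles[0])
    dy = GAIN * (angles[1] - prev_angles[1])
    click = (pressure - prev_pressure) >= CLICK_DELTA
    return {"dx": dx, "dy": dy, "click": click}

data = derive_control_data((10.0, 5.0), (10.5, 4.0), 0.10, 0.55)
assert data["dx"] == 20.0 and data["dy"] == -40.0 and data["click"]
```

The resulting dictionary stands in for the "control data" the claimed transmitter would send to the computing device, where a converter (claim 67) could repackage it as mouse-protocol-compliant data.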
An upper-arm computer pointing apparatus, comprising: at least one orientation measurer, deployable on at least one area of an upper arm of a user, configured to measure orientation of the upper arm, at least one pressure meter, deployable on at least one area of the upper arm, configured to measure pressure applied by muscle of the upper arm, a computer processor, associated with the orientation measurer and pressure meter, configured to derive control data from the measured orientation and pressure, and a data transmitter, associated with the computer processor, configured to transmit the control data to a computing device.1-63. (canceled) 64. An upper-arm computer pointing apparatus, comprising: at least one orientation measurer, deployable on at least one area of an upper arm of a user, configured to measure orientation of the upper arm; a computer processor, associated with said orientation measurer, configured to derive control data from the measured orientation; and a data transmitter, associated with said computer processor, configured to transmit the control data to a computing device. 65. The apparatus of claim 64, further comprising at least one pressure meter, deployable on at least one area of an upper arm of the user, in communication with said computer processor, configured to measure pressure applied by muscle of the upper arm, wherein said computer processor is configured to derive the control data from the measured orientation and the measured pressure. 66. The apparatus of claim 64, wherein said data transmitter is further configured to transmit the control data to the computing device over a wireless connection. 67. The apparatus of claim 64, further comprising a data convertor, implemented on said computing device, configured to receive the transmitted control data and convert the transmitted control data into mouse protocol compliant control data. 68. 
The apparatus of claim 64, wherein said computer processor is further configured to derive the control data as mouse protocol compliant control data. 69. The apparatus of claim 65, wherein said pressure meters comprise at least two pressure meters deployable on the upper arm, over opposite sides of the muscle, and said computer processor is further configured to compare a measurement of a first one of said pressure meters with a measurement of a second one of said pressure meters, for deriving the control data. 70. The apparatus of claim 64, wherein said orientation measurers comprise at least two orientation measurers deployable on the upper arm, and said computer processor is further configured to compare a measurement of a first one of said orientation measurers with a measurement of a second one of said orientation measurers, for deriving the control data. 71. The apparatus of claim 64, wherein at least one of said orientation measurers comprises a GPS (Global Positioning System) receiver. 72. The apparatus of claim 64, wherein at least one of said orientation measurers comprises an IMU (Inertial Measurement Unit). 73. The apparatus of claim 65, wherein at least one of said pressure meters comprises an FSR (Force Sensing Resistor). 74. The apparatus of claim 65, where said computer processor is further configured to translate a pressure change measured by at least one of said pressure meters into clicking operation data included in the derived control data. 75. The apparatus of claim 64, where said computer processor is further configured to translate an angular orientation change measured by at least one of said orientation measurers into clicking operation data included in the derived control data. 76. The apparatus of claim 64, where said computer processor is further configured to translate a movement in a predefined direction, measured by at least one of said orientation measurers, into clicking operation data included in the derived control data. 77. 
The apparatus of claim 64, where said computer processor is further configured to translate an angular orientation change measured by at least one of said orientation measurers, into mouse speed change data included in the derived control data. 78. The apparatus of claim 64, wherein at least one of said orientation measurers is further configured to measure angular orientation of the upper arm. 79. The apparatus of claim 64, wherein at least one of said orientation measurers is further configured to measure bi-dimensional positional orientation of the upper arm. 80. The apparatus of claim 64, wherein at least one of said orientation measurers is further configured to measure tri-dimensional positional orientation of the upper arm. 81. An upper-arm computer pointing apparatus, comprising: at least one pressure meter, deployable on at least one area of an upper arm of a user, configured to measure pressure applied by muscle of the upper arm; a computer processor, associated with said pressure meter, configured to derive control data from the measured pressure; and a data transmitter, associated with said computer processor, configured to transmit the control data to a computing device. 82. A method for upper-arm computer pointing, comprising: measuring orientation of an upper arm of a user, on at least one area of the upper arm; deriving control data from said measured orientation; and transmitting the control data to a computing device. 83. A method for upper-arm computer pointing, comprising: measuring pressure applied by muscle of the upper arm, on at least one area of the upper arm; deriving control data from said measured pressure; and transmitting the control data to a computing device.
2,600
10,973
10,973
15,068,793
2,684
A crossbore detection system. The system is located in a downhole tool proximate a drill bit. The system comprises circuitry sensitive to a subsurface environment and a sensor that detects changes in the circuitry. The sensor detects changes in the circuitry that indicate that the drill bit has struck an underground pipe. The sensor may detect a series of electromagnetic signals indicative of the strike or may detect changes to an impedance bridge at a capacitive sensor.
1. A crossbore detection system comprising: a drill bit; a first antenna configured to transmit a series of signals; a second antenna configured to receive the series of signals transmitted by the first antenna; and a sensor to detect changes in the series of signals received by the second antenna indicative of proximity between the drill bit and an underground anomaly. 2. The crossbore detection system of claim 1 wherein a frequency of the series of signals is between about 1 gigahertz and 8 gigahertz. 3. The crossbore detection system of claim 1 further comprising a transmitter capable of receiving signals from the sensor and transmitting signals to an above ground receiver. 4. The crossbore detection system of claim 1 further comprising a housing connected to the drill bit wherein the second antenna is disposed on the housing. 5. The crossbore detection system of claim 4 wherein the first antenna is disposed on the housing. 6. The crossbore detection system of claim 1 further comprising an accelerometer. 7. The crossbore detection system of claim 1 wherein the second antenna comprises a front face, wherein the front face of the second antenna is substantially parallel with a cutting blade of the drill bit. 8. The crossbore detection system of claim 1 wherein the underground anomaly comprises an underground pipe. 9. The crossbore detection system of claim 1 wherein the underground anomaly comprises a void space. 10. A system comprising: a horizontal directional drilling unit; a drill string coupled to the horizontal directional drilling unit; an above ground receiver; and the crossbore detection system of claim 1 located on a distal end of the drill string. 11. The system of claim 10 wherein the above ground receiver is located at the horizontal directional drilling unit. 12. 
A system comprising: a horizontal directional drill; a drill string rotatable by the horizontal directional drill; a downhole tool coupled to a distal end of the drill string, wherein the downhole tool comprises: a drill bit; and a crossbore detection system comprising: circuitry disposed on the downhole tool and sensitive to changes in the subsurface; and a sensor capable of detecting variations in the circuitry caused by the drill bit crossing a path of an underground pipe. 13. The system of claim 12 wherein the circuitry comprises a first electromagnetic transmitting antenna and a second electromagnetic receiving antenna. 14. The system of claim 13 wherein the first electromagnetic transmitting antenna is disposed on the drill bit. 15. The system of claim 13 further comprising an accelerometer disposed within the downhole tool. 16. The system of claim 12 further comprising a transmitter disposed within the downhole tool, wherein the transmitter emits a signal when the sensor detects the variations in the circuitry. 17. The system of claim 12 wherein the circuitry comprises a plurality of electrodes. 18. The system of claim 17 wherein the sensor detects an induced voltage between at least two of the plurality of electrodes. 19. A method for detecting a crossbore in horizontal directional drilling operations comprising: drilling a borehole with a downhole tool comprising a first antenna, a second antenna, a sensor and a drill bit; transmitting a series of signals from the first antenna to the second antenna; comparing signals received at the second antenna to a reference signal indicative of a crossbore; and generating a warning if the signal received at the second antenna indicates a crossbore. 20. The method of claim 19 further comprising storing received signal data in the downhole tool and uploading the signal data at a port. 21. The method of claim 19 wherein the first antenna is disposed on the drill bit. 22. 
The method of claim 19 wherein the series of signals comprises a frequency between about 1 gigahertz and about 5 gigahertz. 23. The method of claim 19 further comprising generating the warning at a drilling machine.
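The claim-19 method compares signals received at the second antenna against a reference signal indicative of a crossbore and generates a warning on a match. The claims do not specify how the comparison is performed; the sketch below assumes a normalized-correlation score against a reference signature, with an illustrative threshold:

```python
# Hypothetical sketch of the claim-19 comparison step: score a received
# signal series against a reference signature and warn on a close match.
# The correlation measure and the 0.9 threshold are illustrative
# assumptions, not part of the claimed method.

def correlate(received, reference):
    """Normalized dot product between a received series and a reference."""
    num = sum(r * s for r, s in zip(received, reference))
    den = (sum(r * r for r in received) ** 0.5) * \
          (sum(s * s for s in reference) ** 0.5)
    return num / den if den else 0.0


def crossbore_warning(received, reference, threshold=0.9):
    """Return True when the received series matches the reference
    signature indicative of a crossbore."""
    return correlate(received, reference) >= threshold


reference = [0.0, 1.0, 0.0, -1.0]    # assumed signature of a pipe strike
strike = [0.1, 0.9, 0.0, -1.1]       # closely tracks the reference
clear = [1.0, 1.0, 1.0, 1.0]         # no resemblance to the signature
```

In practice the reference signature would be characterized empirically for the antenna geometry and signal frequencies the system uses; the comparison itself reduces to a thresholded similarity score as above.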
2,600
10,974
10,974
15,895,115
2,616
A tile-based graphics system has a rendering space sub-divided into a plurality of tiles which are to be processed. Graphics data items, such as parameters or texels, are fetched into a cache for use in processing one of the tiles. Indicators are determined for the graphics data items, whereby the indicator for a graphics data item indicates the number of tiles with which that graphics data item is associated. The graphics data items are evicted from the cache in accordance with the indicators of the graphics data items. For example, the indicator for a graphics data item may be a count of the number of tiles with which that graphics data item is associated, whereby the graphics data item(s) with the lowest count(s) is (are) evicted from the cache.
1. A method of processing data in a graphics system wherein graphics data items are associated with tiles within the graphics system, the method comprising: storing graphics data items in a cache, wherein each of the graphics data items in the cache is associated with an indicator which is indicative of a number of tiles with which the graphics data item is associated; and determining an order in which tiles are to be processed based on the indicators of the graphics data items in the cache. 2. The method of claim 1, further comprising evicting a graphics data item from the cache in dependence on the indicator associated with the graphics data item. 3. The method of claim 1, further comprising determining the indicators for the graphics data items stored in the cache. 4. The method of claim 1, further comprising processing graphics data items from the cache in accordance with the determined order. 5. The method of claim 1, wherein for each of the graphics data items, the indicator is a count of the number of tiles with which that graphics data item is associated. 6. The method of claim 5, further comprising decrementing the count for a particular graphics data item when a tile with which the particular graphics data item is associated has been processed. 7. The method of claim 5, further comprising evicting, from the cache, the graphics data item which has the lowest count. 8. The method of claim 5, wherein the count has a predetermined maximum value. 9. 
The method of claim 1, wherein for each of the graphics data items, the indicator for that graphics data item indicates one of four conditions, the four conditions being: (i) that the number of tiles with which that graphics data item is associated is equal to one, (ii) that the number of tiles with which that graphics data item is associated is equal to two, (iii) that the number of tiles with which that graphics data item is associated is equal to three or four, and (iv) that the number of tiles with that graphics data item is associated is greater than four. 10. The method of claim 1, wherein the order in which tiles are to be processed is determined further based on a determination of which of the graphics data items are present in each of the tiles. 11. A graphics system comprising a cache configured to store graphics data items which are associated with tiles within the graphics system, wherein the graphics system is configured to: store graphics data items in the cache, wherein each of the graphics data items in the cache is associated with an indicator which is indicative of a number of tiles with which the graphics data item is associated; and determine an order in which tiles are to be processed based on the indicators of the graphics data items in the cache. 12. The graphics system of claim 11, wherein the graphics system is further configured to evict a graphics data item from the cache in dependence on the indicator associated with the graphics data item. 13. The graphics system of claim 11, further configured to determine the indicators for the graphics data items. 14. The graphics system of claim 11, further comprising processing logic configured to process graphics data items from the cache in accordance with the determined order. 15. The graphics system of claim 11, wherein the graphics data items are stored in graphics data sets in a graphics data memory, each of the graphics data sets comprising one or more of the graphics data items. 16. 
The graphics system of claim 15, wherein a particular graphics data item is associated with a particular tile if a graphics data item in the graphics data set comprising the particular graphics data item is to be used to process the particular tile, and wherein for each of the graphics data items, the indicator for that graphics data item is indicative of the number of tiles which are processed using a graphics data item in the graphics data set comprising that graphics data item. 17. The graphics system of claim 15, wherein the graphics data sets are textures and the graphics data memory is a texture memory. 18. The graphics system of claim 11, wherein a particular graphics data item is associated with a particular tile if the particular graphics data item is to be used to process the particular tile, and wherein for each of the graphics data items, the indicator for that graphics data item is indicative of the number of tiles which are processed using that graphics data item. 19. The graphics system of claim 11, wherein the graphics system is configured to determine the order in which tiles are to be processed further based on a determination of which of the graphics data items are present in each of the tiles. 20. A non-transitory computer readable storage medium having stored thereon a computer readable description of an integrated circuit that, when processed, causes a system to generate a processing unit, said processing unit being configured to process data in a graphics system wherein graphics data items are associated with tiles within the graphics system, said processing unit being further configured to: store graphics data items in a cache, wherein each of the graphics data items in the cache is associated with an indicator which is indicative of a number of tiles with which the graphics data item is associated; and determine an order in which tiles are to be processed based on the indicators of the graphics data items in the cache.
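The abstract and claims above describe a cache in which each graphics data item carries an indicator counting the tiles it is associated with, the item with the lowest count is evicted, and counts are decremented as tiles finish (claims 5-7). A minimal sketch of that policy, assuming an illustrative class name and capacity:

```python
# Minimal sketch of the tile-count eviction policy described in the
# abstract: each cached item carries a count of the tiles it is
# associated with; on a miss with a full cache, the item with the lowest
# count is evicted; counts are decremented when a tile is processed.
# The class name and capacity are illustrative assumptions.

class TileCountCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}            # item -> remaining-tile count

    def insert(self, item, tile_count):
        if item in self.counts:
            return
        if len(self.counts) >= self.capacity:
            # Claim 7: evict the item with the lowest remaining count.
            victim = min(self.counts, key=self.counts.get)
            del self.counts[victim]
        self.counts[item] = tile_count

    def tile_processed(self, items_in_tile):
        # Claim 6: decrement the count of every item the finished tile used.
        for item in items_in_tile:
            if item in self.counts and self.counts[item] > 0:
                self.counts[item] -= 1


cache = TileCountCache(capacity=2)
cache.insert("texel_a", tile_count=4)
cache.insert("texel_b", tile_count=1)
cache.insert("texel_c", tile_count=3)   # evicts texel_b (lowest count)
```

The same counts can drive the claimed tile-ordering decision: processing first the tiles whose cached items have low counts frees cache lines sooner.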
2,600
10,975
10,975
16,256,320
2,626
Embodiments of the invention are directed to input devices configured for use with computing devices. The present invention relates to an input device configured with a plurality of input members grouped into contoured bowl shapes on a portion of the input device. The input device may also be configured for use with multiple hand positions and multiple profiles based on the hand positions. The input device may enable switching between user-programmable profiles, and may include sensory feedback indicating the profiles active on the input device.
1.-20. (canceled) 21. A computer input device comprising: a housing including: a side portion; and a plurality of buttons disposed on the side portion, wherein each of the plurality of buttons includes a top surface, wherein the top surface of each of the plurality of buttons is contoured such that the plurality of buttons forms a bowl shape with a common center. 22. The computer input device of claim 21, wherein the top surface of each of the plurality of buttons is of a different shape. 23. The computer input device of claim 21 further comprising: a second plurality of buttons disposed on the side portion, wherein the second plurality of buttons is adjacent to the plurality of buttons, wherein each of the second plurality of buttons includes a top surface, and wherein the top surface of each of the second plurality of buttons is contoured such that the second plurality of buttons forms a bowl shape. 24. The computer input device of claim 23 wherein the top surface of each of the second plurality of buttons is of a different shape. 25. The computer input device of claim 23 further comprising: a mode switch button including a first control state and a second control state; and one or more processors to control: the mode switch button; the plurality of buttons; and the second plurality of buttons, wherein the one or more processors generates a set of functions, wherein the set of functions are associated with the plurality of buttons when the mode switch button is set to the first control state, and wherein the set of functions are associated with the second plurality of buttons when the mode switch button is set to the second control state. 26. 
The computer input device of claim 21 further comprising: a profile selection button to select between at least a first profile and a second profile, wherein the first profile defines a first set of functions, and wherein the second profile defines a second set of functions; and one or more processors to control: the profile selection button; the plurality of buttons; and the second plurality of buttons, wherein the first profile is associated with the plurality of buttons when the profile selection button is set to the first profile, and wherein the second profile is associated with the second plurality of buttons when the profile selection button is set to the second profile. 27. The computer input device of claim 26 further comprising a scroll wheel controlled by the one or more processors and configured to perform at least a first function and a second function, wherein the scroll wheel is configured to perform the first function in response to the profile selection button being set to the first profile, and wherein the scroll wheel is configured to perform the second function in response to the profile selection button being set to the second profile. 28. The computer input device of claim 26 further comprising: one or more processors; and a light-emitting element, controlled by the one or more processors and disposed on the housing, the light-emitting element configured to emit any of a plurality of colored lights, wherein the one or more processors causes the light-emitting element to emit one of the plurality of colored lights based on the selected profile. 29. The computer input device of claim 28 wherein the light-emitting element provides back-lighting for the plurality of buttons. 30. The computer input device of claim 21 wherein the plurality of buttons includes more than six buttons. 31. The computer input device of claim 21 wherein the plurality of buttons includes less than six buttons. 32. 
The computer input device of claim 21 wherein the side portion of the housing with the plurality of buttons is concave. 33. The computer input device of claim 21 wherein the computer input device is a computer mouse. 34. A computer mouse comprising: a housing including: a top portion configured to receive a user's hand; a bottom portion configured to move along a work surface; and a side portion; a first plurality of buttons disposed on the side portion; and a second plurality of buttons disposed on the side portion, wherein each button of the first plurality of buttons and each button of the second plurality of buttons includes a top surface, wherein the top surface of each of the first plurality of buttons is contoured such that the first plurality of buttons forms a bowl shape having a first common center, and wherein the top surface of each of the second plurality of buttons is contoured such that the second plurality of buttons forms a bowl shape having a second common center. 35. The computer mouse of claim 34 further comprising: a mode switch button including a first control state and a second control state; and one or more processors to control: the mode switch button; the first plurality of buttons; and the second plurality of buttons, wherein the one or more processors generates a set of functions, wherein the set of functions are associated with the first plurality of buttons when the mode switch button is set to the first control state, and wherein the set of functions are associated with the second plurality of buttons when the mode switch button is set to the second control state. 36. The computer mouse of claim 35 wherein the mode switch button includes a third control state, and wherein the set of functions are associated with both the first plurality of buttons and the second plurality of buttons when the mode switch button is set to the third control state. 37. 
The computer mouse of claim 34 further comprising: a profile selection button to select between at least a first profile and a second profile, wherein the first profile defines a first set of functions, and wherein the second profile defines a second set of functions; and one or more processors to control: the profile selection button; and the plurality of buttons, wherein the first profile is associated with the first plurality of buttons when the profile selection button is set to the first profile, and wherein the second profile is associated with the second plurality of buttons when the profile selection button is set to the second profile. 38. The computer mouse of claim 34 further comprising: a light-emitting element, controlled by the one or more processors and disposed on the housing, to emit any of a plurality of colored lights, wherein the one or more processors causes the light-emitting element to emit one of the plurality of colored lights based on the selected profile. 39. The computer mouse of claim 38 wherein the light-emitting element provides back-lighting for the plurality of buttons. 40. The computer mouse of claim 34 wherein the side portion of the housing with the first and second plurality of buttons is concave.
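The profile-selection claims above describe a button that switches which set of functions is bound to the device's button groups. A minimal sketch of that rebinding behavior, assuming illustrative profile names and functions (the claims leave the actual function sets user-programmable):

```python
# Hedged sketch of the profile-selection behavior in the claims: a
# profile defines the set of functions bound to the side buttons, and
# changing the selected profile rebinds them. Profile names and the
# functions themselves are illustrative assumptions.

PROFILES = {
    "profile_1": {"button_1": "copy", "button_2": "paste"},
    "profile_2": {"button_1": "undo", "button_2": "redo"},
}


class InputDevice:
    def __init__(self):
        self.active_profile = "profile_1"

    def select_profile(self, name):
        # The claimed profile selection button cycles this state.
        if name in PROFILES:
            self.active_profile = name

    def press(self, button):
        """Return the function the active profile binds to this button."""
        return PROFILES[self.active_profile].get(button)


device = InputDevice()
first = device.press("button_1")     # "copy" under the initial profile
device.select_profile("profile_2")   # rebinds the same physical buttons
```

The claimed light-emitting element would hang off `select_profile`, emitting a per-profile color so the user can see which binding set is active.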
Embodiments of the invention are directed to input devices configured for use with computing devices. The present invention relates to input device configured with a plurality of input members grouped into contoured-shaped bowls on a portion of the input devices. The input device may also be configured for use with multiple hand positions and multiple profiles based on the hand positions. The input device may enable switching between user-programmable profiles, and may include sensory feedback indicating the profiles active on the input device.1.-20. (canceled) 21. A computer input device comprising: a housing including: a side portion; and a plurality of buttons disposed on the side portion, wherein each of the plurality of buttons includes a top surface, wherein the top surface of each of the plurality of buttons is contoured such that the plurality of buttons forms a bowl shape with a common center. 22. The computer input device of claim 21, wherein the top surface of each of the plurality of buttons is of a different shape. 23. The computer input device of claim 21 further comprising: a second plurality of buttons disposed on the side portion, wherein the second plurality of buttons is adjacent to the plurality of buttons, wherein each of the second plurality of buttons includes a top surface, and wherein the top surface of each of the second plurality of buttons is contoured such that the second plurality of buttons forms a bowl shape. 24. The computer input device of claim 23 wherein the top surface of each of the second plurality of buttons is of a different shape. 25. 
The computer input device of claim 23 further comprising: a mode switch button including a first control state and a second control state; and one or more processors to control: the mode switch button; the plurality of buttons; and the second plurality of buttons, wherein the one or more processors generates a set of functions, wherein the set of functions are associated with the plurality of buttons when the mode switch button is set to the first control state, and wherein the set of functions are associated with the second plurality of buttons when the mode switch button is set to the second control state. 26. The computer input device of claim 21 further comprising: a profile selection button to select between at least a first profile and a second profile, wherein the first profile defines a first set of functions, and wherein the second profile defines a second set of functions; and one or more processors to control: the profile selection button; the plurality of buttons; and the second plurality of buttons, wherein the first profile is associated with the plurality of buttons when the profile selection button is set to the first profile, and wherein the first profile is associated with the second plurality of buttons when the profile selection button is set to the second profile. 27. The computer input device of claim 26 further comprising a scroll wheel controlled by the one or more processors and configured to perform at least a first function and a second function, wherein the scroll wheel is configured for perform the first function in response to the profile selection button being set to the first profile, and wherein the scroll wheel is configured for perform the second function in response to the profile selection button being set to the second profile. 28. 
The computer input device of claim 26 further comprising: one or more processors; and a light-emitting element, controlled by the one or more processors and disposed on the housing, the light-emitting element configured to emit any of a plurality of colored light, wherein the one or more processors causes the light-emitting element to emit one of the plurality of colored light based on the selected profile. 29. The computer input device of claim 28 wherein the light-emitting element provides back-lighting for the plurality of buttons. 30. The computer input device of claim 21 wherein the plurality of buttons includes more than six buttons. 31. The computer input device of claim 21 wherein the plurality of buttons includes less than six buttons. 32. The computer input device of claim 21 wherein the side portion of the housing with the plurality of buttons is concave. 33. The computer input device of claim 21 wherein the computer input device is a computer mouse. 34. A computer mouse comprising: a housing including: a top portion configured to receive a user's hand; a bottom portion configured to move along a work surface; and a side portion; a first plurality of buttons disposed on the side portion; and a second plurality of buttons disposed on the side portion, wherein each button of the first plurality of buttons and each button of the second plurality of buttons includes a top surface, wherein the top surface of each of the first plurality of buttons is contoured such that the first plurality of buttons forms a bowl shape having a first common center, and wherein the top surface of each of the second plurality of buttons is contoured such that the second plurality of buttons forms a bowl shape having a second common center. 35. 
The computer mouse of claim 34 further comprising: a mode switch button including a first control state and a second control state; and one or more processors to control: the mode switch button; the first plurality of buttons; and the second plurality of buttons, wherein the one or more processors generates a set of functions, wherein the set of functions are associated with the first plurality of buttons when the mode switch button is set to the first control state, and wherein the set of functions are associated with the second plurality of buttons when the mode switch button is set to the second control state. 36. The computer mouse of claim 34 wherein the mode switch button includes a third control state, and wherein the set of functions are associated with both the first plurality of buttons and the second plurality of buttons when the mode switch button is set to the third control state. 37. The computer mouse of claim 34 further comprising: a profile selection button to select between at least a first profile and a second profile, wherein the first profile defines a first set of functions, and wherein the second profile defines a second set of functions; and one or more processors to control: the profile selection button; and the plurality of buttons, wherein the first profile is associated with the first plurality of buttons when the profile selection button is set to the first profile, and wherein the first profile is associated with the second plurality of buttons when the profile selection button is set to the second profile. 38. The computer mouse of claim 34 further comprising: a light-emitting element, controlled by the one or more processors and disposed on the housing, to emit any of a plurality of colored light, wherein the one or more processors causes the light-emitting element to emit one of the plurality of colored light based on the selected profile. 39. 
The computer mouse of claim 38 wherein the light-emitting element provides back-lighting for the plurality of buttons. 40. The computer mouse of claim 34 wherein the side portion of the housing with the first and second plurality of buttons is concave.
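The mode-switch behaviour recited in claims 25 and 35 (one generated set of functions, bound to whichever button group the switch's control state selects) can be sketched as a small mapping. All button names and function bindings below are hypothetical, for illustration only:

```python
# Hypothetical sketch of the claim 25/35 mode-switch behaviour: the same
# set of functions is bound to the first or second plurality of buttons
# depending on the mode switch's control state.

FUNCTIONS = ["copy", "paste", "undo", "redo", "save", "open"]  # example bindings

def active_bindings(mode_state: int, group_a: list, group_b: list) -> dict:
    """Return a button -> function map for whichever button group is active."""
    group = group_a if mode_state == 1 else group_b
    return {button: fn for button, fn in zip(group, FUNCTIONS)}

group_a = [f"A{i}" for i in range(1, 7)]   # first plurality of buttons
group_b = [f"B{i}" for i in range(1, 7)]   # second plurality of buttons

print(active_bindings(1, group_a, group_b)["A1"])  # copy
print(active_bindings(2, group_a, group_b)["B1"])  # copy
```

A third control state (claim 36) would simply return the union of both groups mapped to the same function set.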
2,600
10,976
10,976
16,118,927
2,662
A method of processing image data for an image, in which the image data includes colour data expressed in a first colour space, transforms the colour data to a luminance-normalised colour space and performs one or more image processing operations on the transformed colour data to generate processed image data.
1. An apparatus for processing image data for an image, wherein the image data comprises colour data expressed in a first colour space, the apparatus comprising: a transformation unit configured to transform the colour data to a luminance-normalised colour space comprising a luminance component, a first luminance-normalised chrominance component and a second luminance-normalised chrominance component; and one or more processing units configured to perform one or more processing operations on the transformed colour data expressed in the luminance-normalised colour space to generate processed image data. 2. The apparatus as claimed in claim 1, wherein the apparatus further comprises a second transformation unit configured to transform the processed image data to the first colour space. 3. The apparatus as claimed in claim 1, wherein the chrominance components are dependent on the luminance. 4. The apparatus as claimed in claim 1, wherein the luminance-normalised colour space is a luminance-normalised version of a YCbCr colour space. 5. The apparatus as claimed in claim 1, wherein the luminance-normalised colour space has components: Y = K_R·R + K_G·G + K_B·B; C̃_R = (R − Y)/(Y·N_R); C̃_B = (B − Y)/(Y·N_B), where R, G and B are red, green and blue colour components respectively, Y is the luminance component, C̃_R and C̃_B are luminance-normalised chrominance components, and K_R, K_G, K_B, N_R and N_B are constants. 6. The apparatus as claimed in claim 1, wherein the luminance-normalised colour space has components: Y′ = K_R·R′ + K_G·G′ + K_B·B′; C̃′_R = (R′ − Y′)/(Y′·N_R); C̃′_B = (B′ − Y′)/(Y′·N_B), where R′, G′ and B′ are gamma-corrected red, green and blue colour components respectively, Y′ is the gamma-corrected luminance, C̃′_R and C̃′_B are luminance-normalised chrominances, and K_R, K_G, K_B, N_R and N_B are constants. 7. 
The apparatus as claimed in claim 1, wherein the colour data is expressed in an RGB colour space and is in fixed point format, and the transformation unit is configured to apply one of the following transformations to the colour data to transform the colour data to the luminance-normalised colour space: Y ← (k_R·R + k_G·G + k_B·B + 2^(o−1))·2^(−o); C̃_R ← (l_R·(R − Y))·Y^(−1); C̃_B ← (l_B·(B − Y))·Y^(−1), where k_R = int(2^o·K_R), k_G = int(2^o·K_G), k_B = 2^o − k_R − k_G, l_R = int(2^p/N_R), l_B = int(2^p/N_B) (1) and o and p are specified integer values; or Y′ ← (k_R·R′ + k_G·G′ + k_B·B′ + 2^(o−1))·2^(−o); C̃′_R ← (l_R·(R′ − Y′))·Y′^(−1); C̃′_B ← (l_B·(B′ − Y′))·Y′^(−1), where k_R = int(2^o·K_R), k_G = int(2^o·K_G), k_B = 2^o − k_R − k_G, l_R = int(2^p/N_R), l_B = int(2^p/N_B) (2); o and p are specified integer values; and R′, G′ and B′ are gamma-corrected red, green and blue colour components respectively. 8. The apparatus as claimed in claim 7, wherein the colour data comprises a set of n-bit RGB colour values, and the transformed colour data comprises a set of colour values each having Y, C̃_R and C̃_B values, the transformation unit being configured to output m-bit C̃_R and C̃_B values, where m ≥ n. 9. The apparatus as claimed in claim 8, wherein m ≥ n + 6. 10. The apparatus as claimed in claim 7, wherein k_R = k_G = k_B = 1, N_R = N_B = 1 and o = 1. 11. The apparatus as claimed in claim 10, wherein the colour data comprises a set of n-bit RGB colour values, and the transformed colour data comprises a set of colour values each having Y, C̃_R and C̃_B values, the transformation unit being configured to output m-bit C̃_R and C̃_B values, where m = n + 1. 12. The apparatus as claimed in claim 1, wherein the one or more processing units comprises a correction unit configured to apply a gamma correction to the luminance channel of the transformed image data. 13. 
The apparatus as claimed in claim 1, wherein the one or more processing units comprises a filtering unit configured to filter luminance-normalised chrominance channels of the transformed colour data. 14. The apparatus as claimed in claim 1, wherein at least one of the one or more processing units is configured to perform a processing operation comprising determining a median value of a plurality of luminance-normalised chrominance values of pixels within a kernel. 15. The apparatus as claimed in claim 1, wherein the transformation unit is configured to perform a division operation to divide one or more chrominance values for a pixel by a luminance value for the pixel as part of transforming the colour data to a luminance-normalised colour space. 16. The apparatus as claimed in claim 15, wherein the transformation unit is configured to implement the division operation using a CORDIC algorithm, the transformation unit being configured to determine the number of iterations of the CORDIC algorithm based on the range of possible luminance-normalised chrominance values in the luminance-normalised colour space. 17. A method of processing image data for an image in an image processor, the method comprising: receiving the image data, wherein the image data comprises colour data expressed in a first colour space; transforming the colour data to a luminance-normalised colour space comprising a luminance component, a first luminance-normalised chrominance component and a second luminance-normalised chrominance component; and performing one or more image processing operations on the transformed colour data expressed in the luminance-normalised colour space to generate processed image data. 18. The method as claimed in claim 17, wherein the method further comprises transforming the processed image data to the first colour space. 19. The method as claimed in claim 17, wherein the chrominance components are dependent on the luminance. 20. 
A non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a computer system, cause the computer system to perform a method of processing image data for an image, comprising: receiving the image data, wherein the image data comprises colour data expressed in a first colour space; transforming the colour data to a luminance-normalised colour space comprising a luminance component, a first luminance-normalised chrominance component and a second luminance-normalised chrominance component; and performing one or more image processing operations on the transformed colour data expressed in the luminance-normalised colour space to generate processed image data.
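The core transform of claims 1 and 5 (chrominance differences divided by the luminance) can be sketched in floating point. The BT.601 luma weights and the unit normalisation constants below are illustrative assumptions, not values taken from the claims:

```python
def to_luma_normalised(r, g, b, KR=0.299, KG=0.587, NR=1.0, NB=1.0):
    """Transform an RGB triple to a luminance-normalised YCbCr-like space,
    a sketch of claim 5. KR/KG are the illustrative BT.601 luma weights and
    NR/NB are assumed unit constants; none of these values come from the patent."""
    KB = 1.0 - KR - KG
    y = KR * r + KG * g + KB * b            # luminance component
    if y == 0:
        return 0.0, 0.0, 0.0                # avoid dividing by zero for black
    cr = (r - y) / (y * NR)                 # luminance-normalised chrominance
    cb = (b - y) / (y * NB)
    return y, cr, cb

y, cr, cb = to_luma_normalised(0.5, 0.5, 0.5)
# grey input: both chrominance components come out as (approximately) zero
```

The fixed-point variant in claim 7 replaces the divisions by pre-scaled integer multiplies plus a division by Y, which claim 16 suggests implementing with CORDIC.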
2,600
10,977
10,977
16,234,356
2,655
The activities of multiple virtual personal assistant (VPA) applications are coordinated. For example, different portions of a conversational natural language dialog involving a user and a computing device may be handled by different VPAs.
1-27. (canceled) 28. A method, comprising: receiving an input from a first virtual assistant application of a first device; using the input, identifying a second virtual assistant application; determining pairing data; transmitting the pairing data to the first virtual assistant application for the first virtual assistant application to establish a communicative coupling between the first virtual assistant application and the second virtual assistant application; wherein the second virtual assistant application is to, in response to the input, generate an output intent or cause a second device to present output; wherein the method is performed by one or more computing devices. 29. The method of claim 28, wherein the input comprises any one or more of the following: a dialog request message, an input intent, a text string, natural language dialog, a network address, a device identifier. 30. The method of claim 28, wherein the pairing data is usable to establish the communicative coupling using any one or more of the following: a cellular network, a telephone network, a local area network, a wide area network, a public network, the Internet, a short-range wireless network, a Wi-Fi connection, a near-field communication (NFC) connection, an optical communication connection, a BLUETOOTH connection. 31. The method of claim 28, wherein the first virtual assistant application is a general-purpose application and the second virtual assistant application is a domain-specific application or the first virtual assistant application is a first domain-specific virtual assistant application and the second virtual assistant application is a second domain-specific virtual assistant application. 32. The method of claim 28, comprising, in response to the input, enabling access, by the first virtual assistant application, to a domain-specific model or a user preference model or a domain-specific task flow or user preference task flow. 33. 
The method of claim 28, wherein the communicative coupling comprises a direct communication connection between the first virtual assistant application and the second virtual assistant application or a communicative coupling of more than two virtual assistant applications. 34. The method of claim 28, wherein the input is received from the first virtual assistant application and the pairing data is transmitted to the first virtual assistant application using an application program interface (API). 35. The method of claim 28, comprising executing a communication protocol over the communicative coupling and monitoring communications between the first virtual assistant application and the second virtual assistant application using the communication protocol. 36. The method of claim 28, comprising any one or more of the following: determining the second virtual assistant application by querying a directory service, adding a new virtual assistant application to the directory service, removing a virtual assistant application from the directory service, generating statistics relating to activity levels of virtual assistant applications that are registered with the directory service. 37. 
The method of claim 28, wherein the first and second virtual assistant applications comprise, individually or in combination, any one or more of the following: an e-commerce application, a website, a user-personal virtual assistant application, a device-based virtual assistant application, a hierarchical network of virtual assistants, a home-based virtual assistant application, a heating, ventilation and cooling (HVAC) virtual assistant application, a single-user virtual assistant application, a multiple-user virtual assistant application, a single domain virtual assistant application, a multiple-domain virtual assistant application, a military virtual assistant application, a business virtual assistant application, a family virtual assistant application, a social virtual assistant application, an entertainment virtual assistant application. 38. A computer-program product embodied in one or more non-transitory machine-readable storage media, including instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving an input from a first virtual assistant application of a first device; using the input, identifying a second virtual assistant application; determining pairing data; transmitting the pairing data to the first virtual assistant application for the first virtual assistant application to establish a communicative coupling between the first virtual assistant application and the second virtual assistant application; wherein the second virtual assistant application is to, in response to the input, generate an output intent or cause a second device to present output. 39. The computer-program product of claim 38, wherein the input comprises any one or more of the following: a dialog request message, an input intent, a text string, natural language dialog, a network address, a device identifier. 40. 
The computer-program product of claim 38, wherein the pairing data is usable to establish the communicative coupling using any one or more of the following: a cellular network, a telephone network, a local area network, a wide area network, a public network, the Internet, a short-range wireless network, a Wi-Fi connection, a near-field communication (NFC) connection, an optical communication connection, a BLUETOOTH connection. 41. The computer-program product of claim 38, wherein the first virtual assistant application is a general-purpose application and the second virtual assistant application is a domain-specific application or the first virtual assistant application is a first domain-specific virtual assistant application and the second virtual assistant application is a second domain-specific virtual assistant application. 42. The computer-program product of claim 38, wherein the instructions, when executed by one or more processors, cause the one or more processors to perform operations comprising, in response to the input, enabling access, by the first virtual assistant application, to a domain-specific model or a user preference model or a domain-specific task flow or user preference task flow. 43. The computer-program product of claim 38, wherein the communicative coupling comprises a direct communication connection between the first virtual assistant application and the second virtual assistant application or a communicative coupling of more than two virtual assistant applications. 44. The computer-program product of claim 38, wherein the input is received from the first virtual assistant application and the pairing data is transmitted to the first virtual assistant application using an application program interface (API). 45. 
The computer-program product of claim 38, wherein the instructions, when executed by one or more processors, cause the one or more processors to perform operations comprising executing a communication protocol over the communicative coupling and monitoring communications between the first virtual assistant application and the second virtual assistant application using the communication protocol. 46. The computer-program product of claim 38, wherein the instructions, when executed by one or more processors, cause the one or more processors to perform operations comprising any one or more of the following: determining the second virtual assistant application by querying a directory service, adding a new virtual assistant application to the directory service, removing a virtual assistant application from the directory service, generating statistics relating to activity levels of virtual assistant applications that are registered with the directory service. 47. The computer-program product of claim 38, wherein the first and second virtual assistant applications comprise, individually or in combination, any one or more of the following: an e-commerce application, a website, a user-personal virtual assistant application, a device-based virtual assistant application, a hierarchical network of virtual assistants, a home-based virtual assistant application, a heating, ventilation and cooling (HVAC) virtual assistant application, a single-user virtual assistant application, a multiple-user virtual assistant application, a single domain virtual assistant application, a multiple-domain virtual assistant application, a military virtual assistant application, a business virtual assistant application, a family virtual assistant application, a social virtual assistant application, an entertainment virtual assistant application.
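The coordination flow of claim 28 (receive an input, identify a second virtual assistant, determine pairing data, return it to the first assistant) can be sketched with a toy directory service, as claim 36 suggests. All assistant names, addresses and field names below are hypothetical:

```python
# Hypothetical sketch of the claim 28 pairing flow. A coordinator looks up
# a second virtual assistant for the incoming intent in a directory service
# and returns pairing data the first assistant can use to connect to it.

DIRECTORY = {
    "weather": {"app": "weather-vpa", "address": "10.0.0.7:9000"},
    "music":   {"app": "music-vpa",   "address": "10.0.0.8:9000"},
}

def pair(input_intent: str) -> dict:
    """Identify the second assistant for the intent and build pairing data."""
    entry = DIRECTORY.get(input_intent)
    if entry is None:
        raise LookupError(f"no assistant registered for {input_intent!r}")
    return {"peer": entry["app"], "address": entry["address"], "protocol": "tcp"}

print(pair("weather")["peer"])  # weather-vpa
```

In the claimed method the coupling itself is then established by the first assistant using the returned pairing data, over any of the transports listed in claim 30.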
The activities of multiple virtual personal assistant (VPA) applications are coordinated. For example, different portions of a conversational natural language dialog involving a user and a computing device may be handled by different VPAs.1-27. (canceled) 28. A method, comprising: receiving an input from a first virtual assistant application of a first device; using the input, identifying a second virtual assistant application; determining pairing data; transmitting the pairing data to the first virtual assistant application for the first virtual assistant application to establish a communicative coupling between the first virtual assistant application and the second virtual assistant application; wherein the second virtual assistant application is to, in response to the input, generate an output intent or cause a second device to present output; wherein the method is performed by one or more computing devices. 29. The method of claim 28, wherein the input comprises any one or more of the following: a dialog request message, an input intent, a text string, natural language dialog, a network address, a device identifier. 30. The method of claim 28, wherein the pairing data is usable to establish the communicative coupling using any one or more of the following: a cellular network, a telephone network, a local area network, a wide area network, a public network, the Internet, a short-range wireless network, a Wi-Fi connection, a near-field communication (NFC) connection, an optical communication connection, a BLUETOOTH connection. 31. The method of claim 28, wherein the first virtual assistant application is a general-purpose application and the second virtual assistant application is a domain-specific application or the first virtual assistant application is a first domain-specific virtual assistant application and the second virtual assistant application is a second domain-specific virtual assistant application. 32. 
The method of claim 28, comprising, in response to the input, enabling access, by the first virtual assistant application, to a domain-specific model or a user preference model or a domain-specific task flow or user preference task flow. 33. The method of claim 28, wherein the communicative coupling comprises a direct communication connection between the first virtual assistant application and the second virtual assistant application or a communicative coupling of more than two virtual assistant applications. 34. The method of claim 28, wherein the input is received from the first virtual assistant application and the pairing data is transmitted to the first virtual assistant application using an application program interface (API). 35. The method of claim 28, comprising executing a communication protocol over the communicative coupling and monitoring communications between the first virtual assistant application and the second virtual assistant application using the communication protocol. 36. The method of claim 28, comprising any one or more of the following: determining the second virtual assistant application by querying a directory service, adding a new virtual assistant application to the directory service, removing a virtual assistant application from the directory service, generating statistics relating to activity levels of virtual assistant applications that are registered with the directory service. 37. 
The method of claim 28, wherein the first and second virtual assistant applications comprise, individually or in combination, any one or more of the following: an e-commerce application, a website, a user-personal virtual assistant application, a device-based virtual assistant application, a hierarchical network of virtual assistants, a home-based virtual assistant application, a heating, ventilation and cooling (HVAC) virtual assistant application, a single-user virtual assistant application, a multiple-user virtual assistant application, a single domain virtual assistant application, a multiple-domain virtual assistant application, a military virtual assistant application, a business virtual assistant application, a family virtual assistant application, a social virtual assistant application, an entertainment virtual assistant application. 38. A computer-program product embodied in one or more non-transitory machine-readable storage medium, including instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving an input from a first virtual assistant application of a first device; using the input, identifying a second virtual assistant application; determining pairing data; transmitting the pairing data to the first virtual assistant application for the first virtual assistant application to establish a communicative coupling between the first virtual assistant application and the second virtual assistant application; wherein the second virtual assistant application is to, in response to the input, generate an output intent or cause a second device to present output. 39. The computer-program product of claim 38, wherein the input comprises any one or more of the following: a dialog request message, an input intent, a text string, natural language dialog, a network address, a device identifier. 40. 
The computer-program product of claim 38, wherein the pairing data is usable to establish the communicative coupling using any one or more of the following: a cellular network, a telephone network, a local area network, a wide area network, a public network, the Internet, a short-range wireless network, a Wi-Fi connection, a near-field communication (NFC) connection, an optical communication connection, a BLUETOOTH connection. 41. The computer-program product of claim 38, wherein the first virtual assistant application is a general-purpose application and the second virtual assistant application is a domain-specific application or the first virtual assistant application is a first domain-specific virtual assistant application and the second virtual assistant application is a second domain-specific virtual assistant application. 42. The computer-program product of claim 38, wherein the instructions, when executed by one or more processors, cause the one or more processors to perform operations comprising, in response to the input, enabling access, by the first virtual assistant application, to a domain-specific model or a user preference model or a domain-specific task flow or user preference task flow. 43. The computer-program product of claim 38, wherein the communicative coupling comprises a direct communication connection between the first virtual assistant application and the second virtual assistant application or a communicative coupling of more than two virtual assistant applications. 44. The computer-program product of claim 38, wherein the input is received from the first virtual assistant application and the pairing data is transmitted to the first virtual assistant application using an application program interface (API). 45. 
The computer-program product of claim 38, wherein the instructions, when executed by one or more processors, cause the one or more processors to perform operations comprising executing a communication protocol over the communicative coupling and monitoring communications between the first virtual assistant application and the second virtual assistant application using the communication protocol. 46. The computer-program product of claim 38, wherein the instructions, when executed by one or more processors, cause the one or more processors to perform operations comprising any one or more of the following: determining the second virtual assistant application by querying a directory service, adding a new virtual assistant application to the directory service, removing a virtual assistant application from the directory service, generating statistics relating to activity levels of virtual assistant applications that are registered with the directory service. 47. The computer-program product of claim 38, wherein the first and second virtual assistant applications comprise, individually or in combination, any one or more of the following: an e-commerce application, a website, a user-personal virtual assistant application, a device-based virtual assistant application, a hierarchical network of virtual assistants, a home-based virtual assistant application, a heating, ventilation and cooling (HVAC) virtual assistant application, a single-user virtual assistant application, a multiple-user virtual assistant application, a single domain virtual assistant application, a multiple-domain virtual assistant application, a military virtual assistant application, a business virtual assistant application, a family virtual assistant application, a social virtual assistant application, an entertainment virtual assistant application.
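The pairing flow recited in these claims (receive an input from a first virtual assistant, identify a second one via a directory service, return pairing data used to establish a communicative coupling) can be sketched as follows. This is an illustrative reading only, not the patent's implementation; every name here (`VirtualAssistantDirectory`, `register`, `pair`, the intent dictionary shape) is assumed for the example.

```python
# Hypothetical sketch of the claimed pairing flow: a directory
# service maps an input intent's domain to a registered second
# virtual assistant and returns pairing data the first assistant
# can use to establish the communicative coupling.

class VirtualAssistantDirectory:
    def __init__(self):
        self._registry = {}  # domain -> network address of a registered assistant

    def register(self, domain, address):
        """Add a new virtual assistant application to the directory service."""
        self._registry[domain] = address

    def unregister(self, domain):
        """Remove a virtual assistant application from the directory service."""
        self._registry.pop(domain, None)

    def pair(self, input_intent):
        """Return pairing data for the domain named in the input intent,
        or None if no matching second assistant is registered."""
        domain = input_intent.get("domain")
        address = self._registry.get(domain)
        if address is None:
            return None
        return {"domain": domain, "address": address}

directory = VirtualAssistantDirectory()
directory.register("hvac", "assistant://home/hvac")
pairing = directory.pair({"domain": "hvac", "text": "set heat to 70"})
print(pairing)
```

The claims leave the transport open (cellular, Wi-Fi, BLUETOOTH, etc.); the pairing data above carries only an address, on the assumption that the first assistant chooses the connection type itself.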
2,600
10,978
10,978
15,743,501
2,664
A method for operating a camera of a household appliance, in particular a refrigerator, includes calibrating the camera when a door of the household appliance is opened, outputting a trigger signal for triggering the camera at a predetermined angular position of the door and using the camera to capture at least one image without color matching in response to the trigger signal. A household appliance includes at least one camera for capturing at least one image from at least one part of an inner chamber of the household appliance and the household appliance is configured to operate at least one camera according to the method.
1-9. (canceled) 10. A method for operating a camera of a household appliance, the method comprising the following steps: calibrating the camera when a door of the household appliance opens; outputting a trigger signal for triggering the camera when a predetermined angular position of the door is reached; and using the camera to capture at least one image without color matching in response to the trigger signal. 11. The method according to claim 10, which further comprises performing color matching with the calibration of the camera. 12. The method according to claim 10, which further comprises operating the camera without color matching. 13. The method according to claim 12, which further comprises: providing the camera as one of a first camera for capturing an image of a storage chamber and a second camera for capturing an image of an inner face of the door of the household appliance; and performing color correction on an image captured by the second camera by using an image captured by the first camera. 14. The method according to claim 13, which further comprises performing the color correction step outside the household appliance. 15. The method according to claim 10, which further comprises providing the camera with a fixed focal point. 16. The method according to claim 10, which further comprises using the camera to capture just one image in response to the trigger signal. 17. The method according to claim 10, which further comprises switching off the camera when the door closes. 18. 
A household appliance, comprising: an interior chamber of the household appliance; a door closing said interior chamber; at least one camera for capturing at least one image from at least one part of said interior chamber; and a control facility programmed for: calibrating said at least one camera when said door opens; outputting a trigger signal for triggering said at least one camera when a predetermined angular position of said door is reached; and using said at least one camera to capture at least one image without color matching in response to the trigger signal.
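The control flow claimed above — calibrate when the door opens, emit a trigger at a predetermined door angle, capture just one image without color matching, switch off when the door closes — can be sketched as a small event-driven controller. This is a minimal sketch under assumed names (`DoorCameraController`, `StubCamera`, the 60-degree threshold); the patent does not specify any of them.

```python
# Illustrative sketch (not the patent's implementation) of the claimed
# control flow for a household-appliance camera.

TRIGGER_ANGLE = 60.0  # degrees; hypothetical predetermined angular position

class StubCamera:
    """Stand-in camera that records the operations performed on it."""
    def __init__(self):
        self.events = []
    def calibrate(self):
        self.events.append("calibrate")
    def capture(self, color_matching):
        self.events.append(("capture", color_matching))
    def power_off(self):
        self.events.append("off")

class DoorCameraController:
    def __init__(self, camera):
        self.camera = camera
        self.triggered = False

    def on_door_opened(self):
        self.camera.calibrate()     # calibrate when the door opens
        self.triggered = False

    def on_door_angle(self, angle):
        # Trigger once when the predetermined angular position is reached.
        if not self.triggered and angle >= TRIGGER_ANGLE:
            self.triggered = True   # capture just one image per opening
            self.camera.capture(color_matching=False)

    def on_door_closed(self):
        self.camera.power_off()     # switch off the camera when the door closes

cam = StubCamera()
ctrl = DoorCameraController(cam)
ctrl.on_door_opened()
for angle in (20.0, 45.0, 62.0, 80.0):
    ctrl.on_door_angle(angle)
ctrl.on_door_closed()
print(cam.events)
```

Note the `triggered` latch: it enforces dependent claim 16's "capture just one image in response to the trigger signal" even though the door angle keeps increasing past the threshold.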
2,600
10,979
10,979
16,707,010
2,646
Example methods and systems for adjusting the beam width of radio frequency (RF) signals for purposes of balloon-to-ground communication are described. One example method includes determining, based on respective locations of a plurality of balloons and areas covered by respective ground-facing communication beams of the balloons, a contiguous ground coverage area served by the plurality of balloons, where the communication beam of a balloon defines a corresponding individual coverage area within the ground coverage area, determining a change in position of at least one of the balloons, based on the change in position of the at least one balloon, determining an adjustment to a first of the individual coverage areas in an effort to maintain the contiguous ground coverage area after the change in position of at least one of the balloons, and adjusting a width of the ground-facing communication beam of the balloon corresponding to the first individual coverage area in order to make the determined adjustment to the first individual coverage area.
1. A signal routing method comprising: determining, by one or more processors in a communication network, state information for a plurality of high altitude platforms forming at least part of the communication network, the state information including one or more of location data for each of the plurality of high altitude platforms, communication link information or meteorological information; determining, by the one or more processors according to the state information, one or more routing paths for a communication signal through a subset of the plurality of high altitude platforms, at least one of the one or more routing paths is transparent without signal conversion; selecting, by the one or more processors, a transparent routing path from among the one or more routing paths; and transmitting the communication signal to a receiver device via the transparent routing path. 2. The signal routing method of claim 1, wherein the transparent routing path comprises a plurality of free-space optical links between the subset of high altitude platforms. 3. The signal routing method of claim 1, wherein determining the one or more routing paths includes identifying adaptive routing between first and second high altitude platforms of the plurality of high altitude platforms, where a lightpath between the first and second high altitude platforms is determined and set-up when a connection is needed and released at a later time. 4. The signal routing method of claim 3, wherein the lightpath is determined dynamically depending upon at least one of a current state, a past state, or a predicted state of the plurality of high altitude platforms. 5. The signal routing method of claim 1, wherein determining the one or more routing paths includes evaluating which paths implement wavelength division multiplexing. 6. 
The signal routing method of claim 1, wherein selecting the transparent routing path includes assigning a same wavelength for all optical links on the transparent routing path. 7. The signal routing method of claim 1, wherein one or more of the plurality of high altitude platforms comprises a balloon. 8. A system comprising: a plurality of high altitude platforms forming at least part of a wireless communication network; and a control system including one or more processors, the one or more processors being configured to: determine state information for the plurality of high altitude platforms, the state information including one or more of location data for each of the plurality of high altitude platforms, communication link information or meteorological information; determine, according to the state information, one or more routing paths for a communication signal through a subset of the plurality of high altitude platforms, at least one of the one or more routing paths is transparent without signal conversion; select a transparent routing path from among the one or more routing paths; and inform all high altitude platforms along the transparent routing path of the selection. 9. The system of claim 8, wherein one or more of the high altitude platforms along the transparent routing path comprise one or more lighter-than-air platforms. 10. The system of claim 9, wherein the one or more lighter-than-air platforms comprise one or more balloons. 11. The system of claim 8, wherein the transparent routing path comprises a plurality of free-space optical links between the subset of high altitude platforms. 12. 
The system of claim 8, wherein the determination of the one or more routing paths includes identification of adaptive routing between first and second high altitude platforms of the plurality of high altitude platforms, where a lightpath between the first and second high altitude platforms is determined and set-up when a connection is needed and released at a later time. 13. The system of claim 12, wherein the lightpath is determined dynamically depending upon at least one of a current state, a past state, or a predicted state of the plurality of high altitude platforms. 14. The system of claim 8, wherein determination of the one or more routing paths includes evaluation of which paths implement wavelength division multiplexing. 15. The system of claim 8, wherein selection of the transparent routing path includes assignment of a same wavelength for all optical links on the transparent routing path. 16. A non-transitory computer readable medium having instructions stored therein, the instructions, when executed by a computing system, cause the computing system to perform a signal routing method comprising: determining state information for a plurality of high altitude platforms forming at least part of the communication network, the state information including one or more of location data for each of the plurality of high altitude platforms, communication link information or meteorological information; determining, according to the state information, one or more routing paths for a communication signal through a subset of the plurality of high altitude platforms, at least one of the one or more routing paths is transparent without signal conversion; selecting a transparent routing path from among the one or more routing paths; and transmitting the communication signal to a receiver device via the transparent routing path. 17. 
The non-transitory computer readable medium of claim 16, wherein determining the one or more routing paths includes identifying adaptive routing between first and second high altitude platforms of the plurality of high altitude platforms, where a lightpath between the first and second high altitude platforms is determined and set-up when a connection is needed and released at a later time. 18. The non-transitory computer readable medium of claim 17, wherein the lightpath is determined dynamically depending upon at least one of a current state, a past state, or a predicted state of the plurality of high altitude platforms. 19. The non-transitory computer readable medium of claim 16, wherein determining the one or more routing paths includes evaluating which paths implement wavelength division multiplexing. 20. The non-transitory computer readable medium of claim 16, wherein selecting the transparent routing path includes assigning a same wavelength for all optical links on the transparent routing path.
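Selecting a "transparent" routing path with a common wavelength across all optical links (claims 6, 15, and 20) amounts to a wavelength-continuity search over the platform graph. The sketch below is one plausible reading, with assumed data structures (a dict of undirected links to sets of free wavelengths) and a plain BFS; the patent does not prescribe this algorithm.

```python
# A minimal sketch, under assumed data structures, of selecting a
# transparent routing path: a path whose free-space optical links can
# all be assigned the same wavelength, so no optical-electrical signal
# conversion is needed at intermediate high altitude platforms.

from collections import deque

def transparent_path(links, src, dst):
    """links: {(a, b): set of free wavelengths on the undirected link a-b}.
    Returns (path, wavelength) for the first wavelength admitting a
    src->dst path over links where it is free, else None."""
    wavelengths = set().union(*links.values()) if links else set()
    for wl in sorted(wavelengths):
        # Keep only links where wl is free (wavelength-continuity constraint).
        adj = {}
        for (a, b), free in links.items():
            if wl in free:
                adj.setdefault(a, []).append(b)
                adj.setdefault(b, []).append(a)
        # BFS for a path src -> dst over the surviving links.
        prev = {src: None}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                path = []
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path[::-1], wl
            for nxt in adj.get(node, []):
                if nxt not in prev:
                    prev[nxt] = node
                    queue.append(nxt)
    return None

links = {("A", "B"): {1, 2}, ("B", "C"): {2}, ("A", "C"): {3}}
print(transparent_path(links, "A", "C"))
```

State information beyond link wavelengths (platform locations, meteorological data) would in practice prune or reweight `links` before this search; that step is omitted here.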
2,600
10,980
10,980
15,982,138
2,642
Methods and systems are presented for remotely commanding a mobile device. In one aspect, a method includes receiving input identifying a mobile device, presenting to a user one or more remote commands corresponding to the mobile device, receiving user input selecting a remote command from the one or more presented remote commands, generating a remote command message instructing the mobile device to execute the selected remote command, and transmitting the remote command message to a server for publication in a message topic. Further, a selectable list of mobile devices associated with a remote management account can be presented to the user, the selectable list including information uniquely identifying each mobile device. Additionally, the selectable list of mobile devices can include an indication of whether an included mobile device is online.
1. (canceled) 2. A computer-implemented method of commanding a remote device, the method comprising: authenticating a credential associated with a user account; determining a remote device associated with the user account, wherein the remote device is uniquely identified; presenting one or more remote commands enabled for execution at the remote device, wherein at least one command of the one or more remote commands has been enabled for execution by input at the remote device; receiving input selecting a remote command from the one or more remote commands; and transmitting, to the remote device, an instruction to execute the selected remote command. 3. The computer-implemented method of claim 2, wherein the transmitting comprises concurrently transmitting multiple commands to the remote device. 4. The computer-implemented method of claim 3, wherein the multiple commands are associated with a predetermined order of execution. 5. The computer-implemented method of claim 2, further comprising receiving location information from the remote device in response to a locate command. 6. The computer-implemented method of claim 2, wherein a command of the one or more remote commands is enabled by default for execution by the remote device. 7. The computer-implemented method of claim 2, wherein the remote device is selected from a plurality of remote devices associated with the account. 8. The computer-implemented method of claim 2, wherein presenting one or more remote commands enabled for execution at the remote device comprises presenting only remote commands that are enabled for execution by the remote device. 9. The computer-implemented method of claim 2, wherein the selected remote command causes the remote device to generate an output. 10. The computer-implemented method of claim 9, wherein the output comprises a message to be presented on a display or a sound to be output from a speaker. 11. 
The computer-implemented method of claim 2, wherein the selected remote command causes the remote device to be locked or to be wiped. 12. A computing device comprising: an input interface; an output interface; a wireless network connection; a processor coupled to the input interface, the output interface, and the network connection, the processor configured to cause the computing device to: authenticate a credential associated with a user account; determine a remote device associated with the user account, wherein the remote device is uniquely identified; present, via the output interface, one or more remote commands enabled for execution at the remote device, wherein a command of the one or more remote commands has been enabled for execution by input at the remote device; receive, via the input interface, input selecting a remote command from the one or more remote commands; and transmit, via the wireless network connection, an instruction for the remote device to execute the selected remote command. 13. The computing device of claim 12, wherein the processor is further configured to cause the computing device to transmit, concurrently with the instruction for the remote device to execute the selected remote command, an instruction for the remote device to execute one or more additional remote commands. 14. The computing device of claim 13, wherein the selected remote command and the one or more additional remote commands are associated with a predetermined order of execution. 15. The computing device of claim 12, wherein the processor is further configured to cause the computing device to receive location information from the remote device in response to a locate command. 16. The computing device of claim 12, wherein a command of the one or more remote commands is enabled by default for execution by the remote device. 17. The computing device of claim 12, wherein the remote device is selected from a plurality of remote devices associated with the account. 18. 
The computing device of claim 12, wherein presenting one or more remote commands enabled for execution at the remote device comprises presenting only remote commands that are enabled for execution by the remote device. 19. A non-transitory computer-readable medium, storing instructions executable to cause one or more data processing apparatus to: authenticate a credential associated with a user account; determine a remote device associated with the user account, wherein the remote device is uniquely identified; present one or more remote commands enabled for execution at the remote device, wherein a command of the one or more remote commands has been enabled for execution by input at the remote device; receive input selecting a remote command from the one or more remote commands; and transmit an instruction for the remote device to execute the selected remote command. 20. The non-transitory computer-readable medium of claim 19, wherein the transmitting comprises concurrently transmitting multiple commands to the remote device. 21. The non-transitory computer-readable medium of claim 19, wherein a command of the one or more remote commands is enabled by default for execution by the remote device.
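The claimed flow — authenticate, present only the commands enabled at the device, then transmit the selection (with multiple commands executing in a predetermined order per claims 3-4 and 13-14) — can be sketched as below. All names (`RemoteDevice`, `present_commands`, `send_commands`) and the command set are hypothetical, chosen only to illustrate the filtering and ordered dispatch.

```python
# Hypothetical sketch of the claimed remote-command flow: present only
# commands enabled for execution at the device, then dispatch selected
# commands in a predetermined order.

ENABLED_BY_DEFAULT = {"locate"}  # a command enabled by default (claim 6)

class RemoteDevice:
    def __init__(self, device_id, enabled=()):
        self.device_id = device_id  # uniquely identifies the device
        # Commands enabled by input at the device, plus defaults.
        self.enabled = ENABLED_BY_DEFAULT | set(enabled)
        self.received = []

    def execute(self, command):
        self.received.append(command)

def present_commands(device):
    """Present only remote commands that are enabled for execution
    by the device (claims 8 and 18)."""
    return sorted(device.enabled)

def send_commands(device, commands):
    """Multiple concurrently-transmitted commands are executed in a
    predetermined order: here, simply the order given."""
    for command in commands:
        device.execute(command)

phone = RemoteDevice("phone-1", enabled=["lock", "wipe"])
print(present_commands(phone))
send_commands(phone, ["lock", "wipe"])
```

A real system would interpose a server and a message topic (as in the abstract) between `send_commands` and the device; that transport is elided here.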
Methods and systems are presented for remotely commanding a mobile device. In one aspect, a method includes receiving input identifying a mobile device, presenting to a user one or more remote commands corresponding to the mobile device, receiving user input selecting a remote command from the one or more presented remote commands, generating a remote command message instructing the mobile device to execute the selected remote command, and transmitting the remote command message to a server for publication in a message topic. Further, a selectable list of mobile devices associated with a remote management account can be presented to the user, the selectable list including information uniquely identifying each mobile device. Additionally, the selectable list of mobile devices can include an indication of whether an included mobile device is online.1. (canceled) 2. A computer-implemented method of commanding a remote device, the method comprising: authenticating a credential associated with a user account; determining a remote device associated with the user account, wherein the remote device is uniquely identified; presenting one or more remote commands enabled for execution at the remote device, wherein at least one command of the one or more remote commands has been enabled for execution by input at the remote device; receiving input selecting a remote command from the one or more remote commands; and transmitting, to the remote device, an instruction to execute the selected remote command. 3. The computer-implemented method of claim 2, wherein the transmitting comprises concurrently transmitting multiple commands to the remote device. 4. The computer-implemented method of claim 3, wherein the multiple commands are associated with a predetermined order of execution. 5. The computer-implemented method of claim 2, further comprising receiving location information from the remote device in response to a locate command. 6. 
The computer-implemented method of claim 2, wherein a command of the one or more remote commands is enabled by default for execution by the remote device. 7. The computer-implemented method of claim 2, wherein the remote device is selected from a plurality of remote devices associated with the account. 8. The computer-implemented method of claim 2, wherein presenting one or more remote commands enabled for execution at the remote device comprises presenting only remote commands that are enabled for execution by the remote device. 9. The computer-implemented method of claim 2, wherein the selected remote command causes the remote device to generate an output. 10. The computer-implemented method of claim 9, wherein the output comprises a message to be presented on a display or a sound to be output from a speaker. 11. The computer-implemented method of claim 2, wherein the selected remote command causes the remote device to be locked or to be wiped. 12. A computing device comprising: an input interface; an output interface; a wireless network connection; a processor coupled to the input interface, the output interface, and the network connection, the processor configured to cause the computer apparatus to: authenticate a credential associated with a user account; determine a remote device associated with the user account, wherein the remote device is uniquely identified; present, via the output interface, one or more remote commands enabled for execution at the remote device, wherein a command of the one or more remote commands has been enabled for execution by input at the remote device; receive, via the input interface, input selecting a remote command from the one or more remote commands; and transmit, via the wireless network connection, an instruction for the remote device to execute the selected remote command. 13. 
The computing device of claim 12, wherein the processor is further configured to cause the computing device to transmit, concurrently with the instruction for the remote device to execute the selected remote command, an instruction for the remote device to execute one or more additional remote commands. 14. The computing device of claim 13, wherein the selected remote command and the one or more additional remote commands are associated with a predetermined order of execution. 15. The computing device of claim 12, wherein the processor is further configured to cause the computing device to receive location information from the remote device in response to a locate command. 16. The computing device of claim 12, wherein a command of the one or more remote commands is enabled by default for execution by the remote device. 17. The computing device of claim 12, wherein the remote device is selected from a plurality of remote devices associated with the account. 18. The computing device of claim 12, wherein presenting one or more remote commands enabled for execution at the remote device comprises presenting only remote commands that are enabled for execution by the remote device. 19. A non-transitory computer-readable medium, storing instructions executable to cause one or more data processing apparatus to: authenticate a credential associated with a user account; determine a remote device associated with the user account, wherein the remote device is uniquely identified; present one or more remote commands enabled for execution at the remote device, wherein a command of the one or more remote commands has been enabled for execution by input at the remote device; receive input selecting a remote command from the one or more remote commands; and transmit an instruction for the remote device to execute the selected remote command. 20. 
The non-transitory computer-readable medium of claim 19, wherein the transmitting comprises concurrently transmitting multiple commands to the remote device. 21. The non-transitory computer-readable medium of claim 19, wherein a command of the one or more remote commands is enabled by default for execution by the remote device.
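The remote-command claims above turn on one gating rule: present only the commands a given device has enabled, and refuse to transmit any other command. A minimal Python sketch of that rule, assuming hypothetical class and method names (`RemoteDevice`, `available_commands`, `execute`) that do not come from the patent:

```python
class RemoteDevice:
    """Toy model of a uniquely identified device with per-device enabled commands."""

    def __init__(self, device_id, enabled_commands):
        self.device_id = device_id            # unique identifier per claim 12
        self.enabled = set(enabled_commands)  # commands enabled by input at the device

    def available_commands(self):
        # Present only remote commands enabled for execution by this device.
        return sorted(self.enabled)

    def execute(self, command):
        # Reject any command not enabled for this device.
        if command not in self.enabled:
            raise ValueError(f"{command!r} not enabled on {self.device_id}")
        return f"executed {command}"
```

Usage mirrors the claimed flow: after authenticating an account, the controlling device lists `available_commands()`, takes a selection, and calls `execute()` only for an enabled command.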
2,600
10,981
10,981
16,450,475
2,612
An immersive display and a method of operating the immersive display to provide information relating to an object. The method includes receiving information from an input device of the immersive display or coupled to the immersive display, detecting an object based on the information received from the input device, and displaying a representation of the object on images displayed on a display of the immersive display such that attributes of the representation distinguish the representation from the images displayed on the display, wherein the representation is displayed at a location on the display that corresponds with a location of the object.
1-18. (canceled) 23. A method of operating an immersive display to provide surround-type of representation of real-world visual information, comprising: sharing a portion of the surround-type of representation of real-world visual information from a plurality of cameras, where each of the cameras is coupled to a controller; displaying at least a portion of the surround-type of representation of real-world visual information in each of the plurality of immersive displays, where each of the plurality of immersive displays is responsive to a controller; and activating an indicator in at least one of the immersive displays that is responsive to the controller when the scope of coverage of the plurality of cameras is insufficient to support the surround-type of representation of real-world visual information from the plurality of cameras.
An immersive display and a method of operating the immersive display to provide information relating to an object. The method includes receiving information from an input device of the immersive display or coupled to the immersive display, detecting an object based on the information received from the input device, and displaying a representation of the object on images displayed on a display of the immersive display such that attributes of the representation distinguish the representation from the images displayed on the display, wherein the representation is displayed at a location on the display that corresponds with a location of the object.1-18. (canceled) 23. A method of operating an immersive display to provide surround-type of representation of real-world visual information, comprising: sharing a portion of the surround-type of representation of real-world visual information from a plurality of cameras, where each of the cameras is coupled to a controller; displaying at least a portion of the surround-type of representation of real-world visual information in each of the plurality of immersive displays, where each of the plurality of immersive displays is responsive to a controller; and activating an indicator in at least one of the immersive displays that is responsive to the controller when the scope of coverage of the plurality of cameras is insufficient to support the surround-type of representation of real-world visual information from the plurality of cameras.
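Claim 23 above activates an indicator when the cameras' combined scope of coverage cannot support the surround view. As a rough illustration only (the patent does not specify how coverage is computed), one could compare summed horizontal fields of view against a full circle, ignoring overlap between cameras; the function name `needs_indicator` is hypothetical:

```python
def needs_indicator(camera_fovs_deg, required_deg=360):
    # Simplified coverage test: activate the insufficient-coverage
    # indicator when the combined horizontal field of view of all
    # cameras cannot span the surround view. Overlap between adjacent
    # cameras is ignored in this sketch.
    return sum(camera_fovs_deg) < required_deg
```

A real implementation would account for camera placement and overlapping fields of view, not just the angular sum.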
2,600
10,982
10,982
15,323,508
2,646
A reporting node is configured to report the result of a radio signal measurement to a recipient node. The reporting node in this regard is configured to obtain the result as well as a joint mapping in which different possible values for a first reporting variable and different possible values for a second reporting variable are jointly mapped to different possible values for the result. The reporting node is configured to map the result to multiple reported values that jointly represent that result, using the obtained joint mapping. The multiple reported values include one of the different possible values for the first reporting variable and one of the different possible values for the second reporting variable. The reporting node is configured to report the result of the radio signal measurement to the recipient node by sending the multiple reported values from the reporting node to the recipient node.
1-54. (canceled) 55. A method, implemented by a reporting node, for reporting the result of a radio signal measurement to a recipient node, the method comprising: obtaining, at the reporting node, the result of a radio signal measurement; obtaining, at the reporting node, a joint mapping in which different possible values for a first reporting variable and different possible values for a second reporting variable are jointly mapped to different possible values for the result of the radio signal measurement; mapping the result of the radio signal measurement to multiple reported values that jointly represent that result, using the obtained joint mapping, wherein the multiple reported values include one of the different possible values for the first reporting variable and one of the different possible values for the second reporting variable; and reporting the result of the radio signal measurement to the recipient node by sending the multiple reported values from the reporting node to the recipient node. 56. The method of claim 55, wherein the different possible values for the first reporting variable are mapped to different possible values for the result of the radio signal measurement at a first resolution, and the different possible values for the second reporting variable are mapped to different possible values for the result of the radio signal measurement at a second resolution. 57. The method of claim 56, wherein the second resolution is higher than the first resolution. 58. The method of claim 55, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different possible ranges of results of the radio signal measurement within a range of results reported by the first reporting variable. 59. 
The method of claim 55, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different delta values. 60. The method of claim 55, wherein the different possible values for the first reporting variable correspond to different ranges of results of the radio signal measurement, and the different possible values for the second reporting variable correspond to different delta values by which to increment or decrement a component of a range of measurement results reported with the first reporting variable. 61. The method of claim 55, further comprising dynamically determining whether the reporting node is to map the result of the radio signal measurement to the multiple reported values using the joint mapping or to a single reported value using a single mapping, wherein the single mapping maps different possible values for a single reporting variable to different possible values for the result of the radio signal measurement. 62. The method of claim 61, wherein said dynamically determining is performed based on an accuracy or quality of the radio signal measurement. 63. The method of claim 61, wherein said dynamically determining is performed based on radio conditions, a type of radio environment, or a radio deployment scenario in which the radio signal measurement is performed. 64. The method of claim 55, wherein said joint mapping is embodied across a first mapping table for the first reporting variable and a second mapping table for the second reporting variable. 65. The method of claim 55, wherein the radio signal measurement is a positioning measurement. 66. The method of claim 55, wherein the radio signal measurement is a received signal time difference, RSTD, measurement or an observed time difference of arrival, OTDOA, measurement. 67. 
The method of claim 55, wherein the reporting node and the recipient node are configured for use in a Long Term Evolution (LTE) system. 68. The method of claim 55, wherein the reporting node is a user equipment and the recipient node is a base station. 69. A method, implemented by a recipient node, for determining the result of a radio signal measurement as reported by a reporting node, the method comprising: receiving, at the recipient node, multiple reported values that jointly represent a reported result of the radio signal measurement, wherein the multiple reported values comprise one of different possible values for a first reporting variable and one of different possible values for a second reporting variable, wherein the different possible values for the first reporting variable and the different possible values for the second reporting variable are jointly mapped to different possible values for the reported result of the radio signal measurement; and performing one or more operations based on the multiple reported values. 70. The method of claim 69, further comprising: obtaining, at the recipient node, a joint mapping in which the different possible values for the first reporting variable and different possible values for the second reporting variable are jointly mapped to different possible values for the reported result of the radio signal measurement; mapping the multiple reported values to one of the different possible values for the reported result of the radio signal measurement, using the obtained joint mapping; and performing the one or more operations based on the reported result. 71. 
The method of claim 69, wherein the different possible values for the first reporting variable are mapped to different possible values for the reported result of the radio signal measurement at a first resolution, and the different possible values for the second reporting variable are mapped to different possible values for the reported result of the radio signal measurement at a second resolution. 72. The method of claim 71, wherein the second resolution is higher than the first resolution. 73. The method of claim 69, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different possible ranges of results of the radio signal measurement within a range of results reported by the first reporting variable. 74. The method of claim 69, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different delta values. 75. The method of claim 69, wherein the different possible values for the first reporting variable correspond to different intermediate results of the radio signal measurement, and the different possible values for the second reporting variable correspond to different delta values by which the recipient node is to increment or decrement an intermediate result corresponding to the first reporting variable. 76. 
The method of claim 69, further comprising dynamically determining whether to map the one or more reported values to the reported result of the radio signal measurement using the joint mapping or to map a single reported value to the reported result of the radio signal measurement using a single mapping, wherein the single mapping maps different possible values for a single reporting variable to different possible values for the result of the radio signal measurement. 77. The method of claim 76, wherein said dynamically determining is performed based on an accuracy or quality of the radio signal measurement. 78. The method of claim 76, wherein said dynamically determining is performed based on radio conditions, a type of radio environment, or a radio deployment scenario in which the radio signal measurement is performed. 79. The method of claim 69, wherein said joint mapping is embodied across a first mapping table for the first reporting variable and a second mapping table for the second reporting variable. 80. The method of claim 69, wherein performing one or more operations comprises determining the position of a node for which the positioning measurement is performed. 81. The method of claim 69, wherein the radio signal measurement is a positioning measurement. 82. The method of claim 69, wherein the radio signal measurement is a received signal time difference, RSTD, measurement or an observed time difference of arrival, OTDOA, measurement. 83. The method of claim 69, wherein the reporting node and the recipient node are configured for use in a Long Term Evolution (LTE) system. 84. The method of claim 69, wherein the reporting node is a user equipment and the recipient node is a base station. 85. 
A method, implemented by a configuring node, for configuring a reporting node for reporting the result of a radio signal measurement to a recipient node, the method comprising: generating configuration information for configuring the reporting node to report the result of a radio signal measurement using a joint mapping in which different possible values for a first reporting variable and different possible values for a second reporting variable are jointly mapped to different possible values for a reported result of the radio signal measurement; and sending the generated configuration information to the reporting node. 86. The method of claim 85, wherein the configuration information includes at least one of: the different possible values for the first reporting variable; and the different possible values for the second reporting variable. 87. The method of claim 85, wherein the configuration information indicates whether the reporting node is to report the result of the radio signal measurement using the joint mapping or whether the reporting node is to instead report the result of the radio signal measurement using a single mapping, wherein the single mapping maps different possible values for a single reporting variable to different possible values for the reported result of the radio signal measurement. 88. The method of claim 85, wherein the configuration information indicates one or more conditions under which the reporting node is to report the result of the radio signal measurement using the joint mapping. 89. 
A reporting node configured to report the result of a radio signal measurement to a recipient node, the reporting node comprising: processing circuitry and a memory, the memory containing instructions executable by the processing circuitry whereby the reporting node is configured to: obtain the result of a radio signal measurement; obtain a joint mapping in which different possible values for a first reporting variable and different possible values for a second reporting variable are jointly mapped to different possible values for the result of the radio signal measurement; map the result of the radio signal measurement to multiple reported values that jointly represent that result, using the obtained joint mapping, wherein the multiple reported values include one of the different possible values for the first reporting variable and one of the different possible values for the second reporting variable; and report the result of the radio signal measurement to the recipient node by sending the multiple reported values from the reporting node to the recipient node. 90. The reporting node of claim 89, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different possible ranges of results of the radio signal measurement within a range of results reported by the first reporting variable. 91. The reporting node of claim 89, wherein the different possible values for the first reporting variable correspond to different ranges of results of the radio signal measurement, and the different possible values for the second reporting variable correspond to different delta values by which to increment or decrement a component of a range of measurement results reported with the first reporting variable. 92. 
The reporting node of claim 89, wherein the radio signal measurement is a received signal time difference, RSTD, measurement or an observed time difference of arrival, OTDOA, measurement. 93. The reporting node of claim 89, wherein the reporting node is a user equipment and the recipient node is a base station. 94. A recipient node configured to determine the result of a radio signal measurement as reported by a reporting node, the recipient node comprising: processing circuitry and a memory, the memory containing instructions executable by the processing circuitry whereby the recipient node is configured to: receive multiple reported values that jointly represent a reported result of the radio signal measurement, wherein the multiple reported values comprise one of different possible values for a first reporting variable and one of different possible values for a second reporting variable, wherein the different possible values for the first reporting variable and the different possible values for the second reporting variable are jointly mapped to different possible values for the reported result of the radio signal measurement; and perform one or more operations based on the multiple reported values. 95. The recipient node of claim 94, wherein the memory contains instructions executable by the processing circuitry whereby the recipient node is configured to: obtain, at the recipient node, a joint mapping in which the different possible values for the first reporting variable and different possible values for the second reporting variable are jointly mapped to different possible values for the reported result of the radio signal measurement; map the multiple reported values to one of the different possible values for the reported result of the radio signal measurement, using the obtained joint mapping; and perform the one or more operations based on the reported result. 96. 
The recipient node of claim 94, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different possible ranges of results of the radio signal measurement within a range of results reported by the first reporting variable. 97. The recipient node of claim 94, wherein the different possible values for the first reporting variable correspond to different ranges of results of the radio signal measurement, and the different possible values for the second reporting variable correspond to different delta values by which to increment or decrement a component of a range of measurement results reported with the first reporting variable. 98. The recipient node of claim 94, wherein the radio signal measurement is a received signal time difference, RSTD, measurement or an observed time difference of arrival, OTDOA, measurement. 99. The recipient node of claim 94, wherein the reporting node is a user equipment and the recipient node is a base station. 100. A configuring node for configuring a reporting node for reporting the result of a radio signal measurement to a recipient node, the configuring node comprising: processing circuitry and a memory, the memory containing instructions executable by the processing circuitry whereby the configuring node is configured to: generate configuration information for configuring the reporting node to report the result of a radio signal measurement using a joint mapping in which different possible values for a first reporting variable and different possible values for a second reporting variable are jointly mapped to different possible values for a reported result of the radio signal measurement; and send the generated configuration information to the reporting node.
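The joint-mapping claims above (e.g. claims 58 and 60) describe encoding one measurement result as two reported values: a first variable selecting a coarse range and a second variable giving a finer value, or delta, within that range. A minimal Python sketch of such a coarse-plus-fine quantization, with hypothetical function names and step sizes that are illustrative assumptions, not the patent's mapping tables:

```python
def encode(value, coarse_step=8, fine_step=1):
    # First reporting variable: index of the coarse range containing the value.
    coarse = value // coarse_step
    # Second reporting variable: fine offset (delta) within that coarse range.
    fine = (value - coarse * coarse_step) // fine_step
    return coarse, fine

def decode(coarse, fine, coarse_step=8, fine_step=1):
    # Recipient reconstructs the result from the two jointly mapped values.
    return coarse * coarse_step + fine * fine_step
```

The point of the two-variable scheme is that the second variable adds resolution beyond what the first variable alone carries (claims 56-57), so the same coarse table can serve both low- and high-accuracy reports.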
A reporting node is configured to report the result of a radio signal measurement to a recipient node. The reporting node in this regard is configured to obtain the result as well as a joint mapping in which different possible values for a first reporting variable and different possible values for a second reporting variable are jointly mapped to different possible values for the result. The reporting node is configured to map the result to multiple reported values that jointly represent that result, using the obtained joint mapping. The multiple reported values include one of the different possible values for the first reporting variable and one of the different possible values for the second reporting variable. The reporting node is configured to report the result of the radio signal measurement to the recipient node by sending the multiple reported values from the reporting node to the recipient node.1-54. (canceled) 55. A method, implemented by a reporting node, for reporting the result of a radio signal measurement to a recipient node, the method comprising: obtaining, at the reporting node, the result of a radio signal measurement; obtaining, at the reporting node, a joint mapping in which different possible values for a first reporting variable and different possible values for a second reporting variable are jointly mapped to different possible values for the result of the radio signal measurement; mapping the result of the radio signal measurement to multiple reported values that jointly represent that result, using the obtained joint mapping, wherein the multiple reported values include one of the different possible values for the first reporting variable and one of the different possible values for the second reporting variable; and reporting the result of the radio signal measurement to the recipient node by sending the multiple reported values from the reporting node to the recipient node. 56. 
The method of claim 55, wherein the different possible values for the first reporting variable are mapped to different possible values for the result of the radio signal measurement at a first resolution, and the different possible values for the second reporting variable are mapped to different possible values for the result of the radio signal measurement at a second resolution. 57. The method of claim 56, wherein the second resolution is higher than the first resolution. 58. The method of claim 55, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different possible ranges of results of the radio signal measurement within a range of results reported by the first reporting variable. 59. The method of claim 55, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different delta values. 60. The method of claim 55, wherein the different possible values for the first reporting variable correspond to different ranges of results of the radio signal measurement, and the different possible values for the second reporting variable correspond to different delta values by which to increment or decrement a component of a range of measurement results reported with the first reporting variable. 61. The method of claim 55, further comprising dynamically determining whether the reporting node is to map the result of the radio signal measurement to the multiple reported values using the joint mapping or to a single reported value using a single mapping, wherein the single mapping maps different possible values for a single reporting variable to different possible values for the result of the radio signal measurement. 
62. The method of claim 61, wherein said dynamically determining is performed based on an accuracy or quality of the radio signal measurement. 63. The method of claim 61, wherein said dynamically determining is performed based on radio conditions, a type of radio environment, or a radio deployment scenario in which the radio signal measurement is performed. 64. The method of claim 55, wherein said joint mapping is embodied across a first mapping table for the first reporting variable and a second mapping table for the second reporting variable. 65. The method of claim 55, wherein the radio signal measurement is a positioning measurement. 66. The method of claim 55, wherein the radio signal measurement is a received signal time difference, RSTD, measurement or an observed time difference of arrival, OTDOA, measurement. 67. The method of claim 55, wherein the reporting node and the recipient node are configured for use in a Long Term Evolution (LTE) system. 68. The method of claim 55, wherein the reporting node is a user equipment and the recipient node is a base station. 69. A method, implemented by a recipient node, for determining the result of a radio signal measurement as reported by a reporting node, the method comprising: receiving, at the recipient node, multiple reported values that jointly represent a reported result of the radio signal measurement, wherein the multiple reported values comprise one of different possible values for a first reporting variable and one of different possible values for a second reporting variable, wherein the different possible values for the first reporting variable and the different possible values for the second reporting variable are jointly mapped to different possible values for the reported result of the radio signal measurement; and performing one or more operations based on the multiple reported values. 70. 
The method of claim 69, further comprising: obtaining, at the recipient node, a joint mapping in which the different possible values for the first reporting variable and different possible values for the second reporting variable are jointly mapped to different possible values for the reported result of the radio signal measurement; mapping the multiple reported values to one of the different possible values for the reported result of the radio signal measurement, using the obtained joint mapping; and performing the one or more operations based on the reported result. 71. The method of claim 69, wherein the different possible values for the first reporting variable are mapped to different possible values for the reported result of the radio signal measurement at a first resolution, and the different possible values for the second reporting variable are mapped to different possible values for the reported result of the radio signal measurement at a second resolution. 72. The method of claim 71, wherein the second resolution is higher than the first resolution. 73. The method of claim 69, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different possible ranges of results of the radio signal measurement within a range of results reported by the first reporting variable. 74. The method of claim 69, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different delta values. 75. 
The method of claim 69, wherein the different possible values for the first reporting variable correspond to different intermediate results of the radio signal measurement, and the different possible values for the second reporting variable correspond to different delta values by which the recipient node is to increment or decrement an intermediate result corresponding to the first reporting variable. 76. The method of claim 69, further comprising dynamically determining whether to map the one or more reported values to the reported result of the radio signal measurement using the joint mapping or to map a single reported value to the reported result of the radio signal measurement using a single mapping, wherein the single mapping maps different possible values for a single reporting variable to different possible values for the result of the radio signal measurement. 77. The method of claim 76, wherein said dynamically determining is performed based on an accuracy or quality of the radio signal measurement. 78. The method of claim 76, wherein said dynamically determining is performed based on radio conditions, a type of radio environment, or a radio deployment scenario in which the radio signal measurement is performed. 79. The method of claim 69, wherein said joint mapping is embodied across a first mapping table for the first reporting variable and a second mapping table for the second reporting variable. 80. The method of claim 69, wherein performing one or more operations comprises determining the position of a node for which the positioning measurement is performed. 81. The method of claim 69, wherein the radio signal measurement is a positioning measurement. 82. The method of claim 69, wherein the radio signal measurement is a received signal time difference, RSTD, measurement or an observed time difference of arrival, OTDOA, measurement. 83. 
The method of claim 69, wherein the reporting node and the recipient node are configured for use in a Long Term Evolution (LTE) system. 84. The method of claim 69, wherein the reporting node is a user equipment and the recipient node is a base station. 85. A method, implemented by a configuring node, for configuring a reporting node for reporting the result of a radio signal measurement to a recipient node, the method comprising: generating configuration information for configuring the reporting node to report the result of a radio signal measurement using a joint mapping in which different possible values for a first reporting variable and different possible values for a second reporting variable are jointly mapped to different possible values for a reported result of the radio signal measurement; and sending the generated configuration information to the reporting node. 86. The method of claim 85, wherein the configuration information includes at least one of: the different possible values for the first reporting variable; and the different possible values for the second reporting variable. 87. The method of claim 85, wherein the configuration information indicates whether the reporting node is to report the result of the radio signal measurement using the joint mapping or whether the reporting node is to instead report the result of the radio signal measurement using a single mapping, wherein the single mapping maps different possible values for a single reporting variable to different possible values for the reported result of the radio signal measurement. 88. The method of claim 85, wherein the configuration information indicates one or more conditions under which the reporting node is to report the result of the radio signal measurement using the joint mapping. 89. 
A reporting node configured to report the result of a radio signal measurement to a recipient node, the reporting node comprising: processing circuitry and a memory, the memory containing instructions executable by the processing circuitry whereby the reporting node is configured to: obtain the result of a radio signal measurement; obtain a joint mapping in which different possible values for a first reporting variable and different possible values for a second reporting variable are jointly mapped to different possible values for the result of the radio signal measurement; map the result of the radio signal measurement to multiple reported values that jointly represent that result, using the obtained joint mapping, wherein the multiple reported values include one of the different possible values for the first reporting variable and one of the different possible values for the second reporting variable; and report the result of the radio signal measurement to the recipient node by sending the multiple reported values from the reporting node to the recipient node. 90. The reporting node of claim 89, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different possible ranges of results of the radio signal measurement within a range of results reported by the first reporting variable. 91. The reporting node of claim 89, wherein the different possible values for the first reporting variable correspond to different ranges of results of the radio signal measurement, and the different possible values for the second reporting variable correspond to different delta values by which to increment or decrement a component of a range of measurement results reported with the first reporting variable. 92. 
The reporting node of claim 89, wherein the radio signal measurement is a received signal time difference, RSTD, measurement or an observed time difference of arrival, OTDOA, measurement. 93. The reporting node of claim 89, wherein the reporting node is a user equipment and the recipient node is a base station. 94. A recipient node configured to determine the result of a radio signal measurement as reported by a reporting node, the recipient node comprising: processing circuitry and a memory, the memory containing instructions executable by the processing circuitry whereby the recipient node is configured to: receive multiple reported values that jointly represent a reported result of the radio signal measurement, wherein the multiple reported values comprise one of different possible values for a first reporting variable and one of different possible values for a second reporting variable, wherein the different possible values for the first reporting variable and the different possible values for the second reporting variable are jointly mapped to different possible values for the reported result of the radio signal measurement; and perform one or more operations based on the multiple reported values. 95. The recipient node of claim 94, wherein the memory contains instructions executable by the processing circuitry whereby the recipient node is configured to: obtain, at the recipient node, a joint mapping in which the different possible values for the first reporting variable and different possible values for the second reporting variable are jointly mapped to different possible values for the reported result of the radio signal measurement; map the multiple reported values to one of the different possible values for the reported result of the radio signal measurement, using the obtained joint mapping; and perform the one or more operations based on the reported result. 96. 
The recipient node of claim 94, wherein the different possible values for the first reporting variable correspond to different possible ranges of results of the radio signal measurement and the different possible values for the second reporting variable correspond to different possible ranges of results of the radio signal measurement within a range of results reported by the first reporting variable. 97. The recipient node of claim 94, wherein the different possible values for the first reporting variable correspond to different ranges of results of the radio signal measurement, and the different possible values for the second reporting variable correspond to different delta values by which to increment or decrement a component of a range of measurement results reported with the first reporting variable. 98. The recipient node of claim 94, wherein the radio signal measurement is a received signal time difference, RSTD, measurement or an observed time difference of arrival, OTDOA, measurement. 99. The recipient node of claim 94, wherein the reporting node is a user equipment and the recipient node is a base station. 100. A configuring node for configuring a reporting node for reporting the result of a radio signal measurement to a recipient node, the configuring node comprising: processing circuitry and a memory, the memory containing instructions executable by the processing circuitry whereby the configuring node is configured to: generate configuration information for configuring the reporting node to report the result of a radio signal measurement using a joint mapping in which different possible values for a first reporting variable and different possible values for a second reporting variable are jointly mapped to different possible values for a reported result of the radio signal measurement; and send the generated configuration information to the reporting node.
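The claims above describe reporting one measurement through two jointly mapped variables: a first variable selecting a coarse range of results and a second selecting a finer sub-range within it. A minimal sketch of that coarse/fine joint mapping, with all range widths and units being illustrative assumptions rather than values from the claims:

```python
# Hypothetical sketch of the joint-mapping report scheme in the claims
# above: a coarse reporting variable selects a range of measurement
# results, and a fine reporting variable selects a sub-range within it.
# COARSE_STEP and FINE_STEP are assumed values for illustration only.

COARSE_STEP = 10.0   # width of each coarse range (assumed units)
FINE_STEP = 1.0      # width of each fine sub-range within a coarse range

def encode(measurement: float) -> tuple[int, int]:
    """Map a measurement to the (coarse, fine) pair of reported values."""
    coarse = int(measurement // COARSE_STEP)
    fine = int((measurement - coarse * COARSE_STEP) // FINE_STEP)
    return coarse, fine

def decode(coarse: int, fine: int) -> float:
    """Recover the lower edge of the sub-range the pair jointly encodes."""
    return coarse * COARSE_STEP + fine * FINE_STEP

c, f = encode(37.4)   # -> coarse range 3, fine sub-range 7
assert decode(c, f) == 37.0
```

The recipient performs the inverse mapping (claim 95): given both reported values and the same joint mapping, it resolves the pair back to a measurement range before acting on it.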
2,600
10,983
10,983
11,368,348
2,625
A method of providing visual information to a human viewer includes the steps of defining a range of distances from a surface and a range of viewing angles with respect to the surface, determining the location and viewing angle of a human viewer with respect to the surface, and providing a virtual image to the human viewer via a visual display device worn by the human viewer when the location and viewing angle of the human viewer with respect to the surface is determined to be within the defined range of distances and viewing angles, such that the virtual image is perceived to be defined on the surface by the human viewer.
1. A method of providing visual information to a human viewer, the method comprising the steps of: a) defining a range of distances from a surface and a range of viewing angles with respect to the surface, b) determining the location and viewing angle of a human viewer with respect to the surface, and c) providing a virtual image to the human viewer via a visual display device worn by the human viewer when the location and viewing angle of the human viewer with respect to the surface is determined to be within the range of distances and viewing angles selected in step a), such that the virtual image is perceived to be defined on the surface by the human viewer. 2. The method of claim 1 wherein the surface is selected from the group consisting of a billboard, a wall, a static display and a hand-held item. 3. The method of claim 2 wherein the hand-held item is selected from the group consisting of a book, a magazine, a newspaper and a menu. 4. The method of claim 2 wherein at least a portion of the surface is blank. 5. The method of claim 2 wherein at least a portion of the surface is a blue surface. 6. The method of claim 1 wherein in step b) the location and viewing angle of the human viewer with respect to the surface are determined using the GPS coordinates of the human viewer and the surface. 7. The method of claim 1 wherein in step c) the visual display device comprises a heads-up display. 8. The method of claim 1 wherein in step c) the virtual image provided to the human viewer comprises an image selected from the group consisting of an advertisement, a menu and a public notice. 9. The method of claim 1 wherein in step c) the virtual image is provided to the human viewer via wireless transmission means. 10. The method of claim 1 wherein the human viewer is a member of an organization and the virtual image is provided to the human viewer from a central site affiliated with the organization. 11. 
The method of claim 10 wherein prior to step c) the human viewer selects at least one good or service for which the human viewer requests the provision of a virtual image. 12. The method of claim 10 wherein prior to step c) the human viewer selects at least one advertisement format in which the virtual image is to be displayed. 13. The method of claim 12 wherein the advertisement format is selected from the group consisting of text, still images, video images and combinations thereof. 14. The method of claim 13 wherein the advertisement format comprises images of human models. 15. The method of claim 11 wherein the virtual image comprises an image selected from the group consisting of an advertisement and a menu. 16. The method of claim 10 wherein prior to step c) the human viewer selects at least one event for which the human viewer requests the provision of a virtual image. 17. The method of claim 16 wherein the event is selected from the group consisting of a sale, a sporting event, a movie and a live performance. 18. The method of claim 17 wherein the event is a sale and wherein the human viewer selects at least one good or service that is the subject of the sale. 19. The method of claim 1 wherein the virtual image is accompanied by an audio stream. 20. The method of claim 19 wherein the audio stream comprises a verbal advisory. 21. The method of claim 15 wherein a premium is provided to the human viewer when the human viewer views the virtual image. 22. The method of claim 1 wherein the surface comprises at least a portion of a surface of a vehicle. 23. The method of claim 22 wherein the vehicle is in motion. 24. The method of claim 1 wherein the surface comprises an article of clothing worn by a human performer. 25. The method of claim 24 wherein the virtual image comprises an image of a costume. 26. The method of claim 24 wherein the article of clothing is a mask and wherein the virtual image comprises an image of a human face. 27. 
A method of receiving visual information from a provider, the visual information being transmitted to a human viewer as a virtual image, the human viewer wearing a visual display device enabling viewing of a virtual image, the method comprising the steps of: a) providing the location of a human viewer to a central site with which a provider is associated, b) determining the distance and the viewing angle between the human viewer and a surface associated with the provider, and c) receiving a virtual image from the provider via a visual display device worn by the human viewer enabling viewing of the virtual image when the distance and viewing angle between the human viewer and the surface is determined to be within a range of distances and viewing angles specified by the provider. 28. The method of claim 27 wherein in step a) the location of the human viewer is determined using the GPS coordinates of the human viewer. 29. The method of claim 27 wherein the human viewer is a member of an organization and the central site is associated with the organization. 30. The method of claim 29 wherein prior to step a) the human viewer selects at least one good or service for which the human viewer requests the provision of a virtual image. 31. The method of claim 29 wherein prior to step a) the human viewer selects at least one advertisement format in which the virtual image is to be displayed. 32. The method of claim 29 wherein prior to step a) the human viewer selects at least one event for which the human viewer requests the provision of a virtual image. 33. 
A system for providing visual information to a human viewer, the system comprising: a) means for determining the location and viewing angle of a human viewer with respect to a surface, b) a visual display device worn by the human viewer, and c) means for providing a virtual image to the human viewer via the visual display device when the location and viewing angle of the human viewer with respect to the surface is determined to be within a preselected range of distances and viewing angles with respect to the surface, such that the virtual image is perceived to be defined on the surface by the human viewer. 34. The system of claim 33 wherein an item adapted to be held by the human viewer comprises the surface. 35. The system of claim 34 wherein the item is selected from the group consisting of a book, a magazine and a newspaper.
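Claim 1 above gates display of the virtual image on the viewer's distance and viewing angle both falling within predefined ranges. A minimal sketch of that gating check on a flat plane; the coordinate model, function names, and the 50 m / 60 degree limits are assumptions for illustration, not values from the claims:

```python
import math

# Hypothetical sketch of the distance/viewing-angle gate in the claims
# above: show the virtual image only when the viewer is within an assumed
# maximum distance of the surface AND views it within an assumed maximum
# angle off the surface's normal. 2-D coordinates for simplicity.

def viewing_geometry(viewer_xy, surface_xy, surface_normal_deg):
    """Return (distance, viewing angle in degrees) viewer-to-surface."""
    dx = viewer_xy[0] - surface_xy[0]
    dy = viewer_xy[1] - surface_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest angle between the viewer's bearing and the surface normal.
    angle = abs((bearing - surface_normal_deg + 180.0) % 360.0 - 180.0)
    return distance, angle

def should_display(viewer_xy, surface_xy, surface_normal_deg,
                   max_distance=50.0, max_angle=60.0):
    d, a = viewing_geometry(viewer_xy, surface_xy, surface_normal_deg)
    return d <= max_distance and a <= max_angle

# Viewer 30 m in front of a surface whose normal points along +x:
assert should_display((30.0, 0.0), (0.0, 0.0), 0.0)
# Viewer behind the surface (viewing angle 180 degrees): not shown.
assert not should_display((-30.0, 0.0), (0.0, 0.0), 0.0)
```

In the claimed system the same test would be driven by GPS coordinates of the viewer and the surface (claim 6), with the ranges supplied by the provider.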
2,600
10,984
10,984
16,374,404
2,667
Data are encoded into one or more optically encoded images. The optically encoded images are then inserted as image data into a video sequence—i.e., in video frames. Data are transmitted in-band within the video, via any conceivable video distribution channel or format. The video may be trans-coded as required—because the data are optically encoded, any video processing that even crudely preserves the frame images will preserve the optically encoded data. This scheme of in-band data transfer in video is very robust. A video receiving apparatus receives the video, inspects the image data from video frames in memory, detects optically encoded images in the image data, and decodes the optically encoded images to recover the data. The frames carrying optically encoded images are typically discarded and not rendered to a display. The data from a plurality of optically encoded images may be concatenated, and further processed.
1. An efficient and robust method of transferring data in-band in a video sequence via optically encoded images, comprising: obtaining digital data to be transferred; if the data exceeds a predetermined size, segmenting the digital data into one or more data segments, each at or below the predetermined size; optically encoding each data segment into an optically encoded image; embedding each optically encoded image into a frame of a video sequence; and transferring the video to a recipient. 2. The method of claim 1 wherein an optically encoded image comprises a one- or two-dimensional bar code. 3. The method of claim 2 wherein the two-dimensional code comprises a Quick Response (QR) code. 4. The method of claim 1 wherein embedding each optically encoded image into a frame of a video sequence comprises embedding two or more optically encoded images in contiguous frames. 5. The method of claim 1 wherein embedding each optically encoded image into a frame of a video sequence comprises interspersing two or more video frames containing optically encoded images with video frames containing video content images. 6. The method of claim 1 wherein embedding each optically encoded image into a frame of a video sequence comprises embedding two or more optically encoded images into a single video frame. 7. The method of claim 1 wherein embedding each optically encoded image into a frame of a video sequence comprises, for each optically encoded image, altering one or more visual aspects of video content images in one or more frames of video according to the encoded image pattern. 8. The method of claim 7 wherein altering one or more visual aspects of video content images in one or more frames of video according to the encoded image pattern comprises altering the visual aspects over a plurality of frames of video, and to a degree that the alterations are imperceptible to humans viewing a rendering of the video on a display. 9. 
The method of claim 1 wherein the video sequence is in any format that decodes to a sequence of still images. 10. The method of claim 1 wherein transferring the video to a recipient comprises transcoding the video from one format to another. 11. A video receiving apparatus configured to extract digital data transferred in-band in a video sequence via optically encoded images, comprising: an interface configured to obtain a digital representation of a video sequence; memory configured to store image data from the video sequence; and processing circuitry configured to detect one or more optically encoded images in the image data stored in memory, and decode each optically encoded image to yield a data segment. 12. The video receiving apparatus of claim 11 wherein, if more than one optically encoded image is detected and decoded, the processing circuitry is further configured to assemble two or more decoded data segments to recover the full digital data transferred. 13. The video receiving apparatus of claim 11 wherein the interface configured to obtain a digital representation of a video sequence comprises a digital video encoder configured to receive and digitize an analog video signal. 14. The video receiving apparatus of claim 11 further comprising a video player configured to render at least part of a received video sequence to a display. 15. The video receiving apparatus of claim 14 wherein the video receiving apparatus is further configured to suppress frames containing optically encoded images from the video sequence rendered to a display. 16. The video receiving apparatus of claim 11 further comprising an asset management system, and wherein data decoded from optically encoded images transmitted in-band in the video sequence comprise metadata related to the video in which they are transmitted and wherein the video receiving apparatus is configured to output at least the metadata to the asset management system. 17. 
The video receiving apparatus of claim 11 wherein data decoded from optically encoded images transmitted in-band in the video sequence comprise control signals configured to control an external device. 18. The video receiving apparatus of claim 17 wherein the device is a musical instrument. 19. The video receiving apparatus of claim 18 wherein the control signals comply with the Musical Instrument Digital Interface (MIDI) protocol. 20. A method of improving the operation of a video receiving apparatus by extracting data transferred in-band in a video sequence via optically encoded images, comprising: obtaining a digital representation of a video sequence, the video including at least one frame comprising one or more optically encoded images; determining one or more candidate video frames likely to include one or more optically encoded images; and for at least each candidate video frame, inspecting a digital representation of the frame in memory; detecting one or more optically encoded images in the frame; and decoding each optically encoded image to extract a data segment. 21. The method of claim 20 further comprising, if more than one optically encoded image was detected and decoded, assembling two or more corresponding data segments. 22. The method of claim 20 wherein each optically encoded image comprises a one- or two-dimensional bar code. 23. The method of claim 20 wherein the two-dimensional code comprises a Quick Response (QR) code. 24. The method of claim 20 further comprising processing or outputting the data extracted from optically encoded images without rendering the frames comprising optically encoded images to a display. 25. The method of claim 24 wherein outputting the data comprises outputting one or more text or image files. 26. The method of claim 24 wherein outputting the data comprises outputting to a musical instrument control signals complying with the Musical Instrument Digital Interface (MIDI) protocol. 27. 
The method of claim 24 wherein the optically encoded data comprises metadata related to the video sequence, and wherein processing the data comprises updating an asset management system with the metadata.
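Claim 1 above segments oversized data into pieces at or below a predetermined size before each piece is optically encoded into a frame, and claims 12 and 21 reassemble the decoded segments at the receiver. A minimal sketch of just that segmentation/reassembly step; the 1024-byte limit is an assumed value, and the actual QR encoding of each segment is out of scope here:

```python
# Hypothetical sketch of the segmentation and reassembly steps in the
# claims above. Data larger than an assumed per-image payload limit is
# split into segments; each segment would then be optically encoded
# (e.g., as a QR code) into its own video frame, and the receiver
# concatenates the decoded segments to recover the original data.

MAX_SEGMENT = 1024  # assumed per-image payload limit, in bytes

def segment(data: bytes, limit: int = MAX_SEGMENT) -> list[bytes]:
    """Split data into segments of at most `limit` bytes each."""
    return [data[i:i + limit] for i in range(0, len(data), limit)] or [b""]

def reassemble(segments: list[bytes]) -> bytes:
    """Concatenate decoded segments back into the original data."""
    return b"".join(segments)

payload = b"x" * 2500
parts = segment(payload)
assert len(parts) == 3            # 1024 + 1024 + 452 bytes
assert reassemble(parts) == payload
```

A production scheme would also tag each segment with an index so the receiver can detect missing frames and order the segments, but the claims leave that framing open.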
2,600
10,985
10,985
14,273,439
2,647
An Automatic Emergency Call Initiator (AECI) initiates an automatic emergency call protocol on a mobile communication system, which can be done using signaling messages. The user uses the AECI to initiate the call on a mobile station. GPS or other location data is automatically determined. Data is stored on an Emergency Notification Server (ENS) associated with an event identifier for easy retrieval or notification to emergency responders. The ENS generates an automated call to a call center and also supports the automatic emergency call protocol by storing GPS and identifying info on mobile stations meeting location criteria of the AECI initiated call. The ENS can also tag a mobile station to continue tracking mobile stations coming within a specified distance of the mobile station. A security alert protocol for predetermined mobile stations detected by a network can also be implemented using the ENS.
1-53. (canceled) 54. A method for automatically making an emergency call in an IP data packet-based communication system, comprising the steps of: transmitting an emergency call packet from a first communicating device, the transmitting initiated by a connected Bluetooth device; causing the Bluetooth device to initiate the emergency call with a condition input sensor device associated with the Bluetooth device; wherein the first communicating device transmits at least geographic location data derived at least in part from a combined radiolocation and GPS method to summon aid, without voice or text communication from a user, in a packet message format. 55. The method of claim 54, further comprising the step of: executing a tagging protocol to track and store the location of the first communicating device in real time. 56. The method of claim 54, further comprising the step of: performing a locator and identifying protocol to locate and identify any mobile communicating device within at least a predetermined radius centered on the first communicating device. 57. The method of claim 54, further comprising the step of: executing a virtual, mobile emergency alarm protocol to communicate to a plurality of mobile communication devices within a designated radius centered on the first communication device that an emergency call has been initiated. 58. The method of claim 54, further comprising the step of: executing a security alert protocol, wherein an associated telecommunication system stores at least one detected communicating device identifier indexed against an alert identifier associated with a watch list to tag said detected communicating device. 59. The method of claim 58, further comprising the step of: identifying and tagging any second communicating device remaining within a predetermined radius centered on the tagged communicating device for a predetermined elapsed period of time. 60. 
The method of claim 54, wherein the condition input includes at least one of the following: an audio input; a pulse input; a temperature input; a respiration input; a biological input; a medical input; a G-force input; a temperature input; a speed input; an environmental input; or a physical parameter input. 61. A method for using a mobile communication device, comprising the steps of: receiving a data packet on a wireless communication network from a first mobile communication device indicating an emergency as indicated by a detected condition; executing a locator protocol to determine the geographic location of the first mobile communication device using at least partially or optionally a hybrid GPS and radiolocation method; wherein the data packet's address routes the data packet to an emergency service to respond by dispatching emergency aid without voice or text communication from a user. 62. The method of claim 61, further comprising the step of: tagging the first mobile communication device. 63. The method of claim 61, further comprising the step of: tagging a second communication device using the geographic location of the first mobile communication device. 64. The method of claim 61, further comprising the step of: executing the locator protocol to identify and determine the geographic location of any mobile communication device located within a specified geographic radius centered on the first mobile communication device, and store the identity and geographic location in a database. 65. The method of claim 61, further comprising the step of: identifying and tagging any second mobile communication device remaining within a predetermined radius centered on an already tagged mobile communication device for a predetermined elapsed period of time. 66. 
A method of communicating with a call center, comprising the steps of: receiving a message packet activating an automatic communication protocol and an automatic locator protocol at a communication node of a packet-based communication system, the message packet generated in response to an input indicating an emergency and a user unable to communicate; transmitting an automatic communication containing a location, identifier, and voice or text data automatically generated by a node interfaced with a remote device to report an emergency event. 67. The system of claim 66, further comprising: at least one node used to track the location of a first cellular telephone in real time, wherein the first cellular telephone transmits the automatic communication indicating the emergency event and user inability to communicate; and a node used to tag the first cellular telephone and at least a second cellular telephone meeting a location criteria in relation to the first cellular telephone in response to the automatic communication. 68. The system of claim 66, further comprising: the transceiver communication node used to determine the location of at least a first cellular telephone sending the automatic communication with a hybrid method integrating both GPS and network radiolocation methods, or one of the two methods, as required. 69. The system of claim 66, wherein the communication system executes a locator protocol to identify and record identifying data for all cellular telephones located within a predetermined location criteria centered on a first cellular telephone initiating the automatic communication. 70. The system of claim 66, wherein the communication system executes a tagging protocol to identify and track the location for all cellular telephones located within a predetermined location criteria centered on a first cellular telephone initiating the automatic communication. 71. 
The system of claim 66, further comprising using the communication system to implement a security alert system, wherein at least one node on the communication system is provisioned with data associated with at least one cellular telephone identifier on a watch list and used to detect said one cellular telephone; the communication system used in an automatic security call protocol to communicate to a call center a security alert data packet upon detection of the one cellular telephone; and the communication system tags the at least one cellular telephone to continue location tracking in real time. 72. The system of claim 66, wherein the system tags a cellular telephone remaining within a location and time criteria in relation to an already tagged cellular telephone. 73. The system of claim 66, wherein the remote device comprises a sensor device to cause transmitting of the notification message by a response to a condition that includes at least one of the following: an audio input; a pulse input; a temperature input; a respiration input; a biological input; a medical input; a G-force input; a temperature input; a speed input; an environmental input; or a physical parameter input. 74. The system of claim 66, wherein the node comprises a cellular telephone interfaced to the remote device by a Bluetooth communication protocol.
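Several of the claims above (e.g. 56, 64, and 69) recite a locator protocol that identifies every mobile device within a predetermined radius centered on the initiating device. A hedged sketch of that distance test follows, using the standard haversine great-circle formula on GPS fixes; the function names and the device table are illustrative assumptions, not from the source.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (haversine)."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def devices_within_radius(origin, devices, radius_m):
    """Locator-protocol sketch (claims 56/64): return the identifiers of
    every device whose last fix lies within radius_m of the origin fix."""
    return [
        dev_id
        for dev_id, (lat, lon) in devices.items()
        if haversine_m(origin[0], origin[1], lat, lon) <= radius_m
    ]

# Illustrative usage: one nearby device, one ~111 km away.
near = devices_within_radius(
    (40.0, -75.0),
    {"a": (40.0, -75.0), "b": (40.0005, -75.0), "c": (41.0, -75.0)},
    100.0,
)
```

The claims also allow a hybrid GPS/network-radiolocation fix; the distance test itself is the same regardless of how each position was derived.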
2,600
10,986
10,986
16,808,548
2,683
Communications capabilities are supplied to components of pool water recirculation systems, even if the components lack electrical power or supply wires. Capabilities may be furnished by wireless RF devices that connect to existing fittings or ports of the components, for example. The devices are configured to obtain desired information relating to the components (or the water within them) and transmit the information remotely for processing or consideration.
1-25. (canceled) 26. A method of determining a characteristic of a pump of a pool- or spa-water circulation system, the pump also including a filter, an impeller, and a fluid outlet, the method comprising: a. mechanically connecting a first device including a first pressure sensor and a first electronic transmitter to the pump downstream of the filter and upstream of the impeller; b. mechanically connecting a second device including a second pressure sensor and a second electronic transmitter to the pump downstream of the impeller and upstream of the fluid outlet; c. electronically transmitting first pressure values obtained by the first pressure sensor to a first location remote from the pump; d. electronically transmitting second pressure values obtained by the second pressure sensor to the first location; and e. evaluating the transmitted first and second pressure values to determine the characteristic of the pump.
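Claim 26 determines a pump characteristic by evaluating pressure values transmitted from a sensor upstream of the impeller and a second sensor downstream of it (step e). The sketch below shows one such evaluation under stated assumptions: the claim leaves the "characteristic" open, so the mean pressure rise across the impeller and the 8.0 psi health threshold are purely illustrative, not values from the source.

```python
def pump_differential(first_psi, second_psi):
    """Claim 26(e) sketch: evaluate paired pressure readings taken
    upstream (first sensor) and downstream (second sensor) of the
    impeller. Returns the mean pressure rise and an illustrative
    health status; the 8.0 psi threshold is an assumption."""
    rise = [hi - lo for lo, hi in zip(first_psi, second_psi)]
    mean_rise = sum(rise) / len(rise)
    # Illustrative check: a worn impeller or restricted flow would show
    # up as an abnormally low pressure rise across the impeller.
    status = "ok" if mean_rise >= 8.0 else "degraded"
    return mean_rise, status

# Usage with simulated transmitted readings (psi):
mean_rise, status = pump_differential([2.0, 2.0, 3.0], [12.0, 13.0, 14.0])
```

Because both devices transmit to the same remote location, this evaluation can run off-pump, which is consistent with the wireless, unpowered fittings described in the abstract.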
2,600
10,987
10,987
16,079,710
2,653
An intercom device and a method of recording an audio stream at the intercom device. The intercom device includes a user interface, a memory, and an electronic processor. The electronic processor is communicatively coupled to the user interface and the memory and is configured to receive a user-selectable entry from an input mechanism of the user interface. The electronic processor sets an away mode of the intercom device in response to the user-selectable entry. An audio stream is received by the intercom device from another intercom device. A record signal indicative of a request to record the audio stream is also received from the another intercom device. The intercom device records the audio stream in the memory in response to receiving the record signal when operating in the away mode.
1. An intercom device for recording an audio stream, the intercom device comprising: a user interface including an input mechanism; a memory; and an electronic processor communicatively coupled to the user interface and the memory, the electronic processor configured to receive a user-selectable entry from the input mechanism; set an away mode of the intercom device in response to the user-selectable entry; receive an audio stream from another intercom device; receive a record signal indicative of a request to record the audio stream from the another intercom device; and record the audio stream in the memory in response to receiving the record signal when operating in the away mode. 2. The intercom device according to claim 1, wherein the electronic processor is further configured to receive a call signal from the another intercom device prior to receiving the audio stream, the call signal initiating communication with the intercom device. 3. The intercom device according to claim 2, wherein the electronic processor is further configured to send an away signal indicative of the away mode to the another intercom device when the away mode is set and the call signal is received. 4. The intercom device according to claim 1, wherein the electronic processor is further configured to broadcast an away signal to other intercom devices indicating that the intercom device is in the away mode prior to receiving an audio stream from another intercom device. 5. The intercom device according to claim 1, wherein the user interface includes a speaker configured to play the audio stream, and a display configured to display an indication that the away mode of the intercom device is set. 6. The intercom device according to claim 5, wherein the display includes a graphical user interface and the indication of the away mode of the intercom device includes an icon displayed on the graphical user interface. 7. 
An intercom device for transmitting an audio stream, the intercom device comprising: a user interface including an input mechanism; a memory; and an electronic processor communicatively coupled to the user interface and the memory, the electronic processor configured to transmit an audio stream to another intercom device; receive an away signal from the another intercom device indicating that the another intercom device is in an away mode, receive a user-selectable entry from the input mechanism; transmit a record signal to the another intercom device in response to the user-selectable entry that causes the another intercom device to record the audio stream. 8. The intercom device according to claim 7, wherein the electronic processor is further configured to transmit a call signal to the another intercom device prior to transmitting the audio stream, the call signal initiating communication with the another intercom device. 9. The intercom device according to claim 8, wherein the electronic processor is further configured to receive an away signal indicating that the another intercom device is in the away mode when the away mode is set and the call signal is transmitted. 10. The intercom device according to claim 7, further comprising a display, and wherein the electronic processor is further configured to generate an indication of the away mode of the another intercom device on the display when the away signal is received. 11. The intercom device according to claim 10, wherein the display of the intercom device includes a graphical user interface and the indication of the away status includes an icon displayed on the graphical user interface. 12. The intercom device according to claim 7, wherein the input mechanism includes a plurality of multi-position switches that are each configured to select one of a plurality of intercom devices. 13. 
The intercom device according to claim 7, wherein the electronic processor is further configured to simultaneously receive one entry selecting the another intercom device and another entry selecting a request to record the audio stream during transmission of the audio stream. 14. A method of recording an audio stream at a first intercom device, the method comprising: receiving a first user-selectable entry from a first input mechanism of the first intercom device; setting an away mode of the first intercom device in response to the first user-selectable entry; receiving a call signal from a second intercom device, the call signal initiating a call to the first intercom device; sending, from the first intercom device to the second intercom device, an away signal indicating that the first intercom device is set to the away mode; receiving a second user-selectable entry from a second input mechanism of the second intercom device; transmitting an audio stream, from the second intercom device to the first intercom device, in response to the second user-selectable entry; transmitting a record signal, from the second intercom device to the first intercom device, in response to the second user-selectable entry, the record signal causing the first intercom device to record the audio stream; and recording the audio stream in a memory of the first intercom device in response to the record signal. 15. The method of recording the audio stream according to claim 14, the method further comprising: generating an indication of the away mode of the first intercom device on a first display of the first intercom device and on a second display of the second intercom device. 16.
The method of recording the audio stream according to claim 15, wherein generating the indication of the away mode of the first intercom device includes displaying an icon on a first graphical user interface of the first display and displaying another icon on a second graphical user interface of the second display. 17. The method of recording the audio stream according to claim 14, wherein receiving the second user-selectable entry from the second input mechanism includes receiving a selection from one of a plurality of multi-position switches, each of the plurality of multi-position switches being configured to select one of a plurality of intercom devices for transmission. 18. The method of recording the audio stream according to claim 14, wherein receiving the second user-selectable entry from the second input mechanism of the second intercom device includes receiving one entry selecting the first intercom device and another entry selecting a request to record the audio stream. 19. The method of recording the audio stream according to claim 18, wherein receiving one entry selecting the first intercom device and another entry selecting the request to record the audio stream are received simultaneously while transmitting the audio stream.
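The away-mode recording behaviour recited in claims 1-3 and 14 above can be sketched as a small state machine: the device answers a call with an away signal when away mode is set, and records an incoming audio stream only once a record signal has been received. All class and method names below are illustrative assumptions, not the patented implementation.

```python
class Intercom:
    """Minimal sketch of the claimed away-mode behaviour (claims 1-3, 14).
    Only the recording rule is modelled; signalling is reduced to calls."""

    def __init__(self):
        self.away = False            # claim 1: away mode flag
        self.record_requested = False
        self.memory = []             # claim 1: recorded streams

    def set_away(self):
        # Claim 1: away mode is set in response to a user-selectable entry.
        self.away = True

    def receive_call(self):
        # Claims 2-3: on an incoming call signal, report away status
        # back to the calling intercom device.
        return "away" if self.away else "present"

    def receive_record_signal(self):
        # Claim 1: record signal indicating a request to record.
        self.record_requested = True

    def receive_audio(self, stream):
        # Claim 1: record only when in away mode and a record signal
        # has been received; otherwise the stream is just played live.
        if self.away and self.record_requested:
            self.memory.append(stream)

# Usage: an away device records; a present device does not.
away_dev = Intercom()
away_dev.set_away()
status = away_dev.receive_call()
away_dev.receive_record_signal()
away_dev.receive_audio("stream-1")
```

The same objects also illustrate claim 14's two-device flow: the caller learns the callee is away from `receive_call`, then sends the record signal alongside the audio stream.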
2,600
10,988
10,988
16,439,722
2,626
A device substrate includes a carrier, a device array, first fan-out lines, and second fan-out lines. The carrier has a first side, a second side, a third side, and a fourth side. The first side is opposite to the second side. The third side is opposite to the fourth side. The device array is disposed on a first surface of the carrier. The device array includes sub-pixels. Each of the sub-pixels includes a switching element and an optoelectronic element electrically connected with the switching element. The first fan-out lines are extending from the first side to the first surface and electrically connected with the device array. The second fan-out lines are extending from the second side to the first surface and electrically connected with the device array. The first fan-out lines and the second fan-out lines are asymmetrically disposed on the first side and the second side, respectively.
1. A device substrate, comprising: a carrier, having a first side, a second side, a third side, and a fourth side, wherein the first side is opposite to the second side, and the third side is opposite to the fourth side; a device array, disposed on a first surface of the carrier, wherein the device array comprises a plurality of sub-pixels, and each of the plurality of sub-pixels comprises a switching element and an optoelectronic element electrically connected with the switching element; a plurality of first fan-out lines, extending from a first edge of the first side of the carrier to the sub-pixels on the first surface of the carrier and electrically connected with the device array through a part of scan lines or a part of data lines of the device array; and a plurality of second fan-out lines, extending from a second edge of the second side of the carrier to the sub-pixels on the first surface of the carrier and electrically connected with the device array through another part of scan lines or another part of data lines of the device array, wherein the plurality of first fan-out lines and the plurality of second fan-out lines are asymmetrically disposed on the first side and the second side, respectively. 2. The device substrate according to claim 1, wherein the plurality of first fan-out lines are extending on the first surface of the carrier along a first extending direction, and at least part of the plurality of first fan-out lines are not overlapped with the plurality of second fan-out lines in the first extending direction. 3. 
The device substrate according to claim 1, further comprising: a plurality of third fan-out lines, extending from a third edge of the third side of the carrier to the sub-pixels on the first surface of the carrier and electrically connected with the device array through a part of data lines or a part of scan lines of the device array; and a plurality of fourth fan-out lines, extending from a fourth edge of the fourth side of the carrier to the sub-pixels on the first surface of the carrier and electrically connected with the device array through another part of data lines or another part of scan lines of the device array, wherein the plurality of third fan-out lines and the plurality of fourth fan-out lines are asymmetrically disposed on the third side and the fourth side, respectively, wherein the plurality of third fan-out lines and the plurality of fourth fan-out lines are electrically connected with the data lines when the plurality of first fan-out lines and the plurality of second fan-out lines are electrically connected with the scan lines, and the plurality of third fan-out lines and the plurality of fourth fan-out lines are electrically connected with the scan lines when the plurality of first fan-out lines and the plurality of second fan-out lines are electrically connected with the data lines. 4. The device substrate according to claim 1, wherein the plurality of first fan-out lines and the plurality of second fan-out lines are electrically connected with scan lines or data lines of the device array. 5. The device substrate according to claim 1, wherein the optoelectronic element comprises a self-luminous element or a non self-luminous element. 6. The device substrate according to claim 1, wherein the carrier is a flexible carrier, and the carrier is bent at the first side. 7. 
The device substrate according to claim 1, further comprising: a first flexible printed circuit board, located on the first side of the carrier, and bent from the first surface of the carrier through the first edge of the first side to a second surface of the carrier opposite to the first surface, wherein the plurality of first fan-out lines are located on the first flexible printed circuit board. 8. The device substrate according to claim 1, wherein the plurality of first fan-out lines are bent from the first surface of the carrier through the first edge of the first side to a second surface of the carrier opposite to the first surface. 9. A spliced electronic apparatus, comprising: two said device substrates of claim 1, the first side of one of the two said device substrates being adjacent to the second side of another one of the two said device substrates. 10. The spliced electronic apparatus according to claim 9, wherein the plurality of first fan-out lines of one of the two said device substrates are asymmetric to the plurality of second fan-out lines of another one of the two said device substrates. 11. The spliced electronic apparatus according to claim 9, wherein each of the two said device substrates further comprises: a plurality of third fan-out lines, extending from the third side of the carrier to the first surface of the carrier and electrically connected with the device array; and a plurality of fourth fan-out lines, extending from the fourth side of the carrier to the first surface of the carrier and electrically connected with the device array, wherein the plurality of third fan-out lines and the plurality of fourth fan-out lines are asymmetrically disposed on the third side and the fourth side, respectively. 12. 
The spliced electronic apparatus according to claim 10, wherein the plurality of first fan-out lines of one of the two said device substrates are symmetrical to the plurality of first fan-out lines of the other one of the two said device substrates, and the plurality of second fan-out lines of one of the two said device substrates are symmetrical to the plurality of second fan-out lines of the other one of the two said device substrates.
2,600
10,989
10,989
14,202,573
2,689
A system and methods comprise a touchscreen at a premises. The touchscreen includes a processor running gateways and coupled to a security system at the premises. User interfaces are presented via the touchscreen. The user interfaces include a security interface that provides control of functions of the security system and access to data collected by the security system, and a network interface that provides access to network devices. A network device at the premises is coupled to the touchscreen via a Wi-Fi channel. A security server at a remote location is coupled to the touchscreen. The security server comprises a client interface through which remote client devices exchange data with the touchscreen and the security system.
1. A system comprising: a wireless access point located at a premises; a premises management device located at the premises and in communication with the wireless access point; a touchscreen device located at the premises and in communication with the premises management device, wherein the touchscreen device is configured to: receive, from the premises management device, premises data, and control, based on a capability of the premises management device, settings of the wireless access point; and a remote server in communication with the touchscreen device, wherein the remote server is configured to send data associated with a client interface and enable a remote client device to access, via the client interface, the premises data. 2. The system of claim 1, wherein the settings of the wireless access point comprise at least one of a bandwidth, a channel, an enablement of multimedia, or an enablement of dynamic frequency selection (DFS). 3. The system of claim 1, wherein the premises management device comprises at least one of a sensor device, a camera device, an alarm device, or a home automation device. 4. The system of claim 1, further comprising a second wireless access point; and wherein the touchscreen device is further configured to establish a connection, via a network of the second wireless access point, between the second wireless access point and the premises management device. 5. The system of claim 1, further comprising a second wireless access point and a second premises management device; and wherein the touchscreen device is further configured to control, based on a capability of the second premises management device, settings of the second wireless access point. 6. The system of claim 1, wherein the touchscreen device comprises a user interface configured to output a control icon associated with the premises management device. 7. 
The system of claim 1, wherein the touchscreen device is further configured to control communication between the premises management device and the wireless access point. 8. A method comprising: determining, by a touchscreen device located at a premises, a capability of a premises management device located at the premises; causing, based on the capability of the premises management device, configuration of a wireless access point located at the premises, wherein the wireless access point is associated with a wireless network at the premises; receiving, from the premises management device and via the wireless network associated with the wireless access point, premises data; and sending, to a server located external to the premises, the premises data. 9. The method of claim 8, wherein the causing configuration of the wireless access point comprises configuring a security setting of the wireless network associated with the wireless access point. 10. The method of claim 8, wherein the causing configuration of the wireless access point comprises configuring a key facilitating access to the wireless network associated with the wireless access point. 11. The method of claim 8, wherein the capability of the premises management device comprises a wireless capability of the premises management device. 12. The method of claim 8, wherein the causing configuration of the wireless access point comprises sending, to the wireless access point, a command to configure a setting of the wireless access point. 13. The method of claim 8, further comprising assigning each of a plurality of premises management devices to at least one of a plurality of wireless access points. 14. The method of claim 8, further comprising sending, to the premises management device, an indication of the wireless access point. 15. 
A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: determine a capability of a premises management device located at a premises; cause, based on the capability of the premises management device, configuration of a wireless access point located at the premises, wherein the wireless access point is associated with a wireless network at the premises; receive, from the premises management device and via the wireless network associated with the wireless access point, premises data; cause output, via a touchscreen, of an indication of the premises data; and send, to a server located external to the premises, the premises data. 16. The device of claim 15, wherein the capability of the premises management device comprises a signal strength capability of the premises management device. 17. The device of claim 15, wherein the premises management device comprises at least one of a tablet device, a laptop computer, or a mobile phone. 18. The device of claim 15, wherein the instructions, when executed, further cause the device to send, to the premises management device, a service set identifier of the wireless access point. 19. The device of claim 15, wherein the wireless network comprises a password-protected network. 20. The device of claim 15, wherein the capability of the premises management device comprises a data exchange rate of the premises management device.
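The method of claim 8, in which a touchscreen tailors access-point settings to a premises device's capability and then relays that device's data to a remote server, might be sketched as below. The specific capability fields and the capability-to-settings mapping are assumptions for illustration; the claims only require that configuration be based on the capability.

```python
# Illustrative sketch of the claim-8 flow; field names and values are assumed.
def configure_access_point(device_capability):
    """Pick access-point settings from a premises device's capability."""
    settings = {"channel": 6, "bandwidth_mhz": 20}
    if device_capability.get("wifi_generation", 4) >= 5:
        settings["bandwidth_mhz"] = 40      # faster device, wider channel
    if device_capability.get("supports_dfs"):
        settings["dfs_enabled"] = True      # enablement of DFS (cf. claim 2)
    return settings


def touchscreen_flow(device):
    # 1) Determine capability and configure the wireless access point.
    ap_settings = configure_access_point(device["capability"])
    # 2) Receive premises data over the configured wireless network.
    premises_data = device["data"]
    # 3) Send the premises data to a server external to the premises.
    return ap_settings, {"to_remote_server": premises_data}


camera = {
    "capability": {"wifi_generation": 5, "supports_dfs": True},
    "data": {"motion": True},
}
ap, upload = touchscreen_flow(camera)
```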
2,600
10,990
10,990
16,126,099
2,674
A virtual reality (VR) system that includes a three-dimensional (3D) point cloud having a plurality of points, a VR viewer having a current position, a graphics processing unit (GPU), and a central processing unit (CPU). The CPU determines a field-of-view (FOV) based at least in part on the current position of the VR viewer, selects, using occlusion culling, a subset of the points based at least in part on the FOV, and provides them to the GPU. The GPU receives the subset of the plurality of points from the CPU and renders an image for display on the VR viewer based at least in part on the received subset of the plurality of points. The selecting a subset of the plurality of points is at a first frame per second (FPS) rate and the rendering is at a second FPS rate that is faster than the first FPS rate.
1. A virtual reality (VR) system comprising: a three-dimensional (3D) point cloud comprising a plurality of points; a VR viewer having a current position; a graphics processing unit (GPU) coupled to the VR viewer; and a central processing unit (CPU) coupled to the VR viewer, the CPU responsive to first executable instructions that when executed on the CPU perform a first method comprising determining a field-of-view (FOV) based at least in part on the current position of the VR viewer, selecting a subset of the plurality of points using occlusion culling and based at least in part on the FOV, and providing the subset of the plurality of points to the GPU, wherein the GPU is responsive to second executable computer instructions that when executed on the GPU perform a second method comprising receiving the subset of the plurality of points from the CPU and rendering an image for display on the VR viewer based at least in part on the received subset of the plurality of points, wherein the selecting a subset of the plurality of points is at a first frame per second (FPS) rate and the rendering is at a second FPS rate that is faster than the first FPS rate. 2. The system of claim 1, wherein the determining, selecting, providing, receiving, and rendering are performed periodically. 3. The system of claim 1, wherein the first method further comprises monitoring the current position of the VR viewer and the determining, selecting, providing, receiving, and rendering are performed based at least in part on determining, based on the monitoring, that the current position of the VR viewer has changed. 4. The system of claim 1, wherein the providing the subset of the plurality of points to the GPU comprises storing the subset of the plurality of points to a shared storage location accessible by the GPU and the CPU, and wherein the receiving the subset of the plurality of points comprises accessing the shared storage location. 5. 
The system of claim 1, wherein the first FPS rate is about ten FPS and the second FPS rate is about ninety FPS. 6. The system of claim 1, further comprising a laser scanner for capturing the plurality of points. 7. The system of claim 1, wherein the subset of the plurality of points is characterized by a plurality of different densities including first points for display at a first density at a first location in the FOV and second points for display at a second density at a second location in the FOV. 8. The system of claim 1, wherein the subset of the plurality of points includes points outside of the FOV. 9. The system of claim 1, wherein the VR viewer is a headset. 10. The system of claim 1, wherein the VR viewer is a smartphone VR viewer. 11. A method comprising: providing a virtual reality (VR) system comprising a three-dimensional (3D) point cloud comprising a plurality of points, a central processing unit (CPU), a VR viewer having a current position, and a graphics processing unit (GPU), the VR viewer coupled to the CPU and to the GPU; determining, at the CPU, a field-of-view (FOV) based at least in part on the current position of the VR viewer; selecting, at the CPU, a subset of the plurality of points in the 3D point cloud, the selecting using occlusion culling and based at least in part on the FOV; providing, by the CPU, the subset of the plurality of points to the GPU; receiving, at the GPU, a subset of the plurality of points from the CPU; and rendering, at the GPU, an image for display on the VR viewer, the rendering based at least in part on the received plurality of points, wherein the selecting is at a first frame per second (FPS) rate and the rendering is at a second FPS rate that is faster than the first FPS rate. 12. The method of claim 11, further comprising repeating the determining, selecting, providing, receiving, and rendering periodically. 13. 
The method of claim 11 further comprising: monitoring, by the CPU, the current position of the VR viewer; and repeating the determining, selecting, providing, receiving, and rendering based at least in part on determining, based on the monitoring, that the current position of the VR viewer has changed. 14. The method of claim 11, wherein the providing the subset of the plurality of points to the GPU comprises storing the subset of the plurality of points to a shared storage location accessible by the GPU and the CPU, and wherein the receiving a subset of the plurality of points comprises accessing the shared storage location. 15. The method of claim 11, wherein the first FPS rate is about ten FPS and the second FPS rate is about ninety FPS. 16. The method of claim 11, wherein a laser scanner captures the plurality of points. 17. The method of claim 11, wherein the subset of the plurality of points is characterized by a plurality of different densities including first points for display at a first density at a first location in the FOV and second points for display at a second density at a second location in the FOV. 18. The method of claim 11, wherein the subset of the plurality of points includes points outside of the FOV. 19. 
A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions: determining, at a central processing unit (CPU), a field-of-view (FOV) based at least in part on a current position of a VR viewer; selecting, at the CPU, a subset of a plurality of points in a three-dimensional (3D) point cloud, the selecting using occlusion culling and based at least in part on the FOV; providing, by the CPU, the subset of the plurality of points to the GPU; receiving, at the GPU, a subset of the plurality of points from the CPU; and rendering, at the GPU, an image for display on the VR viewer, the rendering based at least in part on the received plurality of points, wherein the selecting is at a first frame per second (FPS) rate and the rendering is at a second FPS rate that is faster than the first FPS rate. 20. The computer program product of claim 19, wherein one or both of: the subset of the plurality of points is characterized by a plurality of different densities including first points for display at a first density at a first location in the FOV and second points for display at a second density at a second location in the FOV; and the subset of the plurality of points includes points outside of the FOV.
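The dual-rate pipeline of claim 1, where CPU-side occlusion culling runs at roughly ten FPS while rendering reuses the most recently culled subset at roughly ninety FPS via a shared storage location, can be simulated in a single-threaded sketch. The one-dimensional FOV model and the trivial cull are stand-ins for real occlusion culling; all names and numbers are illustrative assumptions.

```python
# Single-threaded sketch of the dual-rate cull/render loop; names are assumed.
CULL_FPS, RENDER_FPS = 10, 90   # "about ten" and "about ninety" FPS (claim 5)

def cull(points, fov):
    """Stand-in for occlusion culling: keep points inside the FOV interval."""
    lo, hi = fov
    return [p for p in points if lo <= p <= hi]

def run(points, fov, seconds=1):
    shared = []                  # shared storage accessible to CPU and GPU paths
    frames = []
    for tick in range(seconds * RENDER_FPS):
        if tick % (RENDER_FPS // CULL_FPS) == 0:
            shared = cull(points, fov)   # CPU path: refresh subset 10x/second
        frames.append(len(shared))       # GPU path: "render" every tick at 90 FPS
    return frames

frames = run(points=list(range(100)), fov=(20, 40), seconds=1)
```

Decoupling the two rates this way lets the renderer stay at the headset's refresh rate even when culling a large point cloud is too slow to run per frame, which is the point of the faster second FPS rate in the claims.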
A virtual reality (VR) system that includes a three-dimensional (3D) point cloud having a plurality of points, a VR viewer having a current position, a graphics processing unit (GPU), and a central processing unit (CPU). The CPU determines a field-of-view (FOV) based at least in part on the current position of the VR viewer, selects, using occlusion culling, a subset of the points based at least in part on the FOV, and provides them to the GPU. The GPU receives the subset of the plurality of points from the CPU and renders an image for display on the VR viewer based at least in part on the received subset of the plurality of points. The selecting a subset of the plurality of points is at a first frame per second (FPS) rate and the rendering is at a second FPS rate that is faster than the first FPS rate.1. A virtual reality (VR) system comprising: a three-dimensional (3D) point cloud comprising a plurality of points; a VR viewer having a current position; a graphics processing unit (GPU) coupled to the VR viewer; and a central processing unit (CPU) coupled to the VR viewer, the CPU responsive to first executable instructions that when executed on the CPU perform a first method comprising determining a field-of-view (FOV) based at least in part on the current position of the VR viewer, selecting a subset of the plurality of points using occlusion culling and based at least in part on the FOV, and providing the subset of the plurality of points to the GPU, wherein the GPU is responsive to second executable computer instructions that when executed on the GPU perform a second method comprising receiving the subset of the plurality of points from the CPU and rendering an image for display on the VR viewer based at least in part on the received subset of the plurality of points, wherein the selecting a subset of the plurality of points is at a first frame per second (FPS) rate and the rendering is at a second FPS rate that is faster than the first FPS rate. 2. 
The system of claim 1, wherein the determining, selecting, providing, receiving, and rendering are performed periodically. 3. The system of claim 1, wherein the first method further comprises monitoring the current position of the VR viewer and the determining, selecting, providing, receiving, and rendering are performed based at least in part on determining, based on the monitoring, that the current position of the VR viewer has changed. 4. The system of claim 1, wherein the providing the subset of the plurality of points to the GPU comprises storing the subset of the plurality of points to a shared storage location accessible by the GPU and the CPU, and wherein the receiving the subset of the plurality of points comprises accessing the shared storage location. 5. The system of claim 1, wherein the first FPS rate is about ten FPS and the second FPS rate is about ninety FPS. 6. The system of claim 1, further comprising a laser scanner for capturing the plurality of points. 7. The system of claim 1, wherein the subset of the plurality of points is characterized by a plurality of different densities including first points for display at a first density at a first location in the FOV and second points for display at a second density at a second location in the FOV. 8. The system of claim 1, wherein the subset of the plurality of points includes points outside of the FOV. 9. The system of claim 1, wherein the VR viewer is a headset. 10. The system of claim 1, wherein the VR viewer is a smartphone VR viewer. 11. 
A method comprising: providing a virtual reality (VR) system comprising a three-dimensional (3D) point cloud comprising a plurality of points, a central processing unit (CPU), a VR viewer having a current position, and a graphics processing unit (GPU), the VR viewer coupled to the CPU and to the GPU; determining, at the CPU, a field-of-view (FOV) based at least in part on the current position of the VR viewer; selecting, at the CPU, a subset of the plurality of points in the 3D point cloud, the selecting using occlusion culling and based at least in part on the FOV; providing, by the CPU, the subset of the plurality of points to the GPU; receiving, at the GPU, a subset of the plurality of points from the CPU; and rendering, at the GPU, an image for display on the VR viewer, the rendering based at least in part on the received plurality of points, wherein the selecting is at a first frame per second (FPS) rate and the rendering is at a second FPS rate that is faster than the first FPS rate. 12. The method of claim 11, further comprising repeating the determining, selecting, providing, receiving, and rendering periodically. 13. The method of claim 11 further comprising: monitoring, by the CPU, the current position of the VR viewer; and repeating the determining, selecting, providing, receiving, and rendering based at least in part on determining, based on the monitoring, that the current position of the VR viewer has changed. 14. The method of claim 11, wherein the providing the subset of the plurality of points to the GPU comprises storing the subset of the plurality of points to a shared storage location accessible by the GPU and the CPU, and wherein the receiving a subset of the plurality of points comprises accessing the shared storage location. 15. The method of claim 11, wherein the first FPS rate is about ten FPS and the second FPS rate is about ninety FPS. 16. The method of claim 11, wherein a laser scanner captures the plurality of points. 17. 
The method of claim 1, wherein the subset of the plurality of points is characterized by a plurality of different densities including first points for display at a first density at a first location in the FOV and second points for display at a second density at a second location in the FOV. 18. The method of claim 1, wherein the subset of the plurality of points includes points outside of the FOV. 19. A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions: determining, at a central processing unit (CPU), a field-of-view (FOV) based at least in part on a current position of a VR viewer; selecting, at the CPU, a subset of a plurality of points in a three-dimensional (3D) point cloud, the selecting using occlusion culling and based at least in part on the FOV; providing, by the CPU, the subset of the plurality of points to the GPU; receiving, at the GPU, a subset of the plurality of points from the CPU; and rendering, at the GPU, an image for display on the VR viewer, the rendering based at least in part on the received plurality of points, wherein the selecting is at a first frame per second (FPS) rate and the rendering is at a second FPS rate that is faster than the first FPS rate. 20. The computer program product of claim 19, wherein one or both of: the subset of the plurality of points is characterized by a plurality of different densities including first points for display at a first density at a first location in the FOV and second points for display at a second density at a second location in the FOV; and the subset of the plurality of points includes points outside of the FOV.
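The decoupled pipeline recited in the claims above, a slow CPU pass that culls the point cloud against the field of view and writes the survivors to a shared storage location, and a faster GPU pass that reads that location and renders, can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: a plain 2D angular test stands in for the claimed occlusion culling, and the names `select_points_in_fov` and `SharedPointBuffer` are hypothetical.

```python
import math

def select_points_in_fov(points, viewer_pos, view_dir, fov_deg):
    """CPU-side selection pass: keep only points whose direction from the
    viewer lies inside the field of view.  (An angular test stands in for
    the claimed occlusion culling, which would also drop hidden points.)"""
    half_fov = math.radians(fov_deg) / 2.0
    dlen = math.hypot(view_dir[0], view_dir[1])
    nx, ny = view_dir[0] / dlen, view_dir[1] / dlen
    subset = []
    for px, py in points:
        vx, vy = px - viewer_pos[0], py - viewer_pos[1]
        vlen = math.hypot(vx, vy) or 1e-9
        if (vx * nx + vy * ny) / vlen >= math.cos(half_fov):
            subset.append((px, py))
    return subset

class SharedPointBuffer:
    """Shared storage location written by the slow CPU pass and read by
    the fast GPU render pass, as in claims 4 and 14."""
    def __init__(self):
        self._points = []

    def store(self, points):   # CPU side, e.g. on the order of 10 FPS
        self._points = list(points)

    def load(self):            # GPU side, e.g. on the order of 90 FPS
        return list(self._points)
```

Because the render loop only ever reads the most recent snapshot from the buffer, the two passes can run at the independent FPS rates the claims recite.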
2,600
10,991
10,991
15,564,251
2,628
A gaze monitoring system comprising: an eye tracker comprising: a camera having an optical axis; a first IR source configured to illuminate a user's eyes; the first IR source located relatively near the optical axis of the camera; and at least one second IR source configured to illuminate the user's eyes, the at least one second IR source located relatively far from the camera's optical axis, in a position such that when the gaze angle is too large to get corneal reflection images of the first IR source, the image reflection of the at least one second IR source is visible by the camera, as a corneal reflection.
1-44. (canceled) 45. A gaze monitoring system comprising: an eye tracker comprising: a camera having an optical axis; a first IR source located relatively near said optical axis of said camera; and at least one second IR source; said first and at least one second IR sources configured to illuminate at least one eye of a user; said first and at least one second IR sources are located based on the structure of said user's eye, thereby providing said camera with a reflection image of the IR that is within cornea region; said system configured to enable increasing said gaze monitoring range so that when the gaze angle is too large to get images with corneal reflection of said first IR source, a corneal reflection of said at least one second IR source is visible by said camera, as a corneal reflection. 46. The gaze monitoring system of claim 45, wherein said first IR source comprises a plurality of first IR sources. 47. The gaze monitoring system of claim 45, wherein said at least one second IR source comprises a plurality of second IR sources. 48. The gaze monitoring system of claim 45, further configured to select an IR source for activation so that the reflection image is as close as possible to a desired location relative to the eye image. 49. The gaze monitoring system of claim 45, further configured to use at least two of the group consisting of said first and said at least one second IR sources simultaneously to evaluate gaze angle. 50. The gaze monitoring system of claim 45, further configured to use at least two of the group consisting of said first and said at least one second IR sources successively to evaluate gaze angle. 51. 
In a gaze monitoring system, comprising: an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source is located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user, said at least one second IR source is located relatively far from said optical axis, a method of evaluating the desired angle and direction of said at least one second IR source location as a function of a desired increase in a gaze angle and direction that can be monitored, comprising: evaluating any of the difference between said desired increase in the gaze angle in a given direction and the current maximum gaze angle that can be monitored by said eye tracker, using said first IR source, in the same direction and said increased gaze angle; and locating said at least one second IR source along said direction, at an angle based on said evaluation. 52. In a gaze monitoring system, comprising: an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source is located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user, said at least one second IR source is located relatively far from said optical axis; a method of selecting an IR source to be used, comprising: determining a gaze angle of said user; for at least two of the group consisting of said first and said at least one second IR sources calculating the angles between said gaze angle and each of the line of sight angles to said at least two of the group consisting of said first and said at least one second IR sources; and selecting, for gaze monitoring, the IR source for which the angle between said gaze angle and the line of sight angle of that IR source is the smallest. 53. 
In a gaze monitoring system, comprising: a computer and storage device configured to store data indicating which IR source should be used for a given gaze angle; an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source is located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user, said at least one second IR source is located relatively far from said optical axis; a method of selecting an IR source to be used, comprising: calculating a gaze angle of said user; accessing said stored data for information on which IR source should be used with said gaze angle; and selecting that IR source to determine user's gaze angle. 54. In a gaze monitoring system, comprising: an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source is located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user, said at least one second IR source is located relatively far from said optical axis; a method of selecting an IR source to be used, comprising: calculating a gaze angle of said user; and selecting the IR source that is the nearest to a line determined by two-times said calculated gaze angle. 55. 
In a gaze monitoring system, comprising: an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source is located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user; identifying at least one IR source according to at least one of: modulation of at least part of said IR sources, which of said IR sources is active, relative geometrical arrangement of at least part of said IR sources and colors of at least part of said IR sources. 56. The method of claim 55, wherein an identified IR source is selected so that the reflection image of said selected IR will be within the image of the cornea of said user. 57. The method of claim 55, wherein a plurality of IR sources, comprising said first IR source and said at least one second IR source, are active at a given time. 58. The method of claim 55, wherein said IR modulation frequency is smaller than said camera's frame rate. 59. The method of claim 55, wherein said IR modulation is synchronized with said camera. 60. In a gaze monitoring system, comprising: an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user; a method of calibrating said system, comprising: a. guiding a user to gaze at specific calibration gaze angles; b. collecting images from said camera during each such gaze, the images for each such gaze comprise corneal reflection of at least one of said first IR source and said at least one second IR sources; c. analyzing said images to extract relations between an image of said at least one eye of said user and the reflection image of said at least one second IR source; d. 
extracting a representative relation from all these images for the calibration gaze angle;  repeating step d for all the calibration gaze angles;  repeating steps a through d for at least one more IR source; and e. storing said representative relations for use as calibration data.
A gaze monitoring system comprising: an eye tracker comprising: a camera having an optical axis; a first IR source configured to illuminate a user's eyes; the first IR source located relatively near the optical axis of the camera; and at least one second IR source configured to illuminate the user's eyes, the at least one second IR source located relatively far from the camera's optical axis, in a position such that when the gaze angle is too large to get corneal reflection images of the first IR source, the image reflection of the at least one second IR source is visible by the camera, as a corneal reflection.1-44. (canceled) 45. A gaze monitoring system comprising: an eye tracker comprising: a camera having an optical axis; a first IR source located relatively near said optical axis of said camera; and at least one second IR source; said first and at least one second IR sources configured to illuminate at least one eye of a user; said first and at least one second IR sources are located based on the structure of said user's eye, thereby providing said camera with a reflection image of the IR that is within cornea region; said system configured to enable increasing said gaze monitoring range so that when the gaze angle is too large to get images with corneal reflection of said first IR source, a corneal reflection of said at least one second IR source is visible by said camera, as a corneal reflection. 46. The gaze monitoring system of claim 45, wherein said first IR source comprises a plurality of first IR sources. 47. The gaze monitoring system of claim 45, wherein said at least one second IR source comprises a plurality of second IR sources. 48. The gaze monitoring system of claim 45, further configured to select an IR source for activation so that the reflection image is as close as possible to a desired location relative to the eye image. 49. 
The gaze monitoring system of claim 45, further configured to use at least two of the group consisting of said first and said at least one second IR sources simultaneously to evaluate gaze angle. 50. The gaze monitoring system of claim 45, further configured to use at least two of the group consisting of said first and said at least one second IR sources successively to evaluate gaze angle. 51. In a gaze monitoring system, comprising: an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source is located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user, said at least one second IR source is located relatively far from said optical axis, a method of evaluating the desired angle and direction of said at least one second IR source location as a function of a desired increase in a gaze angle and direction that can be monitored, comprising: evaluating any of the difference between said desired increase in the gaze angle in a given direction and the current maximum gaze angle that can be monitored by said eye tracker, using said first IR source, in the same direction and said increased gaze angle; and locating said at least one second IR source along said direction, at an angle based on said evaluation. 52. 
In a gaze monitoring system, comprising: an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source is located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user, said at least one second IR source is located relatively far from said optical axis; a method of selecting an IR source to be used, comprising: determining a gaze angle of said user; for at least two of the group consisting of said first and said at least one second IR sources calculating the angles between said gaze angle and each of the line of sight angles to said at least two of the group consisting of said first and said at least one second IR sources; and selecting, for gaze monitoring, the IR source for which the angle between said gaze angle and the line of sight angle of that IR source is the smallest. 53. In a gaze monitoring system, comprising: a computer and storage device configured to store data indicating which IR source should be used for a given gaze angle; an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source is located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user, said at least one second IR source is located relatively far from said optical axis; a method of selecting an IR source to be used, comprising: calculating a gaze angle of said user; accessing said stored data for information on which IR source should be used with said gaze angle; and selecting that IR source to determine user's gaze angle. 54. 
In a gaze monitoring system, comprising: an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source is located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user, said at least one second IR source is located relatively far from said optical axis; a method of selecting an IR source to be used, comprising: calculating a gaze angle of said user; and selecting the IR source that is the nearest to a line determined by two-times said calculated gaze angle. 55. In a gaze monitoring system, comprising: an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source is located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user; identifying at least one IR source according to at least one of: modulation of at least part of said IR sources, which of said IR sources is active, relative geometrical arrangement of at least part of said IR sources and colors of at least part of said IR sources. 56. The method of claim 55, wherein an identified IR source is selected so that the reflection image of said selected IR will be within the image of the cornea of said user. 57. The method of claim 55, wherein a plurality of IR sources, comprising said first IR source and said at least one second IR source, are active at a given time. 58. The method of claim 55, wherein said IR modulation frequency is smaller than said camera's frame rate. 59. The method of claim 55, wherein said IR modulation is synchronized with said camera. 60. 
In a gaze monitoring system, comprising: an eye tracker, comprising: a camera having an optical axis; a first IR source configured to illuminate at least one eye of a user, said first IR source located relatively near said optical axis; and at least one second IR source configured to illuminate said at least one eye of said user; a method of calibrating said system, comprising: a. guiding a user to gaze at specific calibration gaze angles; b. collecting images from said camera during each such gaze, the images for each such gaze comprise corneal reflection of at least one of said first IR source and said at least one second IR sources; c. analyzing said images to extract relations between an image of said at least one eye of said user and the reflection image of said at least one second IR source; d. extracting a representative relation from all these images for the calibration gaze angle;  repeating step d for all the calibration gaze angles;  repeating steps a through d for at least one more IR source; and e. storing said representative relations for use as calibration data.
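Claim 52's selection rule above (for each candidate IR source, compare its line-of-sight angle to the user's gaze angle, and pick the source with the smallest difference) reduces to a nearest-angle search. A minimal sketch, with a hypothetical function name and flat angles in degrees standing in for the full 3D gaze geometry:

```python
def select_ir_source(gaze_angle_deg, ir_source_angles_deg):
    """Return the index of the IR source whose line-of-sight angle is
    closest to the current gaze angle (the selection rule of claim 52)."""
    return min(range(len(ir_source_angles_deg)),
               key=lambda i: abs(ir_source_angles_deg[i] - gaze_angle_deg))

# With the gaze at 30 degrees and sources at 0, 25 and 60 degrees,
# the 25-degree source is the nearest and is selected:
best = select_ir_source(30.0, [0.0, 25.0, 60.0])  # -> 1
```

The same comparison also covers the claim 54 variant, where the line being matched is derived from two times the calculated gaze angle rather than the gaze angle itself.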
2,600
10,992
10,992
16,414,267
2,644
A wireless multiple antenna system (200) uses a multi-antenna subsystem (211) to generate a composite sample waveform by continuously sweeping a plurality of receive beams (RX1-RXM) during each SSB transmission in a plurality of transmit beams (TX1-TX64), generating a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams to determine the presence of the SSB, and then jointly searching the composite sample waveform for an optimal receive beam and an SSB frequency of any detected SSB that are used by the UE (210) to perform a cell search which matches a transmit beam from the base station (201) to the optimal receive beam.
1. A method performed at a user equipment device to make initial access with a base station in a multiple antenna system, comprising: continuously sweeping a plurality of receive beams over a first receiver sweep period at the user equipment device to generate a composite sample waveform of a plurality of synchronization signal blocks (SSB) transmitted by the base station in different transmit beams; generating a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams detected in the first receiver sweep period to detect one or more SSBs in the composite sample waveform; searching the composite sample waveform for an optimal receive beam and an optimal receive SSB frequency of any detected SSB; and locking the user equipment device onto the optimal receive beam and optimal SSB receive frequency to perform a cell search. 2. The method of claim 1, where continuously sweeping the plurality of receive beams comprises sweeping a plurality of frequency bands at the user equipment device. 3. The method of claim 1, where continuously sweeping the plurality of receive beams comprises applying a plurality of receive beamforming weights to a multi-antenna subsystem at the user equipment device to directionally orient each of the plurality of receive beams in a different direction over the first receiver sweep period when generating the composite sample waveform. 4. The method of claim 1, where continuously sweeping the plurality of receive beams comprises continuously sweeping a plurality of m receive beams in a circular round-robin fashion for n receiver sweep periods to detect one or more SSBs transmitted by the base station in different transmit beams when generating the composite sample waveform, where each of the n receiver sweep periods has a duration that equals a duration for transmitting a single SSB. 5. 
The method of claim 1, where searching the composite sample waveform comprises: generating a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams detected in the first receiver sweep period; determining if the composite received signal strength metric value exceeds a first threshold value to identify one or more candidate SSB signals; transforming the composite sample waveform into a frequency domain signal if one or more candidate SSB signals is identified; and performing an SSB raster search in the frequency domain by sweeping a correlator across the frequency domain signal to identify one or more SSB frequencies for the one or more candidate SSB signals. 6. The method of claim 5, where searching the composite sample waveform comprises selecting at least one of the plurality of receive beams and one of the one or more SSB frequencies as the optimal receive beam and optimal SSB frequency, respectively, based on a ranking of a plurality of received signal strength metric values computed per-receive beam and per-SSB frequency. 7. The method of claim 6, where selecting at least one of the plurality of receive beams comprises choosing at least one receive beam if a corresponding received signal strength metric value exceeds a second threshold value. 8. The method of claim 2, where the plurality of frequency bands is swept at a rate faster than a sweep rate for the plurality of receive beams such that a plurality of frequency bands is swept within a time duration of scanning one receive beam. 9. The method of claim 2, where the plurality of receive beams is swept at a rate faster than a sweep rate for the plurality of frequency bands such that a plurality of receive beams is swept within a time duration of scanning one frequency band. 10. 
The method of claim 1, further comprising computing received signal strength metric values derived from the composite sample waveform for use in jointly identifying the optimal receive beam and the optimal SSB frequency. 11. A wireless device (WD) comprising: a multiple antenna subsystem, frequency tuner subsystem and beamformer subsystem connected to and configured by a digital controller to wirelessly make initial access with an access node (AN) by: continuously sweeping a plurality of receive beams over one or more receiver sweep periods at the WD to generate a composite sample waveform of a plurality of synchronization signal blocks (SSB) transmitted by the AN in different transmit beams; generating a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams detected in the one or more receiver sweep periods to detect one or more SSBs in the composite sample waveform; jointly searching the composite sample waveform for an optimal receive beam and an optimal SSB receive frequency band of every detected SSB; and locking the WD onto the optimal receive beam and optimal SSB receive frequency band to perform a cell search. 12. The wireless device of claim 11, where the multiple antenna system comprises a millimeter-wave antenna array that is configurable for communicating with a millimeter wave radio access technology system. 13. The wireless device of claim 11, where the digital controller is configured to continuously sweep the plurality of receive beams by applying a plurality of receive beamforming weights to a multi-antenna subsystem at the WD to directionally orient each of the plurality of receive beams in a different direction over the one or more receiver sweep periods when generating the composite sample waveform. 14. 
The wireless device of claim 13, where the digital controller is configured to continuously sweep a plurality of frequency bands by using the frequency tuner subsystem at the WD to scan the plurality of frequency bands while each receive beamforming weight is applied to the multi-antenna subsystem. 15. The wireless device of claim 13, where the digital controller is configured to continuously sweep a plurality of frequency bands by using the frequency tuner subsystem at the WD to sequentially scan each of the plurality of frequency bands so that the beamformer subsystem can apply the plurality of receive beamforming weights to the multi-antenna subsystem while the WD is tuned to each frequency band. 16. The wireless device of claim 11, where the digital controller is configured to jointly search the composite sample waveform for the optimal receive beam and optimal SSB receive frequency band by: generating a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams detected in the one or more receiver sweep periods; determining if the composite received signal strength metric value exceeds a first threshold value to identify one or more candidate SSB signals; transforming the composite sample waveform into a frequency domain signal if one or more candidate SSB signals is identified; and performing an SSB raster search in the frequency domain by sweeping a correlator across the frequency domain signal to identify an SSB receive frequency of the one or more candidate SSB signals. 17. The wireless device of claim 11, where the digital controller is configured to continuously sweep a plurality of frequency bands at a first sweep rate that is faster than a second sweep rate for the plurality of receive beams such that the plurality of frequency bands is swept within a time duration of scanning one receive beam. 18. 
The wireless device of claim 11, where the digital controller is configured to continuously sweep the plurality of receive beams at a first sweep rate that is faster than a second sweep rate for a plurality of frequency bands such that the plurality of receive beams is swept within a time duration of scanning one frequency band. 19. A communication device for making initial access with a base station in a multiple antenna wireless communication system, comprising: a multi-antenna array; a beamformer module connected to apply a plurality of receive beamforming weights to the multi-antenna array to directionally orient each of a plurality of receive beams in a different direction to continuously sweep the plurality of receive beams over a receiver sweep period to generate a composite sample waveform of a plurality of synchronization signal blocks (SSB) transmitted by the base station with different transmit beams during the first synchronization signal block (SSB) burst to the communication device; and a digital signal processing controller configured to generate a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams detected in the receiver sweep period to determine the presence of the SSB and to jointly search the composite sample waveform to identify an optimal SSB receive frequency band and an optimal receive beam based on a ranking of a plurality of received signal strength metric values, where each received signal strength metric value is computed for each receive beam and SSB receive frequency band from samples of each transmitted SSB measured in said receive beam and SSB frequency band. 20. 
The communication device of claim 19, further comprising a frequency tuner connected to tune the multi-antenna array to continuously sweep a plurality of frequency bands at a first sweep rate that is faster than a second sweep rate for sweeping the plurality of receive beams such that the plurality of frequency bands is swept within a time duration of scanning one receive beam.
A wireless multiple antenna system (200) uses a multi-antenna subsystem (211) to generate a composite sample waveform by continuously sweeping a plurality of receive beams (RX1-RXM) during each SSB transmission in a plurality of transmit beams (TX1-TX64), generating a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams to determine the presence of the SSB, and then jointly searching the composite sample waveform for an optimal receive beam and an SSB frequency of any detected SSB that are used by the UE (210) to perform a cell search which matches a transmit beam from the base station (201) to the optimal receive beam.1. A method performed at a user equipment device to make initial access with a base station in a multiple antenna system, comprising: continuously sweeping a plurality of receive beams over a first receiver sweep period at the user equipment device to generate a composite sample waveform of a plurality of synchronization signal blocks (SSB) transmitted by the base station in different transmit beams; generating a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams detected in the first receiver sweep period to detect one or more SSBs in the composite sample waveform; searching the composite sample waveform for an optimal receive beam and an optimal receive SSB frequency of any detected SSB; and locking the user equipment device onto the optimal receive beam and optimal SSB receive frequency to perform a cell search. 2. The method of claim 1, where continuously sweeping the plurality of receive beams comprises sweeping a plurality of frequency bands at the user equipment device. 3. 
The method of claim 1, where continuously sweeping the plurality of receive beams comprises applying a plurality of receive beamforming weights to a multi-antenna subsystem at the user equipment device to directionally orient each of the plurality of receive beams in a different direction over the first receiver sweep period when generating the composite sample waveform. 4. The method of claim 1, where continuously sweeping the plurality of receive beams comprises continuously sweeping a plurality of m receive beams in a circular round-robin fashion for n receiver sweep periods to detect one or more SSBs transmitted by the base station in different transmit beams when generating the composite sample waveform, where each of the n receiver sweep periods has a duration that equals a duration for transmitting a single SSB. 5. The method of claim 1, where searching the composite sample waveform comprises: generating a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams detected in the first receiver sweep period; determining if the composite received signal strength metric value exceeds a first threshold value to identify one or more candidate SSB signals; transforming the composite sample waveform into a frequency domain signal if one or more candidate SSB signals is identified; and performing an SSB raster search in the frequency domain by sweeping a correlator across the frequency domain signal to identify one or more SSB frequencies for the one or more candidate SSB signals. 6. The method of claim 5, where searching the composite sample waveform comprises selecting at least one of the plurality of receive beams and one of the one or more SSB frequencies as the optimal receive beam and optimal SSB frequency, respectively, based on a ranking of a plurality of received signal strength metric values computed per-receive beam and per-SSB frequency. 7. 
The method of claim 6, where selecting at least one of the plurality of receive beams comprises choosing at least one receive beam if a corresponding received signal strength metric value exceeds a second threshold value. 8. The method of claim 2, where the plurality of frequency bands is swept at a rate faster than a sweep rate for the plurality of receive beams such that a plurality of frequency bands is swept within a time duration of scanning one receive beam. 9. The method of claim 2, where the plurality of receive beams is swept at a rate faster than a sweep rate for the plurality of frequency bands such that a plurality of receive beams is swept within a time duration of scanning one frequency band. 10. The method of claim 1, further comprising computing received signal strength metric values derived from the composite sample waveform for use in jointly identifying the optimal receive beam and the optimal SSB frequency. 11. A wireless device (WD) comprising: a multiple antenna subsystem, frequency tuner subsystem and beamformer subsystem connected to and configured by a digital controller to wirelessly make initial access with an access node (AN) by: continuously sweeping a plurality of receive beams over one or more receiver sweep periods at the WD to generate a composite sample waveform of a plurality of synchronization signal blocks (SSB) transmitted by the AN in different transmit beams; generating a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams detected in the one or more receiver sweep periods to detect one or more SSBs in the composite sample waveform; jointly searching the composite sample waveform for an optimal receive beam and an optimal SSB receive frequency band of every detected SSB; and locking the WD onto the optimal receive beam and optimal SSB receive frequency band to perform a cell search. 12. 
The wireless device of claim 11, where the multiple antenna system comprises a millimeter-wave antenna array that is configurable for communicating with a millimeter wave radio access technology system. 13. The wireless device of claim 11, where the digital controller is configured to continuously sweep the plurality of receive beams by applying a plurality of receive beamforming weights to a multi-antenna subsystem at the WD to directionally orient each of the plurality of receive beams in a different direction over the one or more receiver sweep periods when generating the composite sample waveform. 14. The wireless device of claim 13, where the digital controller is configured to continuously sweep a plurality of frequency bands by using the frequency tuner subsystem at the WD to scan the plurality of frequency bands while each receive beamforming weight is applied to the multi-antenna subsystem. 15. The wireless device of claim 13, where the digital controller is configured to continuously sweep a plurality of frequency bands by using the frequency tuner subsystem at the WD to sequentially scan each of the plurality of frequency bands so that the beamformer subsystem can apply the plurality of receive beamforming weights to the multi-antenna subsystem while the WD is tuned to each frequency band. 16. 
The wireless device of claim 11, where the digital controller is configured to jointly search the composite sample waveform for the optimal receive beam and optimal SSB receive frequency band by: generating a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams detected in the one or more receiver sweep periods; determining if the composite received signal strength metric value exceeds a first threshold value to identify one or more candidate SSB signals; transforming the composite sample waveform into a frequency domain signal if one or more candidate SSB signals is identified; and performing an SSB raster search in the frequency domain by sweeping a correlator across the frequency domain signal to identify an SSB receive frequency of the one or more candidate SSB signals. 17. The wireless device of claim 11, where the digital controller is configured to continuously sweep a plurality of frequency bands at a first sweep rate that is faster than a second sweep rate for the plurality of receive beams such that the plurality of frequency bands is swept within a time duration of scanning one receive beam. 18. The wireless device of claim 11, where the digital controller is configured to continuously sweep the plurality of receive beams at a first sweep rate that is faster than a second sweep rate for a plurality of frequency bands such that the plurality of receive beams is swept within a time duration of scanning one frequency band. 19. 
A communication device for making initial access with a base station in a multiple antenna wireless communication system, comprising: a multi-antenna array; a beamformer module connected to apply a plurality of receive beamforming weights to the multi-antenna array to directionally orient each of a plurality of receive beams in a different direction to continuously sweep the plurality of receive beams over a receiver sweep period to generate a composite sample waveform of a plurality of synchronization signal blocks (SSB) transmitted by the base station with different transmit beams during the first synchronization signal block (SSB) burst to the communication device; and a digital signal processing controller configured to generate a composite received signal strength metric value from a batch of samples collected over the plurality of receive beams detected in the receiver sweep period to determine the presence of the SSB and to jointly search the composite sample waveform to identify an optimal SSB receive frequency band and an optimal receive beam based on a ranking of a plurality of received signal strength metric values, where each received signal strength metric value is computed for each receive beam and SSB receive frequency band from samples of each transmitted SSB measured in said receive beam and SSB frequency band. 20. The communication device of claim 19, further comprising a frequency tuner connected to tune the multi-antenna array to continuously sweep a plurality of frequency bands at a first sweep rate that is faster than a second sweep rate for sweeping the plurality of receive beams such that the plurality of frequency bands is swept within a time duration of scanning one receive beam.
2,600
10,993
10,993
16,731,507
2,687
A set top box receives from a remote control device one or more of a codeset identifier, data indicative of a brand and model for a consumer electronic device, and remote control diagnostic information. The set top box then causes information representative of the received codeset identifier, data indicative of a brand and model for a consumer electronic device, and remote control diagnostic information to be displayed in a display device associated with the set top box.
1. A non-transitory, computer readable media having stored thereon instructions, the instructions, when executed by a remote control device, causing the remote control device to perform steps comprising: retrieving from a memory of the remote control device a codeset identifier that corresponds to a codeset that is currently being used by the remote control device in a normal operating mode of the remote control device to control functional operations of an intended target device; and transmitting a signal to a receiving device, the signal having a data indicative of an entirety of the codeset identifier retrieved from the memory of the remote control device. 2. The non-transitory, computer readable media as recited in claim 1, wherein the receiving device comprises a set-top box device and the signal uses a protocol recognizable by the set-top box device. 3. The non-transitory, computer readable media as recited in claim 1, wherein the instructions further cause the remote control device to both retrieve from the memory the codeset identifier and to transmit the signal in response to a predetermined key entry code being provided to the remote control device. 4. The non-transitory, computer readable media as recited in claim 1, wherein the instructions further cause the remote control device to determine a condition of a battery installed within the remote control device and to transmit to the receiving device a further signal comprising a further data indicative of the condition of the battery. 5. The non-transitory, computer readable media as recited in claim 4, wherein the instructions further cause the remote control device to both determine the condition of the battery and to transmit the further signal in response to a predetermined key entry code being provided to the remote control device. 6. 
The non-transitory, computer readable media as recited in claim 5, wherein the receiving device comprises a set-top box device and the signal and the further signal both use a protocol recognizable by the set-top box device. 7. A system, comprising: a remote control device having a first processing device and a first memory storing first instructions executable by the first processing device; and a receiving device having a second processing device and a second memory storing second instructions executable by the second processing device; wherein the first instructions, when executed by the first processing device, cause the remote control device to retrieve from the first memory a codeset identifier that corresponds to a codeset that is currently being used by the remote control device in a normal operating mode of the remote control device to control functional operations of an intended target device, and transmit a signal to the receiving device, the signal having a data indicative of an entirety of the codeset identifier retrieved from the first memory; and wherein the second instructions, when executed by the second processing device, cause the receiving device to use the signal when received from the remote control device to cause a display of information representative of the codeset identifier retrieved from the first memory. 8. The system as recited in claim 7, wherein the receiving device comprises a set-top box device and the signal uses a protocol recognizable by the set-top box device. 9. The system as recited in claim 7, wherein the first instructions further cause the remote control device to both retrieve from the first memory the codeset identifier and to transmit the signal in response to a predetermined key entry code being provided to the remote control device. 10. 
The system as recited in claim 7, wherein the first instructions further cause the remote control device to determine a condition of a battery installed within the remote control device and to transmit to the receiving device a further signal comprising a further data indicative of the condition of the battery. 11. The system as recited in claim 10, wherein the first instructions further cause the remote control device to both determine the condition of the battery and to transmit the further signal in response to a predetermined key entry code being provided to the remote control device. 12. The system as recited in claim 11, wherein the receiving device comprises a set-top box device and the signal and the further signal both use a protocol recognizable by the set-top box device. 13. The system as recited in claim 7, wherein the second instructions cause the receiving device to use the signal when received from the remote control device to cause a display of information representative of a brand for the intended target device. 14. The system as recited in claim 13, wherein the second instructions cause the receiving device to use the signal when received from the remote control device to cause a display of information representative of a model for the intended target device.
A set top box receives from a remote control device one or more of a codeset identifier, data indicative of a brand and model for a consumer electronic device, and remote control diagnostic information. The set top box then causes information representative of the received codeset identifier, data indicative of a brand and model for a consumer electronic device, and remote control diagnostic information to be displayed in a display device associated with the set top box.1. A non-transitory, computer readable media having stored thereon instructions, the instructions, when executed by a remote control device, causing the remote control device to perform steps comprising: retrieving from a memory of the remote control device a codeset identifier that corresponds to a codeset that is currently being used by the remote control device in a normal operating mode of the remote control device to control functional operations of an intended target device; and transmitting a signal to a receiving device, the signal having a data indicative of an entirety of the codeset identifier retrieved from the memory of the remote control device. 2. The non-transitory, computer readable media as recited in claim 1, wherein the receiving device comprises a set-top box device and the signal uses a protocol recognizable by the set-top box device. 3. The non-transitory, computer readable media as recited in claim 1, wherein the instructions further cause the remote control device to both retrieve from the memory the codeset identifier and to transmit the signal in response to a predetermined key entry code being provided to the remote control device. 4. The non-transitory, computer readable media as recited in claim 1, wherein the instructions further cause the remote control device to determine a condition of a battery installed within the remote control device and to transmit to the receiving device a further signal comprising a further data indicative of the condition of the battery. 5. 
The non-transitory, computer readable media as recited in claim 4, wherein the instructions further cause the remote control device to both determine the condition of the battery and to transmit the further signal in response to a predetermined key entry code being provided to the remote control device. 6. The non-transitory, computer readable media as recited in claim 5, wherein the receiving device comprises a set-top box device and the signal and the further signal both use a protocol recognizable by the set-top box device. 7. A system, comprising: a remote control device having a first processing device and a first memory storing first instructions executable by the first processing device; and a receiving device having a second processing device and a second memory storing second instructions executable by the second processing device; wherein the first instructions, when executed by the first processing device, cause the remote control device to retrieve from the first memory a codeset identifier that corresponds to a codeset that is currently being used by the remote control device in a normal operating mode of the remote control device to control functional operations of an intended target device, and transmit a signal to the receiving device, the signal having a data indicative of an entirety of the codeset identifier retrieved from the first memory; and wherein the second instructions, when executed by the second processing device, cause the receiving device to use the signal when received from the remote control device to cause a display of information representative of the codeset identifier retrieved from the first memory. 8. The system as recited in claim 7, wherein the receiving device comprises a set-top box device and the signal uses a protocol recognizable by the set-top box device. 9. 
The system as recited in claim 7, wherein the first instructions further cause the remote control device to both retrieve from the first memory the codeset identifier and to transmit the signal in response to a predetermined key entry code being provided to the remote control device. 10. The system as recited in claim 7, wherein the first instructions further cause the remote control device to determine a condition of a battery installed within the remote control device and to transmit to the receiving device a further signal comprising a further data indicative of the condition of the battery. 11. The system as recited in claim 10, wherein the first instructions further cause the remote control device to both determine the condition of the battery and to transmit the further signal in response to a predetermined key entry code being provided to the remote control device. 12. The system as recited in claim 11, wherein the receiving device comprises a set-top box device and the signal and the further signal both use a protocol recognizable by the set-top box device. 13. The system as recited in claim 7, wherein the second instructions cause the receiving device to use the signal when received from the remote control device to cause a display of information representative of a brand for the intended target device. 14. The system as recited in claim 13, wherein the second instructions cause the receiving device to use the signal when received from the remote control device to cause a display of information representative of a model for the intended target device.
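The remote-control diagnostic exchange described in this record can be sketched end to end: the remote packs its stored codeset identifier and battery condition into a signal, and the set-top box maps the codeset to a brand and model for on-screen display. The message layout (JSON), the field names, and the codeset database entries are all assumptions for illustration, not from the patent.

```python
# Illustrative sketch of the codeset-diagnostic exchange: remote side
# builds the signal, set-top box side unpacks it for display.
import json

def build_diagnostic_signal(codeset_id, battery_ok):
    """Remote side: transmit the entirety of the stored codeset identifier."""
    return json.dumps({"codeset": codeset_id, "battery_ok": battery_ok})

def display_diagnostics(signal, codeset_db):
    """Set-top box side: map the codeset to brand/model for display."""
    msg = json.loads(signal)
    brand, model = codeset_db.get(msg["codeset"], ("unknown", "unknown"))
    status = "OK" if msg["battery_ok"] else "LOW"
    return f"Codeset {msg['codeset']}: {brand} {model}, battery {status}"

# Hypothetical codeset database mapping identifiers to brand/model pairs.
codeset_db = {"T1234": ("AcmeTV", "X-100")}
signal = build_diagnostic_signal("T1234", battery_ok=True)
print(display_diagnostics(signal, codeset_db))
# -> Codeset T1234: AcmeTV X-100, battery OK
```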
2,600
10,994
10,994
16,235,629
2,651
A hearing device cable including a body portion is described herein. The body portion may extend between a first end region and a second end region along a tube centerline. The body portion may include a first radial portion proximate the first end region and second radial portion proximate the second end region. The first radial portion may define a radius of curvature that is greater than or equal to a radius of curvature defined by the second radial portion. The tube centerline may lie along an x-y plane between the first and second end regions. In one or more embodiments, the body portion may define a passageway extending between the first and second end regions. Further, the hearing device cable may include a superelastic wire within the passageway extending between the first and second end regions.
1. A hearing device cable comprising: a body portion extending between a first end region and a second end region, wherein the body portion comprises a first radial portion proximate the first end region and a second radial portion proximate the second end region, wherein the first radial portion defines a radius of curvature that is greater than or equal to a radius of curvature defined by the second radial portion. 2. The cable of claim 1, wherein the radius of curvature of the first radial portion is greater than or equal to 100% and less than or equal to 200% of the radius of curvature of the second radial portion. 3. The cable of claim 1, wherein the body portion defines an S-shape such that the first radial portion extends along an arc that curves in a direction opposite an arc along which the second radial portion extends. 4. The cable of claim 1, wherein the body portion comprises one or more conductive wires and Kevlar. 5. The cable of claim 1, wherein the body portion is adapted to fit within a human ear such that the first end region is positioned above the human ear and the second end region is positioned within an ear canal of the human ear. 6. The cable of claim 1, wherein the body portion comprises a UV resistant material. 7. The cable of claim 1, wherein the cable is configurable in a relaxed state and a deflected state, wherein a direct distance between the first end region and the second end region is different in the relaxed state than the deflected state. 8. The cable of claim 1, wherein the body portion defines a passageway extending between the first end region and the second end region. 9. The cable of claim 8, further comprising a superelastic wire within the passageway extending between the first end region and the second end region. 10. 
A hearing device cable comprising: a body portion extending between a first end region and a second end region along a tube centerline, wherein the body portion comprises a first radial portion proximate the first end region and a second radial portion proximate the second end region, wherein the tube centerline lies along an x-y plane between the first and second end regions. 11. The cable of claim 10, wherein the body portion defines an S-shape such that the first radial portion extends along an arc that curves in a direction opposite an arc along which the second radial portion extends. 12. The cable of claim 10, wherein the body portion is adapted to fit within a human ear such that the first end region is positioned above the human ear and the second end region is positioned within an ear canal of the human ear. 13. The cable of claim 10, wherein the cable is configurable in a relaxed state and a deflected state, wherein a direct distance between the first end region and the second end region is different in the relaxed state than the deflected state. 14. The cable of claim 10, wherein the body portion defines a passageway extending between the first end region and the second end region. 15. A hearing device cable comprising: a body portion extending between a first end region and a second end region along a tube centerline, wherein the body portion defines a passageway extending between the first end region and the second end region; and a superelastic wire within the passageway extending between the first end region and the second end region. 16. The cable of claim 15, wherein the superelastic wire comprises nitinol. 17. The cable of claim 15, wherein the superelastic wire defines a deformation temperature greater than or equal to 900 degrees Fahrenheit. 18. The cable of claim 15, wherein the body portion defines a constant interior length and inside diameter between the first end region and the second end region. 19. 
The cable of claim 15, wherein the body portion comprises a first radial portion proximate the first end region and a second radial portion proximate the second end region, wherein the first radial portion defines a radius of curvature that is greater than or equal to a radius of curvature defined by the second radial portion. 20. The cable of claim 15, wherein the body portion comprises a first radial portion proximate the first end region and a second radial portion proximate the second end region, wherein the tube centerline lies along an x-y plane between the first and second end regions.
A hearing device cable including a body portion is described herein. The body portion may extend between a first end region and a second end region along a tube centerline. The body portion may include a first radial portion proximate the first end region and second radial portion proximate the second end region. The first radial portion may define a radius of curvature that is greater than or equal to a radius of curvature defined by the second radial portion. The tube centerline may lie along an x-y plane between the first and second end regions. In one or more embodiments, the body portion may define a passageway extending between the first and second end regions. Further, the hearing device cable may include a superelastic wire within the passageway extending between the first and second end regions.1. A hearing device cable comprising: a body portion extending between a first end region and a second end region, wherein the body portion comprises a first radial portion proximate the first end region and a second radial portion proximate the second end region, wherein the first radial portion defines a radius of curvature that is greater than or equal to a radius of curvature defined by the second radial portion. 2. The cable of claim 1, wherein the radius of curvature of the first radial portion is greater than or equal to 100% and less than or equal to 200% of the radius of curvature of the second radial portion. 3. The cable of claim 1, wherein the body portion defines an S-shape such that the first radial portion extends along an arc that curves in a direction opposite an arc along which the second radial portion extends. 4. The cable of claim 1, wherein the body portion comprises one or more conductive wires and Kevlar. 5. The cable of claim 1, wherein the body portion is adapted to fit within a human ear such that the first end region is positioned above the human ear and the second end region is positioned within an ear canal of the human ear. 6. 
The cable of claim 1, wherein the body portion comprises a UV resistant material. 7. The cable of claim 1, wherein the cable is configurable in a relaxed state and a deflected state, wherein a direct distance between the first end region and the second end region is different in the relaxed state than the deflected state. 8. The cable of claim 1, wherein the body portion defines a passageway extending between the first end region and the second end region. 9. The cable of claim 8, further comprising a superelastic wire within the passageway extending between the first end region and the second end region. 10. A hearing device cable comprising: a body portion extending between a first end region and a second end region along a tube centerline, wherein the body portion comprises a first radial portion proximate the first end region and a second radial portion proximate the second end region, wherein the tube centerline lies along an x-y plane between the first and second end regions. 11. The cable of claim 10, wherein the body portion defines an S-shape such that the first radial portion extends along an arc that curves in a direction opposite an arc along which the second radial portion extends. 12. The cable of claim 10, wherein the body portion is adapted to fit within a human ear such that the first end region is positioned above the human ear and the second end region is positioned within an ear canal of the human ear. 13. The cable of claim 10, wherein the cable is configurable in a relaxed state and a deflected state, wherein a direct distance between the first end region and the second end region is different in the relaxed state than the deflected state. 14. The cable of claim 10, wherein the body portion defines a passageway extending between the first end region and the second end region. 15. 
A hearing device cable comprising: a body portion extending between a first end region and a second end region along a tube centerline, wherein the body portion defines a passageway extending between the first end region and the second end region; and a superelastic wire within the passageway extending between the first end region and the second end region. 16. The cable of claim 15, wherein the superelastic wire comprises nitinol. 17. The cable of claim 15, wherein the superelastic wire defines a deformation temperature greater than or equal to 900 degrees Fahrenheit. 18. The cable of claim 15, wherein the body portion defines a constant interior length and inside diameter between the first end region and the second end region. 19. The cable of claim 15, wherein the body portion comprises a first radial portion proximate the first end region and a second radial portion proximate the second end region, wherein the first radial portion defines a radius of curvature that is greater than or equal to a radius of curvature defined by the second radial portion. 20. The cable of claim 15, wherein the body portion comprises a first radial portion proximate the first end region and a second radial portion proximate the second end region, wherein the tube centerline lies along an x-y plane between the first and second end regions.
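The geometric constraint in claim 2 of this record (the first radial portion's radius of curvature between 100% and 200% of the second's) reduces to a simple interval check. A minimal sketch, with illustrative radii in millimeters; the function name and values are assumptions:

```python
# Checks the claim-2 constraint: r_first must be >= 100% and <= 200%
# of r_second (both radii of curvature, here in millimeters).
def radii_within_spec(r_first_mm, r_second_mm):
    """True if r_first is between 100% and 200% of r_second, inclusive."""
    return r_second_mm <= r_first_mm <= 2.0 * r_second_mm

print(radii_within_spec(12.0, 8.0))  # 150% of r_second -> True
print(radii_within_spec(20.0, 8.0))  # 250% of r_second -> False
```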
2,600
10,995
10,995
16,556,358
2,658
Systems, methods performed by data processing apparatus and computer storage media encoded with computer programs for receiving an utterance from a user in a multi-user environment, each user having an associated set of available resources, determining that the received utterance includes at least one predetermined word, comparing speaker identification features of the uttered predetermined word with speaker identification features of each of a plurality of previous utterances of the predetermined word, the plurality of previous predetermined word utterances corresponding to different known users in the multi-user environment, attempting to identify the user associated with the uttered predetermined word as matching one of the known users in the multi-user environment, and based on a result of the attempt to identify, selectively providing the user with access to one or more resources associated with a corresponding known user.
1. A method comprising: during a user recognition configuration session for each of a plurality of different users in a multi-user environment: storing, on a voice-based authentication device having access to an associated set of personal resources for each of the plurality of different users in the multi-user environment, audio features of a hotword spoken by the corresponding user in one or more user-identification utterances, the hotword comprising a predetermined fixed term that is common to each of the plurality of different users in the multi-user environment; and associating, by the voice-based authentication device, the audio features of the hotword spoken by the corresponding user with the associated set of personal resources for the corresponding user; receiving, at the voice-based authentication device, a first utterance spoken by one of the plurality of different users in the multi-user environment, the first utterance comprising the hotword and a query; after receiving the first utterance, establishing, by the voice-based authentication device, an identity of the user that spoke the first utterance based on audio features of the portion of the first utterance that corresponds to the hotword; in response to establishing the identity of the user that spoke the first utterance, invoking, by the voice-based authentication device, an automated speech recognizer to process the query following the hotword in the first utterance to identify an operation to perform that requires access by the voice-based authentication device to one of the personal resources of the associated set of personal resources for the user that spoke the first utterance; and accessing, by the voice-based authentication device, the required one of the personal resources of the associated set of personal resources for the user that spoke the first utterance to perform the identified operation. 2. 
The method of claim 1, wherein the hotword when spoken in an utterance by any of the plurality of different users, triggers the voice-based authentication device to: invoke the automated speech recognizer to process the query following the hotword in the spoken utterance; and perform speaker identification to identify which user of the plurality of different users spoke the utterance based solely on the hotword. 3. The method of claim 1, further comprising, during the user recognition configuration session: receiving, by the voice-based authentication device, a corresponding username for each of the plurality of different users in the multi-user environment; and associating, by the voice-based authentication device, each corresponding username with the audio features of the hotword spoken by the corresponding user in the one or more user-identification utterances. 4. The method of claim 3, wherein: establishing the identity of the user that spoke the first utterance comprises determining the corresponding username of the user that spoke the first utterance; and accessing the required one of the personal resources comprises accessing the required one of the personal resources from the associated set of personal resources for the user that spoke the first utterance based on the corresponding username of the user that spoke the first utterance. 5. 
The method of claim 1, wherein establishing the identity of the user that spoke the first utterance comprises: comparing the audio features of the portion of the first utterance that corresponds to the hotword to the stored audio features of the hotword spoken by each of the plurality of different users during the user recognition configuration session; and determining, based at least on comparing the audio features of the portion of the first utterance that corresponds to the hotword to the stored audio features of the hotword spoken by each of the plurality of different users, that the audio features of the portion of the first utterance that corresponds to the hotword match one of the stored audio features of the hotword spoken by one of the users during the user recognition configuration session. 6. The method of claim 1, wherein the audio features comprise Mel-Frequency Cepstrum Coefficient (MFCC) features. 7. The method of claim 1, wherein the associated set of personal resources for each of the plurality of different users in the multi-user environment comprise at least one of a contact list, calendar, email, voicemail, social networks, biographical information, or financial information. 8. The method of claim 1, wherein at least one personal resource of the associated set of personal resources is distributed across one or more server computer systems in communication with the voice-based authentication device. 9. The method of claim 1, further comprising providing, by the voice-based authentication device, a response to the query in the first utterance based on performing the identified operation. 10. 
The method of claim 9, wherein providing the response to the query comprises: obtaining, by the voice-based authentication device, a transcription of a portion of the first utterance that corresponds to the query; accessing, by the voice-based authentication device and based at least on the transcription of the portion of the first utterance that corresponds to the query, data from the required one of the personal resources of the associated set of personal resources for the user that spoke the first utterance; and providing, for output by the voice-based authentication device, the data accessed from the required one of the personal resources. 11. A system comprising: data processing hardware of a voice-based authentication device, the voice-based authentication device having access to an associated set of personal resources for each of a plurality of different users in a multi-user environment; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations comprising: during a user recognition configuration session for each of the plurality of different users in the multi-user environment: storing, on the voice-based authentication device, audio features of a hotword spoken by the corresponding user in one or more user-identification utterances, the hotword comprising a predetermined fixed term that is common to each of the plurality of different users in the multi-user environment; and associating the audio features of the hotword spoken by the corresponding user with the associated set of personal resources for the corresponding user; receiving a first utterance spoken by one of the plurality of different users in the multi-user environment, the first utterance comprising the hotword and a query; after receiving the first utterance, establishing an identity of the user that spoke the first utterance based on audio features 
of the portion of the first utterance that corresponds to the hotword; in response to establishing the identity of the user that spoke the first utterance, invoking an automated speech recognizer to process the query following the hotword in the first utterance to identify an operation to perform that requires access by the voice-based authentication device to one of the personal resources of the associated set of personal resources for the user that spoke the first utterance; and accessing the required one of the personal resources of the associated set of personal resources for the user that spoke the first utterance to perform the identified operation. 12. The system of claim 11, wherein the hotword when spoken in an utterance by any of the plurality of different users, triggers the voice-based authentication device to: invoke the automated speech recognizer to process the query following the hotword in the spoken utterance; and perform speaker identification to identify which user of the plurality of different users spoke the utterance based solely on the hotword. 13. The system of claim 11, wherein the operations further comprise, during the user recognition configuration session: receiving a corresponding username for each of the plurality of different users in the multi-user environment; and associating each corresponding username with the audio features of the hotword spoken by the corresponding user in the one or more user-identification utterances. 14. The system of claim 13, wherein: establishing the identity of the user that spoke the first utterance comprises determining the corresponding username of the user that spoke the first utterance; and accessing the required one of the personal resources comprises accessing the required one of the personal resources from the associated set of personal resources for the user that spoke the first utterance based on the corresponding username of the user that spoke the first utterance. 15. 
The system of claim 11, wherein establishing the identity of the user that spoke the first utterance comprises: comparing the audio features of the portion of the first utterance that corresponds to the hotword to the stored audio features of the hotword spoken by each of the plurality of different users during the user recognition configuration session; and determining, based at least on comparing the audio features of the portion of the first utterance that corresponds to the hotword to the stored audio features of the hotword spoken by each of the plurality of different users, that the audio features of the portion of the first utterance that corresponds to the hotword match one of the stored audio features of the hotword spoken by one of the users during the user recognition configuration session. 16. The system of claim 11, wherein the audio features comprise Mel-Frequency Cepstrum Coefficient (MFCC) features. 17. The system of claim 11, wherein the associated set of personal resources for each of the plurality of different users in the multi-user environment comprise at least one of a contact list, calendar, email, voicemail, social networks, biographical information, or financial information. 18. The system of claim 11, wherein at least one personal resource of the associated set of personal resources is distributed across one or more server computer systems in communication with the voice-based authentication device. 19. The system of claim 11, wherein the operations further comprise providing a response to the query in the first utterance based on performing the identified operation. 20. 
The system of claim 19, wherein providing the response to the query comprises: obtaining a transcription of a portion of the first utterance that corresponds to the query; accessing, based at least on the transcription of the portion of the first utterance that corresponds to the query, data from the required one of the personal resources of the associated set of personal resources for the user that spoke the first utterance; and providing, for output by the voice-based authentication device, the data accessed from the required one of the personal resources.
Systems, methods performed by data processing apparatus and computer storage media encoded with computer programs for receiving an utterance from a user in a multi-user environment, each user having an associated set of available resources, determining that the received utterance includes at least one predetermined word, comparing speaker identification features of the uttered predetermined word with speaker identification features of each of a plurality of previous utterances of the predetermined word, the plurality of previous predetermined word utterances corresponding to different known users in the multi-user environment, attempting to identify the user associated with the uttered predetermined word as matching one of the known users in the multi-user environment, and based on a result of the attempt to identify, selectively providing the user with access to one or more resources associated with a corresponding known user.1. A method comprising: during a user recognition configuration session for each of a plurality of different users in a multi-user environment: storing, on a voice-based authentication device having access to an associated set of personal resources for each of the plurality of different users in the multi-user environment, audio features of a hotword spoken by the corresponding user in one or more user-identification utterances, the hotword comprising a predetermined fixed term that is common to each of the plurality of different users in the multi-user environment; and associating, by the voice-based authentication device, the audio features of the hotword spoken by the corresponding user with the associated set of personal resources for the corresponding user; receiving, at the voice-based authentication device, a first utterance spoken by one of the plurality of different users in the multi-user environment, the first utterance comprising the hotword and a query; after receiving the first utterance, establishing, by the voice-based 
authentication device, an identity of the user that spoke the first utterance based on audio features of the portion of the first utterance that corresponds to the hotword; in response to establishing the identity of the user that spoke the first utterance, invoking, by the voice-based authentication device, an automated speech recognizer to process the query following the hotword in the first utterance to identify an operation to perform that requires access by the voice-based authentication device to one of the personal resources of the associated set of personal resources for the user that spoke the first utterance; and accessing, by the voice-based authentication device, the required one of the personal resources of the associated set of personal resources for the user that spoke the first utterance to perform the identified operation. 2. The method of claim 1, wherein the hotword when spoken in an utterance by any of the plurality of different users, triggers the voice-based authentication device to: invoke the automated speech recognizer to process the query following the hotword in the spoken utterance; and perform speaker identification to identify which user of the plurality of different users spoke the utterance based solely on the hotword. 3. The method of claim 1, further comprising, during the user recognition configuration session: receiving, by the voice-based authentication device, a corresponding username for each of the plurality of different users in the multi-user environment; and associating, by the voice-based authentication device, each corresponding username with the audio features of the hotword spoken by the corresponding user in the one or more user-identification utterances. 4. 
The method of claim 3, wherein: establishing the identity of the user that spoke the first utterance comprises determining the corresponding username of the user that spoke the first utterance; and accessing the required one of the personal resources comprises accessing the required one of the personal resources from the associated set of personal resources for the user that spoke the first utterance based on the corresponding username of the user that spoke the first utterance. 5. The method of claim 1, wherein establishing the identity of the user that spoke the first utterance comprises: comparing the audio features of the portion of the first utterance that corresponds to the hotword to the stored audio features of the hotword spoken by each of the plurality of different users during the user recognition configuration session; and determining, based at least on comparing the audio features of the portion of the first utterance that corresponds to the hotword to the stored audio features of the hotword spoken by each of the plurality of different users, that the audio features of the portion of the first utterance that corresponds to the hotword match one of the stored audio features of the hotword spoken by one of the users during the user recognition configuration session. 6. The method of claim 1, wherein the audio features comprise Mel-Frequency Cepstrum Coefficient (MFCC) features. 7. The method of claim 1, wherein the associated set of personal resources for each of the plurality of different users in the multi-user environment comprise at least one of a contact list, calendar, email, voicemail, social networks, biographical information, or financial information. 8. The method of claim 1, wherein at least one personal resource of the associated set of personal resources is distributed across one or more server computer systems in communication with the voice-based authentication device. 9. 
The method of claim 1, further comprising providing, by the voice-based authentication device, a response to the query in the first utterance based on performing the identified operation. 10. The method of claim 9, wherein providing the response to the query comprises: obtaining, by the voice-based authentication device, a transcription of a portion of the first utterance that corresponds to the query; accessing, by the voice-based authentication device and based at least on the transcription of the portion of the first utterance that corresponds to the query, data from the required one of the personal resources of the associated set of personal resources for the user that spoke the first utterance; and providing, for output by the voice-based authentication device, the data accessed from the required one of the personal resources. 11. A system comprising: data processing hardware of a voice-based authentication device, the voice-based authentication device having access to an associated set of personal resources for each of a plurality of different users in a multi-user environment; and memory hardware in communication with the data processing hardware and storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations comprising: during a user recognition configuration session for each of the plurality of different users in the multi-user environment: storing, on the voice-based authentication device, audio features of a hotword spoken by the corresponding user in one or more user-identification utterances, the hotword comprising a predetermined fixed term that is common to each of the plurality of different users in the multi-user environment; and associating the audio features of the hotword spoken by the corresponding user with the associated set of personal resources for the corresponding user; receiving a first utterance spoken by one of the plurality of different users in the multi-user 
environment, the first utterance comprising the hotword and a query; after receiving the first utterance, establishing an identity of the user that spoke the first utterance based on audio features of the portion of the first utterance that corresponds to the hotword; in response to establishing the identity of the user that spoke the first utterance, invoking an automated speech recognizer to process the query following the hotword in the first utterance to identify an operation to perform that requires access by the voice-based authentication device to one of the personal resources of the associated set of personal resources for the user that spoke the first utterance; and accessing the required one of the personal resources of the associated set of personal resources for the user that spoke the first utterance to perform the identified operation. 12. The system of claim 11, wherein the hotword when spoken in an utterance by any of the plurality of different users, triggers the voice-based authentication device to: invoke the automated speech recognizer to process the query following the hotword in the spoken utterance; and perform speaker identification to identify which user of the plurality of different users spoke the utterance based solely on the hotword. 13. The system of claim 11, wherein the operations further comprise, during the user recognition configuration session: receiving a corresponding username for each of the plurality of different users in the multi-user environment; and associating each corresponding username with the audio features of the hotword spoken by the corresponding user in the one or more user-identification utterances. 14. 
The system of claim 13, wherein: establishing the identity of the user that spoke the first utterance comprises determining the corresponding username of the user that spoke the first utterance; and accessing the required one of the personal resources comprises accessing the required one of the personal resources from the associated set of personal resources for the user that spoke the first utterance based on the corresponding username of the user that spoke the first utterance. 15. The system of claim 11, wherein establishing the identity of the user that spoke the first utterance comprises: comparing the audio features of the portion of the first utterance that corresponds to the hotword to the stored audio features of the hotword spoken by each of the plurality of different users during the user recognition configuration session; and determining, based at least on comparing the audio features of the portion of the first utterance that corresponds to the hotword to the stored audio features of the hotword spoken by each of the plurality of different users, that the audio features of the portion of the first utterance that corresponds to the hotword match one of the stored audio features of the hotword spoken by one of the users during the user recognition configuration session. 16. The system of claim 11, wherein the audio features comprise Mel-Frequency Cepstrum Coefficient (MFCC) features. 17. The system of claim 11, wherein the associated set of personal resources for each of the plurality of different users in the multi-user environment comprise at least one of a contact list, calendar, email, voicemail, social networks, biographical information, or financial information. 18. The system of claim 11, wherein at least one personal resource of the associated set of personal resources is distributed across one or more server computer systems in communication with the voice-based authentication device. 19. 
The system of claim 11, wherein the operations further comprise providing a response to the query in the first utterance based on performing the identified operation. 20. The system of claim 19, wherein providing the response to the query comprises: obtaining a transcription of a portion of the first utterance that corresponds to the query; accessing, based at least on the transcription of the portion of the first utterance that corresponds to the query, data from the required one of the personal resources of the associated set of personal resources for the user that spoke the first utterance; and providing, for output by the voice-based authentication device, the data accessed from the required one of the personal resources.
2,600
10,996
10,996
16,122,149
2,689
A wireless sensor assembly includes a housing defining a first aperture for receiving an external sensor, and a second aperture defining a communication port, a wireless power source mounted within the housing, and electronics mounted to the housing and configured to receive power from the wireless power source and to be in electrical communication with the external sensor. The electronics include a wireless communications component, and firmware configured to manage a rate of data transmittal from the wireless communications component to an external device. The communication port is configured to receive a wire harness connected to the external sensor.
1. A wireless sensor assembly comprising: a housing defining a first aperture for receiving an external sensor and a second aperture defining a communication port; a wireless power source mounted within the housing; and electronics mounted to the housing and configured to receive power from the wireless power source and to be in electrical communication with the external sensor, the electronics comprising: a wireless communications component; and firmware configured to manage a rate of data transmittal from the wireless communications component to an external device, wherein the communication port is configured to receive a wire harness. 2. The wireless sensor assembly according to claim 1, wherein the wire harness is configured to be connected to the external device and to transmit data from the external sensor to the external device. 3. The wireless sensor assembly according to claim 1, wherein the wire harness is configured to transmit data from the external sensor to another external device for preventive maintenance purposes. 4. The wireless sensor assembly according to claim 3, wherein the another external device is selected from a group consisting of an Engine Control Unit and a computer. 5. The wireless sensor assembly according to claim 1, wherein the wireless communications component is fixed to the housing. 6. The wireless sensor assembly according to claim 1, wherein the communication port is also configured to receive a removable dongle. 7. The wireless sensor assembly according to claim 6, wherein the wireless communications component is provided in the removable dongle. 8. The wireless sensor assembly according to claim 1, wherein the communication port is configured to be adapted for both the wire harness and a removable dongle. 9. 
The wireless sensor assembly according to claim 1, wherein the housing includes a main body, a first tubular portion and a second tubular portion extending from the main body along a direction parallel to a longitudinal axis of the main body. 10. The wireless sensor assembly according to claim 9, wherein the first tubular portion and the second tubular portion extend from opposing ends of the main body and are offset from the longitudinal axis of the main body. 11. The wireless sensor assembly according to claim 1, wherein the electronics are powered exclusively by the wireless power source. 12. The wireless sensor assembly according to claim 11, wherein the wireless power source is selected from the group consisting of a self-powering device, a thermoelectric device, and a battery. 13. The wireless sensor assembly according to claim 12, wherein the self-powering device is a vibration device comprising a piezo-electric device mounted to a cantilevered board. 14. The wireless sensor assembly according to claim 1, further comprising a communication connector received in the second aperture, wherein the communication connector is selected from the group consisting of USB, USB-C, Ethernet, CAN, and Aspirated TIP/Ethernet. 15. The wireless sensor assembly according to claim 1, wherein the external sensor is selected from the group consisting of a temperature sensor, a pressure sensor, a gas sensor, and an optical sensor. 16. The wireless sensor assembly according to claim 1, wherein the wireless communications component is selected from the group consisting of a Bluetooth module, a WiFi module, and a LiFi module. 17. The wireless sensor assembly according to claim 1, wherein the external sensor includes an identification code, wherein during installation or replacement of the wireless sensor assembly, calibration information of the external sensor can be accessed through an external device in wireless communication with the wireless sensor assembly. 18. 
The wireless sensor assembly according to claim 17, wherein the identification code is at least one of an RFID tag and a barcode. 19. The wireless sensor assembly according to claim 18, wherein the wireless communications component sends data to the external device, and the external device performs functions selected from the group consisting of data logging, computations, and re-transmitting the data to a remote device for processing. 20. The wireless sensor assembly according to claim 1, wherein the wireless power source is a battery, and the firmware controls a rate of data transmittal from the wireless communications component as a function of battery life.
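Claim 20 above has the firmware control the rate of data transmittal as a function of battery life. One way to sketch that behavior is to stretch the interval between wireless transmissions as the battery drains; the thresholds and interval values here are assumptions for illustration, not values from the patent:

```python
def transmit_interval_s(battery_fraction, base_interval_s=1.0, max_interval_s=60.0):
    """Return the seconds between transmissions for a given battery level.

    Hypothetical policy: full rate on a healthy battery, a 10x backoff when
    low, and a minimum duty cycle when critical.
    """
    if not 0.0 <= battery_fraction <= 1.0:
        raise ValueError("battery_fraction must be in [0, 1]")
    if battery_fraction >= 0.5:
        return base_interval_s        # healthy battery: full reporting rate
    if battery_fraction >= 0.2:
        return base_interval_s * 10   # low battery: back off an order of magnitude
    return max_interval_s             # critical battery: minimum duty cycle
```

A step policy like this is easy to implement in firmware; a smoother alternative would interpolate the interval continuously between the two endpoints.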
A wireless sensor assembly includes a housing defining a first aperture for receiving an external sensor, and a second aperture defining a communication port, a wireless power source mounted within the housing, and electronics mounted to the housing and configured to receive power from the wireless power source and to be in electrical communication with the external sensor. The electronics include a wireless communications component, and firmware configured to manage a rate of data transmittal from the wireless communications component to an external device. The communication port is configured to receive a wire harness connected to the external sensor.1. A wireless sensor assembly comprising: a housing defining a first aperture for receiving an external sensor and a second aperture defining a communication port; a wireless power source mounted within the housing; and electronics mounted to the housing and configured to receive power from the wireless power source and to be in electrical communication with the external sensor, the electronics comprising: a wireless communications component; and firmware configured to manage a rate of data transmittal from the wireless communications component to an external device, wherein the communication port is configured to receive a wire harness. 2. The wireless sensor assembly according to claim 1, wherein the wire harness is configured to be connected to the external device and to transmit data from the external sensor to the external device. 3. The wireless sensor assembly according to claim 1, wherein the wire harness is configured to transmit data from the external sensor to another external device for preventive maintenance purposes. 4. The wireless sensor assembly according to claim 3, wherein the another external device is selected from a group consisting of an Engine Control Unit and a computer. 5. 
The wireless sensor assembly according to claim 1, wherein the wireless communications component is fixed to the housing. 6. The wireless sensor assembly according to claim 1, wherein the communication port is also configured to receive a removable dongle. 7. The wireless sensor assembly according to claim 6, wherein the wireless communications component is provided in the removable dongle. 8. The wireless sensor assembly according to claim 1, wherein the communication port is configured to be adapted for both the wire harness and a removable dongle. 9. The wireless sensor assembly according to claim 1, wherein the housing includes a main body, a first tubular portion and a second tubular portion extending from the main body along a direction parallel to a longitudinal axis of the main body. 10. The wireless sensor assembly according to claim 9, wherein the first tubular portion and the second tubular portion extend from opposing ends of the main body and are offset from the longitudinal axis of the main body. 11. The wireless sensor assembly according to claim 1, wherein the electronics are powered exclusively by the wireless power source. 12. The wireless sensor assembly according to claim 11, wherein the wireless power source is selected from the group consisting of a self-powering device, a thermoelectric device, and a battery. 13. The wireless sensor assembly according to claim 12, wherein the self-powering device is a vibration device comprising a piezo-electric device mounted to a cantilevered board. 14. The wireless sensor assembly according to claim 1, further comprising a communication connector received in the second aperture, wherein the communication connector is selected from the group consisting of USB, USB-C, Ethernet, CAN, and Aspirated TIP/Ethernet. 15. 
The wireless sensor assembly according to claim 1, wherein the external sensor is selected from the group consisting of a temperature sensor, a pressure sensor, a gas sensor, and an optical sensor. 16. The wireless sensor assembly according to claim 1, wherein the wireless communications component is selected from the group consisting of a Bluetooth module, a WiFi module, and a LiFi module. 17. The wireless sensor assembly according to claim 1, wherein the external sensor includes an identification code, wherein during installation or replacement of the wireless sensor assembly, calibration information of the external sensor can be accessed through an external device in wireless communication with the wireless sensor assembly. 18. The wireless sensor assembly according to claim 17, wherein the identification code is at least one of an RFID tag and a barcode. 19. The wireless sensor assembly according to claim 18, wherein the wireless communications component sends data to the external device, and the external device performs functions selected from the group consisting of data logging, computations, and re-transmitting the data to a remote device for processing. 20. The wireless sensor assembly according to claim 1, wherein the wireless power source is a battery, and the firmware controls a rate of data transmittal from the wireless communications component as a function of battery life.
2,600
10,997
10,997
16,634,597
2,643
A vehicle occupant messaging system includes a communication tag located in a buckle of a vehicle seatbelt assembly and configured to wirelessly transmit signals, to a mobile device, representing a location of a passenger using the mobile device in a host vehicle. The system further includes a communication transceiver programmed to transmit messages to the mobile device and a system processor programmed to generate the messages to the mobile device, the messages identifying the location of the passenger.
1. A vehicle occupant messaging system comprising: a communication tag located in a buckle of a vehicle seatbelt assembly and configured to wirelessly transmit signals, to a mobile device, representing a location of a passenger using the mobile device in a host vehicle; a communication transceiver programmed to transmit messages to the mobile device; and a system processor programmed to generate the messages to the mobile device, the messages identifying the location of the passenger. 2. The vehicle occupant messaging system of claim 1, wherein the buckle is configured to block the signals transmitted by the communication tag when the seatbelt assembly is unfastened. 3. The vehicle occupant messaging system of claim 2, wherein the buckle includes a metal shield that at least partially encapsulates the communication tag when the seatbelt assembly is unfastened. 4. The vehicle occupant messaging system of claim 2, wherein the vehicle seatbelt assembly includes a latch that moves the metal shield away from the communication tag when the seatbelt assembly is fastened. 5. The vehicle occupant messaging system of claim 1, wherein the system processor is programmed to receive the location of the passenger from the mobile device. 6. The vehicle occupant messaging system of claim 1, wherein the system processor is programmed to select a contextual user interface based on the location of the passenger. 7. The vehicle occupant messaging system of claim 6, wherein the system processor is programmed to command the mobile device to present the contextual user interface. 8. The vehicle occupant messaging system of claim 1, wherein the communication transceiver is programmed to receive commands from the mobile device and wherein the system processor is programmed to process the commands received from the mobile device. 9. 
The vehicle occupant messaging system of claim 1, wherein the messages generated by the system processor include at least one of driver-assistance system alerts, messages associated with a vehicle infotainment system, and messages associated with a climate control system. 10. The vehicle occupant messaging system of claim 1, wherein the mobile device is programmed to filter messages according to the location represented by the signal transmitted from the communication tag and only display filtered messages. 11. A method comprising: determining a location of a mobile device in a host vehicle based on signals received from a communication tag located in a buckle of a seatbelt assembly; displaying, via the mobile device, a contextual user interface associated with the location of the mobile device in the host vehicle; receiving messages transmitted from an occupant messaging system; filtering the messages according to the location of the mobile device in the host vehicle; and displaying the filtered messages on the mobile device. 12. The method of claim 11, further comprising transmitting the location of the mobile device in the host vehicle to the occupant messaging system. 13. The method of claim 12, further comprising receiving the contextual user interface from the occupant messaging system. 14. The method of claim 13, wherein receiving the contextual user interface from the occupant messaging system occurs after transmitting the location of the mobile device in the host vehicle to the occupant messaging system. 15. The method of claim 11, wherein displaying the contextual user interface includes presenting prompts for commands for a passenger to control at least one vehicle subsystem via the mobile device. 16. The method of claim 15, further comprising receiving a user input representing a command to control at least one vehicle subsystem as a result of presenting prompts for the passenger to control at least one vehicle subsystem. 17. 
The method of claim 16, further comprising transmitting the command to the occupant messaging system. 18. The method of claim 15, wherein the prompts presented on the contextual user interface are based at least in part on the location of the mobile device in the host vehicle.
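Claim 11 of this record recites filtering messages according to the location of the mobile device (derived from the seatbelt-buckle tag) and displaying only the filtered messages. A minimal sketch of that filtering step, with seat labels and message fields chosen for illustration (the claims do not specify a message format):

```python
def filter_messages(messages, device_location):
    """Keep only messages addressed to this seat location or broadcast to all.

    The 'target'/'all' convention is an assumption for this sketch.
    """
    return [m for m in messages if m.get("target") in (device_location, "all")]

msgs = [
    {"target": "driver", "text": "Lane-keep assist disengaged"},
    {"target": "rear-left", "text": "Rear climate set to 21 C"},
    {"target": "all", "text": "Low fuel"},
]

# A device whose buckle tag reports the "rear-left" seat would display only
# the rear-left message and the broadcast message.
rear_left_view = filter_messages(msgs, "rear-left")
```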
A vehicle occupant messaging system includes a communication tag located in a buckle of a vehicle seatbelt assembly and configured to wirelessly transmit signals, to a mobile device, representing a location of a passenger using the mobile device in a host vehicle. The system further includes a communication transceiver programmed to transmit messages to the mobile device and a system processor programmed to generate the messages to the mobile device, the messages identifying the location of the passenger.1. A vehicle occupant messaging system comprising: a communication tag located in a buckle of a vehicle seatbelt assembly and configured to wirelessly transmit signals, to a mobile device, representing a location of a passenger using the mobile device in a host vehicle; a communication transceiver programmed to transmit messages to the mobile device; and a system processor programmed to generate the messages to the mobile device, the messages identifying the location of the passenger. 2. The vehicle occupant messaging system of claim 1, wherein the buckle is configured to block the signals transmitted by the communication tag when the seatbelt assembly is unfastened. 3. The vehicle occupant messaging system of claim 2, wherein the buckle includes a metal shield that at least partially encapsulates the communication tag when the seatbelt assembly is unfastened. 4. The vehicle occupant messaging system of claim 2, wherein the vehicle seatbelt assembly includes a latch that moves the metal shield away from the communication tag when the seatbelt assembly is fastened. 5. The vehicle occupant messaging system of claim 1, wherein the system processor is programmed to receive the location of the passenger from the mobile device. 6. The vehicle occupant messaging system of claim 1, wherein the system processor is programmed to select a contextual user interface based on the location of the passenger. 7. 
The vehicle occupant messaging system of claim 6, wherein the system processor is programmed to command the mobile device to present the contextual user interface. 8. The vehicle occupant messaging system of claim 1, wherein the communication transceiver is programmed to receive commands from the mobile device and wherein the system processor is programmed to process the commands received from the mobile device. 9. The vehicle occupant messaging system of claim 1, wherein the messages generated by the system processor include at least one of driver-assistance system alerts, messages associated with a vehicle infotainment system, and messages associated with a climate control system. 10. The vehicle occupant messaging system of claim 1, wherein the mobile device is programmed to filter messages according to the location represented by the signal transmitted from the communication tag and only display filtered messages. 11. A method comprising: determining a location of a mobile device in a host vehicle based on signals received from a communication tag located in a buckle of a seatbelt assembly; displaying, via the mobile device, a contextual user interface associated with the location of the mobile device in the host vehicle; receiving messages transmitted from an occupant messaging system; filtering the messages according to the location of the mobile device in the host vehicle; and displaying the filtered messages on the mobile device. 12. The method of claim 11, further comprising transmitting the location of the mobile device in the host vehicle to the occupant messaging system. 13. The method of claim 12, further comprising receiving the contextual user interface from the occupant messaging system. 14. The method of claim 13, wherein receiving the contextual user interface from the occupant messaging system occurs after transmitting the location of the mobile device in the host vehicle to the occupant messaging system. 15. 
The method of claim 11, wherein displaying the contextual user interface includes presenting prompts for commands for a passenger to control at least one vehicle subsystem via the mobile device. 16. The method of claim 15, further comprising receiving a user input representing a command to control at least one vehicle subsystem as a result of presenting prompts for the passenger to control at least one vehicle subsystem. 17. The method of claim 16, further comprising transmitting the command to the occupant messaging system. 18. The method of claim 15, wherein the prompts presented on the contextual user interface are based at least in part on the location of the mobile device in the host vehicle.
2,600
10,998
10,998
16,028,301
2,613
Motion and/or rotation of an input mechanism can be tracked and/or analyzed to determine limits on a user's range of motion and/or a user's range of rotation in three-dimensional space. The user's range of motion and/or the user's range of rotation in three-dimensional space may be limited by a personal restriction for the user (e.g., a broken arm). The user's range of motion and/or the user's range of rotation in three-dimensional space may additionally or alternatively be limited by an environmental restriction (e.g., a physical object in a room). Accordingly, the techniques described herein can take steps to accommodate the personal restriction and/or the environmental restriction thereby optimizing user interactions involving the input mechanism and a virtual object.
1. A device comprising: a display configured to present virtual content; an interface communicatively coupled to an input mechanism that is configured to collect data associated with motion of the input mechanism based on a current location of a user; one or more processors communicatively coupled to the display; and memory having computer-executable instructions stored thereon which, when executed by the one or more processors, cause the device to perform operations comprising: receiving, via the interface and from the input mechanism, first data associated with first motion of the input mechanism; analyzing the first data associated with the first motion of the input mechanism to determine a range of motion of the input mechanism; adjusting, based at least in part on the range of motion, a parameter that correlates an amount of motion of the input mechanism to an amount of motion of a virtual element presented by the display; receiving, via the interface and from the input mechanism, second data associated with second motion of the input mechanism; and converting, using the adjusted parameter, the second motion of the input mechanism into correlated motion of the virtual element. 2. The device of claim 1, wherein: the analyzing reveals that the first motion of the input mechanism is indicative of a number of repeated attempts to perform a same interaction between the virtual element and a virtual object presented by the display; and the operations further comprise: determining that the number of repeated attempts exceeds a threshold number of attempts; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the number of repeated attempts exceeds the threshold number of attempts. 3. 
The device of claim 1, wherein: the analyzing reveals that the first motion of the input mechanism is associated with a period of time during which the user is attempting to perform an individual interaction; and the operations further comprise: determining that the period of time exceeds a threshold period of time; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the period of time exceeds the threshold period of time. 4. The device of claim 1, wherein: the motion of the input mechanism occurs within a first three-dimensional coordinate space; the motion of the virtual element occurs within a second three-dimensional coordinate space that is larger than the first three-dimensional coordinate space; and the adjusting the parameter increases the amount of motion of the virtual element within the second three-dimensional coordinate space relative to the amount of motion of the input mechanism within the first three-dimensional coordinate space. 5. The device of claim 4, wherein: the first three-dimensional coordinate space is associated with three-dimensional space within reach of an arm and a hand of the user; the second three-dimensional coordinate space is associated with three-dimensional space within a view of the user; and the range of motion of the input mechanism is limited due to a detected impossibility or a detected difficulty for the user to move the input mechanism to a position in the first three-dimensional coordinate space. 6. The device of claim 1, wherein the display comprises a transparent display that presents virtual content in association with real-world content within a view of the user. 7. The device of claim 1, wherein the data associated with the motion of the input mechanism comprises a change in position of the input mechanism over time. 8. The device of claim 1, wherein the virtual element comprises at least one of a cursor element or a pointer element. 9. 
The device of claim 1, wherein: the input mechanism is further configured to collect data associated with rotation of the input mechanism, the data associated with the rotation of the input mechanism comprising a change in orientation of the input mechanism; and the operations further comprise: receiving, via the interface and from the input mechanism, third data associated with rotation of the input mechanism; analyzing the third data associated with the rotation of the input mechanism to determine a range of rotation of the input mechanism; and adjusting, based at least in part on the range of rotation, another parameter that increases an amount of rotation of the input mechanism relative to an amount of rotation of the virtual element presented by the display. 10. The device of claim 1, wherein the operations further comprise: storing the adjusted parameter in an interaction profile associated with the current location of the user; determining that the user has returned to the current location at a later time; and activating the interaction profile to enable subsequent interactions using the adjusted parameter. 11. A method comprising: receiving, from an input mechanism that is communicatively coupled to a device, first data associated with first motion of the input mechanism; analyzing, by one or more processors, the first data associated with the first motion of the input mechanism to determine a range of motion of the input mechanism; adjusting, based at least in part on the range of motion, a parameter that correlates an amount of motion of the input mechanism to an amount of motion of a virtual element presented by a display; receiving, from the input mechanism, second data associated with second motion of the input mechanism; and converting, using the adjusted parameter, the second motion of the input mechanism into correlated motion of the virtual element. 12. 
The method of claim 11, wherein: the analyzing reveals that the first motion of the input mechanism is indicative of a number of repeated attempts to perform a same interaction between the virtual element and a virtual object presented by the display; and the method further comprises: determining that the number of repeated attempts exceeds a threshold number of attempts; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the number of repeated attempts exceeds the threshold number of attempts. 13. The method of claim 11, wherein: the analyzing reveals that the first motion of the input mechanism is associated with a period of time during which the user is attempting to perform an individual interaction; and the method further comprises: determining that the period of time exceeds a threshold period of time; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the period of time exceeds the threshold period of time. 14. The method of claim 11, wherein: the motion of the input mechanism occurs within a first three-dimensional coordinate space; the motion of the virtual element occurs within a second three-dimensional coordinate space that is larger than the first three-dimensional coordinate space; and the adjusting the parameter increases the amount of motion of the virtual element within the second three-dimensional coordinate space relative to the amount of motion of the input mechanism within the first three-dimensional coordinate space. 15. 
The method of claim 14, wherein: the first three-dimensional coordinate space is associated with three-dimensional space within reach of an arm and a hand of a user; the second three-dimensional coordinate space is associated with three-dimensional space within a view of the user; and the range of motion of the input mechanism is limited due to a detected impossibility or a detected difficulty for the user to move the input mechanism to a position in the first three-dimensional coordinate space. 16. The method of claim 11, further comprising: storing the adjusted parameter in an interaction profile associated with a current location of the user; determining that the user has returned to the current location at a later time; and activating the interaction profile to enable subsequent interactions using the adjusted parameter. 17. One or more computer storage media storing instructions that, when executed by one or more processors, cause a device to perform operations comprising: receiving, from an input mechanism that is communicatively coupled to the device, first data associated with first motion of the input mechanism; analyzing the first data associated with the first motion of the input mechanism to determine a range of motion of the input mechanism; adjusting, based at least in part on the range of motion, a parameter that correlates an amount of motion of the input mechanism to an amount of motion of a virtual element presented by a display; receiving, from the input mechanism, second data associated with second motion of the input mechanism; and converting, using the adjusted parameter, the second motion of the input mechanism into correlated motion of the virtual element. 18. 
The one or more computer storage media of claim 17, wherein: the analyzing reveals that the first motion of the input mechanism is indicative of a number of repeated attempts to perform a same interaction between the virtual element and a virtual object presented by the display; and the operations further comprise: determining that the number of repeated attempts exceeds a threshold number of attempts; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the number of repeated attempts exceeds the threshold number of attempts. 19. The one or more computer storage media of claim 17, wherein: the analyzing reveals that the first motion of the input mechanism is associated with a period of time during which the user is attempting to perform an individual interaction; and the operations further comprise: determining that the period of time exceeds a threshold period of time; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the period of time exceeds the threshold period of time. 20. The one or more computer storage media of claim 17, wherein the operations further comprise: storing the adjusted parameter in an interaction profile associated with a current location of a user; determining that the user has returned to the current location at a later time; and activating the interaction profile to enable subsequent interactions using the adjusted parameter.
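Claims 1 and 4 of this record describe adjusting a parameter that correlates input-mechanism motion to virtual-element motion, increasing virtual motion when the user's measured range of motion is smaller than the virtual coordinate space. A minimal sketch of that gain adjustment, using centimeters and names assumed for illustration:

```python
def motion_gain(observed_reach_cm, virtual_extent_cm):
    """Gain that maps the user's measured range of motion onto the larger
    virtual coordinate space, so a limited reach still spans the full view.

    A plain ratio is an assumption; the claims leave the mapping unspecified.
    """
    return virtual_extent_cm / observed_reach_cm

def to_virtual(delta_cm, gain):
    """Convert physical input-mechanism motion into correlated virtual motion."""
    return delta_cm * gain

# A user who can move the controller through only 30 cm, facing a 90 cm
# virtual workspace, gets a 3x gain: a 10 cm hand movement drives the
# virtual cursor 30 cm.
gain = motion_gain(30, 90)
```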
Motion and/or rotation of an input mechanism can be tracked and/or analyzed to determine limits on a user's range of motion and/or a user's range of rotation in three-dimensional space. The user's range of motion and/or the user's range of rotation in three-dimensional space may be limited by a personal restriction for the user (e.g., a broken arm). The user's range of motion and/or the user's range of rotation in three-dimensional space may additionally or alternatively be limited by an environmental restriction (e.g., a physical object in a room). Accordingly, the techniques described herein can take steps to accommodate the personal restriction and/or the environmental restriction thereby optimizing user interactions involving the input mechanism and a virtual object.1. A device comprising: a display configured to present virtual content; an interface communicatively coupled to an input mechanism that is configured to collect data associated with motion of the input mechanism based on a current location of a user; one or more processors communicatively coupled to the display; and memory having computer-executable instructions stored thereon which, when executed by the one or more processors, cause the device to perform operations comprising: receiving, via the interface and from the input mechanism, first data associated with first motion of the input mechanism; analyzing the first data associated with the first motion of the input mechanism to determine a range of motion of the input mechanism; adjusting, based at least in part on the range of motion, a parameter that correlates an amount of motion of the input mechanism to an amount of motion of a virtual element presented by the display; receiving, via the interface and from the input mechanism, second data associated with second motion of the input mechanism; and converting, using the adjusted parameter, the second motion of the input mechanism into correlated motion of the virtual element. 2. 
The device of claim 1, wherein: the analyzing reveals that the first motion of the input mechanism is indicative of a number of repeated attempts to perform a same interaction between the virtual element and a virtual object presented by the display; and the operations further comprise: determining that the number of repeated attempts exceeds a threshold number of attempts; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the number of repeated attempts exceeds the threshold number of attempts. 3. The device of claim 1, wherein: the analyzing reveals that the first motion of the input mechanism is associated with a period of time during which the user is attempting to perform an individual interaction; and the operations further comprise: determining that the period of time exceeds a threshold period of time; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the period of time exceeds the threshold period of time. 4. The device of claim 1, wherein: the motion of the input mechanism occurs within a first three-dimensional coordinate space; the motion of the virtual element occurs within a second three-dimensional coordinate space that is larger than the first three-dimensional coordinate space; and the adjusting the parameter increases the amount of motion of the virtual element within the second three-dimensional coordinate space relative to the amount of motion of the input mechanism within the first three-dimensional coordinate space. 5. 
The device of claim 4, wherein: the first three-dimensional coordinate space is associated with three-dimensional space within reach of an arm and a hand of the user; the second three-dimensional coordinate space is associated with three-dimensional space within a view of the user; and the range of motion of the input mechanism is limited due to a detected impossibility or a detected difficulty for the user to move the input mechanism to a position in the first three-dimensional coordinate space. 6. The device of claim 1, wherein the display comprises a transparent display that presents virtual content in association with real-world content within a view of the user. 7. The device of claim 1, wherein the data associated with the motion of the input mechanism comprises a change in position of the input mechanism over time. 8. The device of claim 1, wherein the virtual element comprises at least one of a cursor element or a pointer element. 9. The device of claim 1, wherein: the input mechanism is further configured to collect data associated with rotation of the input mechanism, the data associated with the rotation of the input mechanism comprising a change in orientation of the input mechanism; and the operations further comprise: receiving, via the interface and from the input mechanism, third data associated with rotation of the input mechanism; analyzing the third data associated with the rotation of the input mechanism to determine a range of rotation of the input mechanism; and adjusting, based at least in part on the range of rotation, another parameter that increases an amount of rotation of the input mechanism relative to an amount of rotation of the virtual element presented by the display. 10. 
The device of claim 1, wherein the operations further comprise: storing the adjusted parameter in an interaction profile associated with the current location of the user; determining that the user has returned to the current location at a later time; and activating the interaction profile to enable subsequent interactions using the adjusted parameter. 11. A method comprising: receiving, from an input mechanism that is communicatively coupled to a device, first data associated with first motion of the input mechanism; analyzing, by one or more processors, the first data associated with the first motion of the input mechanism to determine a range of motion of the input mechanism; adjusting, based at least in part on the range of motion, a parameter that correlates an amount of motion of the input mechanism to an amount of motion of a virtual element presented by a display; receiving, from the input mechanism, second data associated with second motion of the input mechanism; and converting, using the adjusted parameter, the second motion of the input mechanism into correlated motion of the virtual element. 12. The method of claim 11, wherein: the analyzing reveals that the first motion of the input mechanism is indicative of a number of repeated attempts to perform a same interaction between the virtual element and a virtual object presented by the display; and the method further comprises: determining that the number of repeated attempts exceeds a threshold number of attempts; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the number of repeated attempts exceeds the threshold number of attempts. 13. 
The method of claim 11, wherein: the analyzing reveals that the first motion of the input mechanism is associated with a period of time during which the user is attempting to perform an individual interaction; and the method further comprises: determining that the period of time exceeds a threshold period of time; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the period of time exceeds the threshold period of time. 14. The method of claim 11, wherein: the motion of the input mechanism occurs within a first three-dimensional coordinate space; the motion of the virtual element occurs within a second three-dimensional coordinate space that is larger than the first three-dimensional coordinate space; and the adjusting the parameter increases the amount of motion of the virtual element within the second three-dimensional coordinate space relative to the amount of motion of the input mechanism within the first three-dimensional coordinate space. 15. The method of claim 14, wherein: the first three-dimensional coordinate space is associated with three-dimensional space within reach of an arm and a hand of a user; the second three-dimensional coordinate space is associated with three-dimensional space within a view of the user; and the range of motion of the input mechanism is limited due to a detected impossibility or a detected difficulty for the user to move the input mechanism to a position in the first three-dimensional coordinate space. 16. The method of claim 11, further comprising: storing the adjusted parameter in an interaction profile associated with a current location of the user; determining that the user has returned to the current location at a later time; and activating the interaction profile to enable subsequent interactions using the adjusted parameter. 17. 
One or more computer storage media storing instructions that, when executed by one or more processors, cause a device to perform operations comprising: receiving, from an input mechanism that is communicatively coupled to the device, first data associated with first motion of the input mechanism; analyzing the first data associated with the first motion of the input mechanism to determine a range of motion of the input mechanism; adjusting, based at least in part on the range of motion, a parameter that correlates an amount of motion of the input mechanism to an amount of motion of a virtual element presented by a display; receiving, from the input mechanism, second data associated with second motion of the input mechanism; and converting, using the adjusted parameter, the second motion of the input mechanism into correlated motion of the virtual element. 18. The one or more computer storage media of claim 17, wherein: the analyzing reveals that the first motion of the input mechanism is indicative of a number of repeated attempts to perform a same interaction between the virtual element and a virtual object presented by the display; and the operations further comprise: determining that the number of repeated attempts exceeds a threshold number of attempts; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the number of repeated attempts exceeds the threshold number of attempts. 19. 
The one or more computer storage media of claim 17, wherein: the analyzing reveals that the first motion of the input mechanism is associated with a period of time during which the user is attempting to perform an individual interaction; and the operations further comprise: determining that the period of time exceeds a threshold period of time; and determining that the range of motion of the input mechanism is limited based at least in part on the determining that the period of time exceeds the threshold period of time. 20. The one or more computer storage media of claim 17, wherein the operations further comprise: storing the adjusted parameter in an interaction profile associated with a current location of a user; determining that the user has returned to the current location at a later time; and activating the interaction profile to enable subsequent interactions using the adjusted parameter.
2,600
10,999
10,999
16,909,191
2,647
The invention concerns a method for establishing a telecommunication connection between a calling subscriber ( 1 ) and a target subscriber ( 5 ), in which a calling terminal (R) of the calling subscriber ( 1 ) and a target terminal (Z) of the target subscriber ( 5 ) are assigned to a switching unit ( 100 ). The calling subscriber ( 1 ) enters a callback request (CCBS-REQ, CCNR-REQ) in the switching unit ( 100 ) if the target terminal (Z) is busy or cannot be reached when a first call is made from the calling terminal (R) to the target terminal (Z), and configuration data regarding the accessibility of the target subscriber ( 5 ) for a callback are entered into the switching unit ( 100 ). By processing these configuration data, the switching unit ( 100 ) sets the time at which it will make the pending callback (RC) to the calling subscriber ( 1 ) in order to generate a second call to the target subscriber ( 5 ), choosing a time at which the target subscriber ( 5 ) is reachable for a callback. The switching unit ( 100 ) then makes the pending callback (RC) to the calling subscriber ( 1 ) at the set time.
1-15. (canceled) 16. A method of telecommunication comprising: receiving, by the switching unit, a first callback request that requests a first callback after a first call attempted from a calling terminal of a calling subscriber to a target terminal of a target subscriber is unanswered; receiving, by the switching unit, configuration data regarding accessibility of the target subscriber for the first callback, the configuration data regarding accessibility of the target subscriber for the first callback comprising data regarding a time period in which the target subscriber is reachable for the first callback and a time period in which the target subscriber is unreachable for the first callback; receiving, by the switching unit, configuration data regarding accessibility of the calling subscriber for the first callback; the switching unit setting a set time for the switching unit to initiate the first callback to generate a second call between the calling terminal and the target terminal based on the configuration data concerning the accessibility of the target subscriber and the configuration data concerning the accessibility of the calling subscriber such that the set time is a time at which both the target subscriber and the calling subscriber are reachable for the first callback; the switching unit transmitting an appointment request to a date planning system or a conference system managed outside of the switching unit, the appointment request identifying the set time for the switching unit to initiate the first callback, the appointment request configured to communicate a meeting appointment for the target subscriber and the calling subscriber at the set time so that the conference system or the date planning system is updateable for the target subscriber and the calling subscriber based on the appointment request; the switching unit initiating the first callback at the set time. 17. 
The method of claim 16, wherein the first call attempt is unanswered when the first call attempt is a Call Completion on Busy Subscriber (CCBS) event or a Call Completion on No Reply (CCNR) event. 18. The method of claim 16, wherein the first call attempt is unanswered when the first call attempt is a Call Completion on Busy Subscriber (CCBS) event. 19. The method of claim 16, comprising: managing a callback list of the target subscriber such that a priority of the first callback within the callback list is adjusted so that the first callback has a higher priority than an earlier entered second callback on the callback list based on information relating to at least one of the calling subscriber and the first call. 20. The method of claim 19, wherein the managing of the callback list comprises: assigning a higher priority to the first callback over a second callback of the callback list that was entered in the callback list earlier than the first callback. 21. The method of claim 19, wherein the information relating to the first calling subscriber comprises information identifying an importance of the first calling subscriber. 22. The method of claim 19, comprising: blocking the calling terminal and the target terminal for calls other than a second call of the first callback before the first callback is initiated. 23. The method of claim 16, further comprising: prioritizing the first callback within a list of callbacks of the target subscriber. 24. The method of claim 16, further comprising: sending an acknowledgement message from the switching unit to the calling terminal after the first callback request is received. 25. The method of claim 16, comprising: the date planning system or the conference system updating a profile for the target subscriber and updating a profile for the calling subscriber based on the appointment request received from the switching unit. 26. 
The method of claim 25, wherein the appointment request is configured so the appointment request is enterable as an e-mail or a short message system message by the date planning system or the conference system. 27. The method of claim 16, comprising: entering configuration data regarding accessibility of the calling subscriber into the switching unit via a first interface of the switching unit, the configuration data of the calling subscriber comprising activity status data associated with the calling subscriber; and entering the configuration data regarding accessibility of the target subscriber into the switching unit via the first interface, the configuration data of the target subscriber comprising activity status data associated with the target subscriber; wherein the activity status data associated with the calling subscriber is managed outside the switching unit and wherein the activity status data associated with the target subscriber is managed outside the switching unit. 28. The method of claim 27, wherein the activity status data associated with the calling subscriber comprises data generated by a computer for recording and reporting of activities of the calling subscriber. 29. The method of claim 16, wherein the receiving, by the switching unit, of the configuration data regarding accessibility of the target subscriber comprises the switching unit receiving activity status messages providing presence data relating to the target subscriber from a computer based unit, the computer based unit configured to monitor a presence of the target subscriber and a presence of the calling subscriber. 30. The method of claim 29, wherein at least one of the activity status messages providing presence data relating to the target subscriber indicates one of: that the target subscriber is within an office in which the target terminal is located, and that the target subscriber is within a moving vehicle. 31. 
The method of claim 30, wherein the switching unit manages a callback list of the target subscriber. 32. A switching unit for establishing a telecommunication connection between a calling terminal of a calling subscriber and a target terminal of a target subscriber, the switching unit comprising: a first storage unit for storing configuration data regarding accessibility of the target subscriber for a callback, and a processing unit communicatively connected to the first storage unit, the processing unit configured to set a set time for the switching unit to initiate a first callback in response to receiving a first call back request from the calling terminal, the processing unit configured to set the set time based on (i) the configuration data regarding accessibility of the target subscriber and (ii) configuration data regarding accessibility of the calling subscriber for the first callback such that the set time is a time at which the target subscriber is reachable for the first callback and the calling subscriber is reachable for the first callback; the switching unit configured to transmit an appointment request to a date planning system or a conference system managed outside of the switching unit, the appointment request identifying the set time for the switching unit to initiate the first callback, the appointment request configured to communicate a meeting appointment for the target subscriber and the calling subscriber at the set time so that the conference system or the date planning system is updateable for the target subscriber and the calling subscriber based on the appointment request. 33. 
The switching unit of claim 32, comprising: a first interface, the first interface configured to receive the configuration data regarding accessibility of the calling subscriber and the configuration data regarding accessibility of the target subscriber; wherein the configuration data regarding accessibility of the calling subscriber comprises activity status data associated with the calling subscriber and the configuration data regarding accessibility of the target subscriber comprises activity status data associated with the target subscriber; and wherein the activity status data associated with the calling subscriber is managed outside the switching unit and wherein the activity status data associated with the target subscriber is managed outside the switching unit. 34. The switching unit of claim 33, wherein the processing unit is configured to manage a callback list of the target subscriber based on the configuration data regarding accessibility of the target subscriber and based on the configuration data regarding accessibility of the calling subscriber such that a priority of the first callback within the callback list is adjustable so that the first callback has a higher priority than a second callback on the callback list based on availability of the calling subscriber that is identifiable from the configuration data regarding accessibility of the calling subscriber; and wherein the processing unit is configured to manage the callback list of the target subscriber based on the configuration data regarding accessibility of the target subscriber and based on the configuration data regarding accessibility of the calling subscriber such that the priority of the first callback within the callback list is adjustable so that the first callback has a lower priority than a third callback on the callback list based on unavailability of the calling subscriber that is identifiable from the configuration data regarding accessibility of the calling subscriber. 35. 
A communication system comprising: the switching unit of claim 32; a calling terminal communicatively connectable to the switching unit; and a target terminal communicatively connectable to the switching unit.
The invention concerns a method for establishing a telecommunication connection between a calling subscriber ( 1 ) and a target subscriber ( 5 ), in which a calling terminal (R) of the calling subscriber ( 1 ) and a target terminal (Z) of the target subscriber ( 5 ) are assigned to a switching unit ( 100 ) and the calling subscriber ( 1 ) enters a callback request (CCBS-REQ, CCNR-REQ) in the switching unit ( 100 ) if the target terminal (Z) is busy or cannot be reached when a first call is made from the calling terminal (R) to the target terminal (Z), in which configuration data regarding the accessibility of the target subscriber ( 5 ) for a callback are entered into the switching unit ( 100 ). Then the time is set for the switching unit ( 100 ) to make the pending callback (RC) to the calling subscriber ( 1 ) in order to generate a second call to the target subscriber ( 5 ) by processing the configuration data concerning the accessibility of the target subscriber ( 5 ) for a callback, at a time when the target subscriber ( 5 ) is reachable for a callback. Next, the switching unit ( 100 ) makes the pending callback (RC) to the calling subscriber ( 1 ) at the set time.1-15. (canceled) 16. 
A method of telecommunication comprising: receiving, by the switching unit, a first callback request that requests a first callback after a first call attempted from a calling terminal of a calling subscriber to a target terminal of a target subscriber is unanswered; receiving, by the switching unit, configuration data regarding accessibility of the target subscriber for the first callback, the configuration data regarding accessibility of the target subscriber for the first callback comprising data regarding a time period in which the target subscriber is reachable for the first callback and a time period in which the target subscriber is unreachable for the first callback; receiving, by the switching unit, configuration data regarding accessibility of the calling subscriber for the first callback; the switching unit setting a set time for the switching unit to initiate the first callback to generate a second call between the calling terminal and the target terminal based on the configuration data concerning the accessibility of the target subscriber and the configuration data concerning the accessibility of the calling subscriber such that the set time is a time at which both the target subscriber and the calling subscriber are reachable for the first callback; the switching unit transmitting an appointment request to a date planning system or a conference system managed outside of the switching unit, the appointment request identifying the set time for the switching unit to initiate the first callback, the appointment request configured to communicate a meeting appointment for the target subscriber and the calling subscriber at the set time so that the conference system or the date planning system is updateable for the target subscriber and the calling subscriber based on the appointment request; the switching unit initiating the first callback at the set time. 17. 
The method of claim 16, wherein the first call attempt is unanswered when the first call attempt is a Call Completion on Busy Subscriber (CCBS) event or a Call Completion on No Reply (CCNR) event. 18. The method of claim 16, wherein the first call attempt is unanswered when the first call attempt is a Call Completion on Busy Subscriber (CCBS) event. 19. The method of claim 16, comprising: managing a callback list of the target subscriber such that a priority of the first callback within the callback list is adjusted so that the first callback has a higher priority than an earlier entered second callback on the callback list based on information relating to at least one of the calling subscriber and the first call. 20. The method of claim 19, wherein the managing of the callback list comprises: assigning a higher priority to the first callback over a second callback of the callback list that was entered in the callback list earlier than the first callback. 21. The method of claim 19, wherein the information relating to the first calling subscriber comprises information identifying an importance of the first calling subscriber. 22. The method of claim 19, comprising: blocking the calling terminal and the target terminal for calls other than a second call of the first callback before the first callback is initiated. 23. The method of claim 16, further comprising: prioritizing the first callback within a list of callbacks of the target subscriber. 24. The method of claim 16, further comprising: sending an acknowledgement message from the switching unit to the calling terminal after the first callback request is received. 25. The method of claim 16, comprising: the date planning system or the conference system updating a profile for the target subscriber and updating a profile for the calling subscriber based on the appointment request received from the switching unit. 26. 
The method of claim 25, wherein the appointment request is configured so the appointment request is enterable as an e-mail or a short message system message by the date planning system or the conference system. 27. The method of claim 16, comprising: entering configuration data regarding accessibility of the calling subscriber into the switching unit via a first interface of the switching unit, the configuration data of the calling subscriber comprising activity status data associated with the calling subscriber; and entering the configuration data regarding accessibility of the target subscriber into the switching unit via the first interface, the configuration data of the target subscriber comprising activity status data associated with the target subscriber; wherein the activity status data associated with the calling subscriber is managed outside the switching unit and wherein the activity status data associated with the target subscriber is managed outside the switching unit. 28. The method of claim 27, wherein the activity status data associated with the calling subscriber comprises data generated by a computer for recording and reporting of activities of the calling subscriber. 29. The method of claim 16, wherein the receiving, by the switching unit, of the configuration data regarding accessibility of the target subscriber comprises the switching unit receiving activity status messages providing presence data relating to the target subscriber from a computer based unit, the computer based unit configured to monitor a presence of the target subscriber and a presence of the calling subscriber. 30. The method of claim 29, wherein at least one of the activity status messages providing presence data relating to the target subscriber indicates one of: that the target subscriber is within an office in which the target terminal is located, and that the target subscriber is within a moving vehicle. 31. 
The method of claim 30, wherein the switching unit manages a callback list of the target subscriber. 32. A switching unit for establishing a telecommunication connection between a calling terminal of a calling subscriber and a target terminal of a target subscriber, the switching unit comprising: a first storage unit for storing configuration data regarding accessibility of the target subscriber for a callback, and a processing unit communicatively connected to the first storage unit, the processing unit configured to set a set time for the switching unit to initiate a first callback in response to receiving a first call back request from the calling terminal, the processing unit configured to set the set time based on (i) the configuration data regarding accessibility of the target subscriber and (ii) configuration data regarding accessibility of the calling subscriber for the first callback such that the set time is a time at which the target subscriber is reachable for the first callback and the calling subscriber is reachable for the first callback; the switching unit configured to transmit an appointment request to a date planning system or a conference system managed outside of the switching unit, the appointment request identifying the set time for the switching unit to initiate the first callback, the appointment request configured to communicate a meeting appointment for the target subscriber and the calling subscriber at the set time so that the conference system or the date planning system is updateable for the target subscriber and the calling subscriber based on the appointment request. 33. 
The switching unit of claim 32, comprising: a first interface, the first interface configured to receive the configuration data regarding accessibility of the calling subscriber and the configuration data regarding accessibility of the target subscriber; wherein the configuration data regarding accessibility of the calling subscriber comprises activity status data associated with the calling subscriber and the configuration data regarding accessibility of the target subscriber comprises activity status data associated with the target subscriber; and wherein the activity status data associated with the calling subscriber is managed outside the switching unit and wherein the activity status data associated with the target subscriber is managed outside the switching unit. 34. The switching unit of claim 33, wherein the processing unit is configured to manage a callback list of the target subscriber based on the configuration data regarding accessibility of the target subscriber and based on the configuration data regarding accessibility of the calling subscriber such that a priority of the first callback within the callback list is adjustable so that the first callback has a higher priority than a second callback on the callback list based on availability of the calling subscriber that is identifiable from the configuration data regarding accessibility of the calling subscriber; and wherein the processing unit is configured to manage the callback list of the target subscriber based on the configuration data regarding accessibility of the target subscriber and based on the configuration data regarding accessibility of the calling subscriber such that the priority of the first callback within the callback list is adjustable so that the first callback has a lower priority than a third callback on the callback list based on unavailability of the calling subscriber that is identifiable from the configuration data regarding accessibility of the calling subscriber. 35. 
A communication system comprising: the switching unit of claim 32; a calling terminal communicatively connectable to the switching unit; and a target terminal communicatively connectable to the switching unit.
2,600