Column summary (Min/Max are value ranges for int64 columns and character-count ranges for stringlengths columns):

Column             Type           Min     Max
Unnamed: 0         int64          0       350k
level_0            int64          0       351k
ApplicationNumber  int64          9.75M   96.1M
ArtUnit            int64          1.6k    3.99k
Abstract           stringlengths  1       8.37k
Claims             stringlengths  3       292k
abstract-claims    stringlengths  68      293k
TechCenter         int64          1.6k    3.9k
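The column summary above reads like a dataset-viewer dump of a pandas frame. As a hypothetical sketch (the dataset's real loading path is not shown here), a frame with the same columns can be built and the reported statistics recovered; the sample numeric values are taken from the three records below, and the text columns are placeholder strings.

```python
import pandas as pd

# Hypothetical frame matching the column summary; ApplicationNumber, ArtUnit
# and TechCenter values come from the three records below, while the text
# columns are placeholders.
df = pd.DataFrame({
    "ApplicationNumber": [14830401, 14252414, 14539409],
    "ArtUnit": [2612, 2612, 2697],
    "Abstract": ["abstract text", "abstract text", "abstract text"],
    "Claims": ["claims text", "claims text", "claims text"],
    "TechCenter": [2600, 2600, 2600],
})

# "stringlengths" columns are summarized by min/max character counts, which
# str.len() reproduces; int64 columns are summarized by min/max values.
print(df["Abstract"].str.len().agg(["min", "max"]).tolist())
print(int(df["ArtUnit"].min()), int(df["ArtUnit"].max()))
```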
Record 10,100 (level_0 10,100): ApplicationNumber 14,830,401; ArtUnit 2,612
A system for augmentation of reality-based components and actions related to controllers. Actions may incorporate augmented reality-based wiring, commissioning, and monitoring. An augmented reality-based application may run in a smart glass, a head-mounted display (HMD), or a smart phone, and can augment, for example, a building management system controller and help in wiring, monitoring, and commissioning of the controller.
1. A system having an augmented reality based application for controllers, comprising: a device that captures a real-time image of a building management system controller, and information such as a bar or quick response code or position information to determine a unique identification (ID) of a controller; and wherein: the unique ID is used to request data from a web service hosted as part of a building supervisory or commissioning tool, comprising device details, and data points from a building management system database; upon receipt of data, a real-time image of the building management system controller is presented on a display; an interaction is made with an overlay of objects on the real time image to retrieve additional data about the building management system controller; and the additional data are incorporated in an actuator-in-hand to tune the building management system controller. 2. The system of claim 1, further comprising a job site under supervision of the building management system controller that has at least one plant controller and one or more unitary controllers that control applications in a heating, ventilation and air conditioning (HVAC) system. 3. The system of claim 1, further comprising: a mechanism that converts a voice input to a value change that augments the real-time image on the display; and wherein an augmented reality based application runs in a mechanism selected from a group comprising a smart glass, a head mounted display, a tablet, a notebook, a pad, and a smart phone. 4. The system of claim 1, wherein: the building management system controller is connected to one or more field devices through input and output (I/O) points of the building management system controller and the one or more field devices; and when a field device is installed and connected to the I/O data points of a controller, proper parameters of the field device are verified by changing values at the I/O data points. 5.
The system of claim 1, wherein: the real-time image of a controller comprises a graphical or a textual element positioned in the display adjacent to I/O terminals; and an overlay of building management data fetched with a web service is displayed concurrently with the real-time image of the building management system controller. 6. The system of claim 5, wherein: the overlay of building management data comprises one or more items selected from a group comprising objects, selectable icons, selectable buttons and graphical user interfaces; the one or more items enable a user to retrieve additional building management data; and building management data comprise case-driven information that provides a user the actuator-in-hand to tune a controller of equipment. 7. The system of claim 4, wherein: values of the data points are changed by clicking on values shown on a graphical user interface; or values of the data points are changed with a voice input to a mechanism for converting sound of voice inputs to actions that change values of the data points. 8. A mechanism having augmented reality-based wiring of controllers, comprising: a building management system having a controller; and a portable device; and wherein: the portable device comprises an application to augment a wiring process of the controller in the building management system; a sequence of operation for the wiring process of the controller is displayed through a web service on the portable device; the sequence of operation comprises a download of application files from an engineering tool and the directions for the wiring process; and the wiring process comprises connecting input and output (I/O) terminals of the controller to field devices. 9. The mechanism of claim 8, wherein the application to augment the wiring process can be run in a device selected from a group comprising a smart glass, a head mounted display, a tablet, a notebook, a pad, and a smart phone. 10. 
The mechanism of claim 8, wherein: the application is run in the device to capture a real-time image of the controller; the real-time image and other information are to determine an identification of the controller; and the other information is from one or more items selected from a group comprising bar codes, quick response codes, near field communications, and web services. 11. The mechanism of claim 10, wherein: the identification of the controller is used to request configuration data of the controller through a web service; and the configuration data are selected from a group comprising device details, one or more field devices, and terminal assignments for the wiring process. 12. A method of augmented reality based activity for a controller, comprising: augmenting a reality-based application; using the reality-based application in a portable device to augment a wiring process of a controller in a building management system; and displaying a workflow with a sequence of operation through a web service on the portable device; and wherein: the workflow with the sequence of operation is based on a configuration or type of the controller; and the workflow with the sequence of operation augments the wiring process of the controller. 13. The method of claim 12, further comprising: downloading application files to the portable device from an engineering tool to provide a basis for the wiring; wiring the controller with connections to one or more field devices through input/output terminals or input/output modules of the controller; and verifying each connection to the one or more field devices from the controller by changing a value of a data point at each input/output terminal of the controller. 14. The method of claim 12, wherein the portable device is selected from a group comprising a smart glass, a head mounted display, a smart phone, a pad, a tablet, and a notebook. 15.
The method of claim 12, further comprising: operating the reality-based application in the portable device to capture a real-time image of the controller; obtaining other information about one or more items selected from a group comprising bar codes, quick response codes, and near field communication tags; using the real-time image and the other information with the portable device to determine a unique identification of the controller; and using the unique identification to request configuration data about the controller through a web service; and wherein: if the configuration data are not available from the controller, then engineering data are requested from the web service hosted as a part of an engineering tool; the engineering data comprise details of one or more field devices, the portable device, and assignments of the terminals of the controller; and the one or more field devices support a heating, ventilation and air conditioning (HVAC) system. 16. The method of claim 12, further comprising: generating the workflow with the sequence of operation for augmenting the wiring process based on the configuration or type of the controller; and wherein the workflow with the sequence of operation augments the displaying of the portable device concurrently of the real-time image of the controller with a graphical and/or textual element positioned in the display near the input/output terminal providing wiring information. 17. The method of claim 16, wherein: the real-time image of the controller augments the input/output terminals and shows live data point values at the input/output terminals; a live data point value can be selected and be changed; and a live data point value can be changed with voice commands via a speech processor to the portable device. 18.
The method of claim 15, wherein: during data point creation and terminal assignment with the engineering tool, details about the field devices which are connected to the data points are entered by a user; and based on the configuration, a table with virtually all required connections is created during an application by the engineering tool and saved as a file in a binary format where the application files are stored. 19. The method of claim 18, wherein: the file with the table is downloaded to the controller as part of a normal download of an application; when the controller has the file with the table, the controller exposes data of the file through a web service over a network; and when the controller cannot expose the data of the file, then the file is exposed by the engineering tool through a web service. 20. The method of claim 18, wherein: based on the table with virtually all required connections, the workflow can assist a wiring process with step-by-step instructions; and the instructions can be displayed on a screen of the display of the portable device or played as a voice output on the portable device.
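Claims 12-20 describe a pipeline: identify the controller from a scanned code, fetch its terminal-assignment table (from the controller itself or, as a fallback, from the engineering tool), and turn that table into step-by-step wiring instructions. The sketch below is a minimal illustration of that flow; every function name, ID, and the data layout are assumptions, and the web-service call is stubbed with a dictionary lookup.

```python
# Minimal sketch of the claimed wiring workflow (claims 12-20). All names,
# IDs, and the data layout are illustrative assumptions; the web service is
# stubbed with a dictionary lookup.

def identify_controller(code_payload: str) -> str:
    """Derive the unique controller ID from a scanned bar/QR code payload."""
    return code_payload.strip().upper()

def fetch_connection_table(controller_id: str, engineering_data: dict) -> dict:
    """Stand-in for the web-service request; per claim 15, engineering data
    serve as the fallback source when the controller has no configuration."""
    return engineering_data.get(controller_id, {})

def wiring_steps(table: dict) -> list:
    """Turn terminal assignments into ordered step-by-step instructions
    (claim 20), one step per I/O terminal."""
    return [f"Connect {device} to terminal {terminal}"
            for terminal, device in sorted(table.get("terminals", {}).items())]

# Assumed engineering-tool export for one controller.
engineering_data = {
    "CTRL-42": {"terminals": {"AI1": "supply-air temperature sensor",
                              "AO1": "damper actuator"}},
}

table = fetch_connection_table(identify_controller(" ctrl-42 "), engineering_data)
for step in wiring_steps(table):
    print(step)
```

The instructions could equally be routed to a text-to-speech output on the portable device, as claim 20 allows.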
TechCenter: 2,600
Record 10,101 (level_0 10,101): ApplicationNumber 14,252,414; ArtUnit 2,612
Architecture that enables the representation of labels as objects in the 3D (three-dimensional) world, with size, elevation, and orientation. Logical hierarchies in the world are represented by the placement and prominence of labels in the 3D world scene. For example, state labels are positioned higher and larger than city labels. The illusion of the label as a fixed element in the 3D model is maintained during manipulations. Additionally, movement is provided to ensure legibility, but is delayed until the user's input is quiescent. Moreover, labels along roads, for example, can be oriented to stand vertically along a curve.
1. A system, comprising: a drawing component configured to draw a label as a 3D (three-dimensional) label object in a 3D scene and according to a label orientation; a hierarchy component configured to input logical hierarchical information to the drawing component to draw the 3D label object in the 3D scene according to a logical hierarchy; and at least one hardware processor configured to execute computer-executable instructions in a memory associated with the drawing component and the hierarchy component. 2. The system of claim 1, wherein the logical hierarchical information indicates label size of the label object relative to other label objects. 3. The system of claim 1, wherein the 3D label object is drawn to follow a contour of an associated scene object identified with the label object. 4. The system of claim 1, wherein the label orientation of the label object is maintained in response to manipulation of the 3D scene. 5. The system of claim 1, wherein the label object is drawn as oriented vertically on an associated line of a map. 6. The system of claim 1, wherein the label object is re-oriented to a new readable orientation in response to a predetermined delay following a manipulation of the scene. 7. The system of claim 1, wherein the labels are projected in a view plane of the 3D scene. 8. The system of claim 1, wherein the 3D label objects are drawn in the 3D scene based on properties of at least size, elevation, and orientation. 9. A method, comprising acts of: receiving a multi-dimensional scene having scene objects represented as 3D scene objects; assigning and presenting labels in association with the 3D scene objects as 3D label objects, the 3D label objects characterized in the scene with at least one of size, elevation, or orientation; and configuring at least one hardware processor to execute instructions in a memory related to the acts of receiving and assigning. 10. 
The method of claim 9, further comprising representing size of a 3D label object as different from another label size according to a label hierarchy. 11. The method of claim 9, further comprising drawing a 3D label object in alignment with a contour in the scene, which is a 3D scene. 12. The method of claim 9, further comprising maintaining orientation of a 3D label object in response to a change of the scene. 13. The method of claim 9, further comprising projecting the labels in a view plane of the scene. 14. The method of claim 9, further comprising representing the 3D labels according to graphical emphasis that indicates a logical hierarchy. 15. The method of claim 9, further comprising, in response to a zoom-in operation of the scene from a given elevation and elevated 3D label object, phasing out the elevated 3D label object from view and drawing a new 3D label object associated with a lower elevation. 16. A computer-readable storage medium comprising computer-executable instructions that, when executed by a hardware processor, cause the processor to perform acts of: receiving a 3D scene having 3D scene objects; and drawing labels into the 3D scene as 3D label objects in association with one or more scene objects, the 3D label objects drawn according to a logical hierarchy characterized by label size, label elevation, and label orientation. 17. The computer-readable storage medium of claim 16, further comprising representing size of a 3D label relative to distance of the 3D label object from a virtual camera. 18. The computer-readable storage medium of claim 16, further comprising drawing a 3D label object in alignment with a contour in the 3D scene and in a readable orientation to a virtual camera from which the 3D scene is viewed. 19.
The computer-readable storage medium of claim 16, further comprising re-orienting the 3D label objects to an orientation that ranges between a vertical orientation and a horizontal orientation, the 3D label objects re-oriented according to a stepped movement and relative to an acquiescence state. 20. The computer-readable storage medium of claim 16, further comprising drawing 3D label objects in association with curved 3D scene objects and with curvature that corresponds to curvature of the curved 3D scene objects.
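These claims combine two mechanisms: label prominence (size, elevation) driven by a logical hierarchy, so that state labels render higher and larger than city labels, and re-orientation deferred until the user's input has been quiescent for a predetermined delay (claim 6). A toy sketch, with the hierarchy table, scale factors, and delay invented for illustration:

```python
# Toy sketch of hierarchy-driven label prominence and deferred re-orientation.
# The hierarchy levels, scale factors, and delay are invented for illustration.

HIERARCHY = {"state": 0, "city": 1, "street": 2}  # 0 = most prominent

def label_style(kind: str, base_size: float = 48.0,
                base_elevation: float = 300.0) -> dict:
    """Size and elevation fall off with depth in the logical hierarchy,
    so a state label is drawn higher and larger than a city label."""
    level = HIERARCHY[kind]
    return {"size": base_size / (2 ** level),
            "elevation": base_elevation / (level + 1)}

def should_reorient(ms_since_last_input: int, quiescence_ms: int = 500) -> bool:
    """Re-orient a label to a readable orientation only after the user's
    input has been quiet for the predetermined delay (claim 6)."""
    return ms_since_last_input >= quiescence_ms

state, city = label_style("state"), label_style("city")
print(state, city)
print(should_reorient(120), should_reorient(900))
```

Deferring the movement this way preserves the illusion that the label is a fixed element of the 3D model while the scene is being manipulated.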
TechCenter: 2,600
Record 10,102 (level_0 10,102): ApplicationNumber 14,539,409; ArtUnit 2,697
One embodiment provides a method, including: activating, on a device, a world view camera; obtaining, using the world view camera, world view image data; activating, on the device, a front view camera; obtaining, using the front view camera, front view image data; and providing, on a display of the device, a view displaying the front view image data and the world view image data. Other aspects are described and claimed.
1. A method, comprising: activating, on a device, a world view camera; obtaining, using the world view camera, world view image data; activating, on the device, a front view camera; obtaining, using the front view camera, front view image data; and providing, on a display of the device, a view displaying the front view image data and the world view image data. 2. The method of claim 1, further comprising storing image data comprising both the front view image data and the world view image data. 3. The method of claim 1, wherein the providing comprises identifying one or more foreground objects in the front view image data. 4. The method of claim 3, wherein the identifying comprises one or more of facial identification and edge detection. 5. The method of claim 1, further comprising receiving user input to change the view. 6. The method of claim 5, wherein the user input is selected from the group consisting of contrast settings input, brightness settings input, and color correction input. 7. The method of claim 5, wherein the user input is image manipulation input moving one or more foreground objects relative to the world view image data in the view. 8. The method of claim 7, wherein the image manipulation input comprises touch screen input. 10. The method of claim 1, wherein the view is a preview displaying the front view image data and the world view image data. 11. The method of claim 1, further comprising: identifying two or more foreground objects; and identifying two or more background objects; wherein the providing comprises automatically offsetting one or more of the two or more foreground objects and the two or more background objects from one another. 12.
A device, comprising: a front view camera; a world view camera; a display device; a processor; and a memory device that stores instructions executable by the processor to: activate the world view camera; obtain, using the world view camera, world view image data; activate the front view camera; obtain, using the front view camera, front view image data; and provide, on a display of the device, a view displaying the front view image data and the world view image data. 13. The device of claim 12, wherein the instructions are further executable by the processor to store image data comprising both the front view image data and the world view image data. 14. The device of claim 12, wherein to provide a view comprises identifying one or more foreground objects in the front view image data. 15. The device of claim 14, wherein the identifying comprises one or more of facial identification and edge detection. 16. The device of claim 12, wherein the instructions are further executable by the processor to process user input to change the view. 17. The device of claim 16, wherein the user input is selected from the group consisting of contrast settings input, brightness settings input, and color correction input. 18. The device of claim 16, wherein the user input is image manipulation input moving one or more foreground objects relative to the world view image data in the view. 19. The device of claim 17, wherein the image manipulation input comprises touch screen input. 20. The device of claim 12, wherein the view is a preview displaying the front view image data and the world view image data. 21. 
A product, comprising: a storage device having code stored therewith, the code being executable by a processor and comprising: code that activates, on a device, a world view camera; code that obtains, using the world view camera, world view image data; code that activates, on the device, a front view camera; code that obtains, using the front view camera, front view image data; and code that provides, on a display of the device, a view displaying the front view image data and the world view image data.
One embodiment provides a method, including: activating, on a device, a world view camera; obtaining, using the world view camera, world view image data; activating, on the device, a front view camera; obtaining, using the front view camera, front view image data; and providing, on a display of the device, a view displaying the front view image data and the world view image data. Other aspects are described and claimed.
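The compositing that the method claims describe (foreground objects identified in the front view image data overlaid on the world view image data) can be sketched in a few lines. This is a hypothetical illustration only: images are plain nested lists of pixel values, and the `is_foreground` threshold test is a simple stand-in for the facial identification or edge detection named in the claims; none of these names or representations come from the patent itself.

```python
# Hypothetical sketch of the claimed composite view: each front-view pixel
# classified as foreground replaces the corresponding world-view pixel.
# The pixel representation and the foreground test are assumptions.

def composite_view(world, front, is_foreground):
    """Overlay foreground pixels from `front` onto `world`.

    `world` and `front` are equal-sized 2D grids (lists of lists) of pixel
    values; `is_foreground` is a predicate standing in for foreground
    detection (e.g. facial identification or edge detection).
    """
    return [
        [f if is_foreground(f) else w for w, f in zip(wrow, frow)]
        for wrow, frow in zip(world, front)
    ]


# Example: bright front-view pixels (value > 5) are treated as foreground.
view = composite_view([[1, 1], [1, 1]], [[0, 9], [9, 0]], lambda p: p > 5)
```

A real implementation would operate on camera frames and a segmentation mask rather than a per-pixel threshold, but the data flow (front view classified, then merged over the world view) is the same.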
2,600
10,103
10,103
15,704,043
2,619
A system, method and apparatus for rapid film pre-visualization are provided, including a motion capture component, a virtual digital rendering component configured to receive data from the motion sensors and to render motion in a three dimensional virtual environment, a controller component configured to allow a director to navigate within the three dimensional virtual environment to control the visual aspects of one or more shots within the three dimensional virtual environment, and a director's station providing a modification point of a data pipeline input to the director's station, the data pipeline input comprising data from data capture through virtual digital rendering.
1. A system for rapid film pre-visualization, comprising: a motion capture component comprising motion capture sensors and a virtual digital rendering component configured to receive data from the motion sensors and to render motion in a three dimensional virtual environment according to said received data; a display component configured to display an output of the virtual digital rendering component; a controller component, configured to interface with the virtual digital rendering component and allow a director to navigate within the three dimensional virtual environment to control the visual aspects of one or more shots within the three dimensional virtual environment; and a director's station, the director's station at least including said display component and said controller component, the director's station providing a modification point of a data pipeline input to the director's station, the data pipeline input comprising data from data capture through virtual digital rendering. 2. A system in accordance with claim 1, wherein said motion capture component is an RF motion capture component that detects accelerometers in a wearable suit within an RF grid. 3. A system in accordance with claim 1, wherein said virtual digital rendering component comprises a MAYA platform. 4. A system in accordance with claim 1, wherein said controller is configured to act as a virtual camera in the three dimensional virtual environment, the controller configured with plural handheld remote components. 5. A system in accordance with claim 4, wherein said virtual camera includes plural handheld remote components configured to navigate within the three dimensional virtual environment to control the virtual camera using film camera controls within the three dimensional virtual space. 6. A system in accordance with claim 1, wherein at least one handheld remote component includes a toggle allowing for at least six degrees of motional control. 7. 
A system in accordance with claim 1, wherein at least one handheld remote component includes a handheld remote that is sensitive to a reference magnetic field to provide real time positional information about the remote control relative to the reference magnetic field. 8. A system in accordance with claim 1, wherein two handheld remote components are coupled together and wherein said controller component includes film camera controls including pan, tilt and zoom controls. 9. A system in accordance with claim 8, further comprising a view screen attached to said handheld remote components, the view screen configured to act as a virtual viewfinder for the virtual camera. 10. A system in accordance with claim 1, wherein said controller component is configured to navigate as a virtual camera in said three dimensional virtual environment in real time to provide pre-visualization for a film. 11. A system in accordance with claim 10, wherein said display component comprises plural displays configured to provide the director with immersive image data such that the director can navigate and control said virtual camera therein. 12. A system in accordance with claim 1, further comprising a virtual world component configured to provide the three dimensional virtual environment with realistic world detail. 13. A system in accordance with claim 1, wherein said director's station is configured to provide an export of director pre-visualization to a storage component. 14. A system in accordance with claim 1, wherein said data follows a data pipeline from data capture to data rendering to the director's station, with further manipulation of data responsive to the director's controller. 15. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide varying levels of detail in said three dimensional virtual environment for pre-visualization approval. 16. 
A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide rendering of flat-shaded blasts in said three dimensional virtual environment for pre-visualization approval. 17. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide additional shading and stereoscopic processing to rendered figures derived from said received data in said three dimensional virtual environment for pre-visualization approval. 18. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide additional detail development in said three dimensional virtual environment for pre-visualization approval. 19. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide virtual terrain in said three dimensional virtual environment for pre-visualization approval. 20. A system in accordance with claim 1, wherein said virtual digital rendering component is configured to provide a level of detail that is representative of actual film production in said three dimensional virtual environment for pre-visualization approval. 21. 
A method for rapid film pre-visualization, comprising: capturing position and motion data via a motion capture system including plural sensor detectors; digitally rendering, via a virtual digital rendering component, said captured data and re-creating motion of the sensors in a three dimensional virtual environment; displaying an output of the position and motion in the three dimensional virtual environment via a display component; and interfacing with the virtual digital rendering component via a controller component at a director's station with navigation via the controller component within the three dimensional virtual environment to control the visual aspects of one or more shots within the three dimensional virtual environment, the director's station at least including said display component and said controller component, the director's station providing a modification point of a data pipeline input to the director's station, the data pipeline input comprising data from data capture through virtual digital rendering. 22. A method in accordance with claim 21, further comprising navigation control of said controller component in six degrees of movement and film camera control including pan, tilt and zoom controls. 23. A method in accordance with claim 21, further comprising export of director pre-visualization to a storage component from said director's station. 24. A method in accordance with claim 21, further comprising controller component navigation as a virtual camera in said three dimensional virtual environment in real time to provide pre-visualization for a film. 25. A method in accordance with claim 24, wherein said display component comprises plural displays configured to provide the director with immersive image data such that the director can navigate and control said virtual camera therein. 26. 
A method in accordance with claim 21, further comprising providing the three dimensional virtual environment with realistic world detail via a virtual world component. 27. A method in accordance with claim 21, further comprising providing an export of director pre-visualization to a storage component via said director's station. 28. A method in accordance with claim 21, wherein said data follows a data pipeline from data capture to data rendering to the director's station, with further manipulation of data responsive to the director's controller. 29. A method in accordance with claim 21, further comprising providing varying levels of detail in said three dimensional virtual environment via said virtual digital rendering component for pre-visualization approval. 30. A method in accordance with claim 21, further comprising rendering of flat-shaded blasts in said three dimensional virtual environment via said virtual digital rendering component for pre-visualization approval. 31. A method in accordance with claim 21, further comprising providing additional shading and stereoscopic processing to rendered figures derived from said received data in said three dimensional virtual environment via said virtual digital rendering component for pre-visualization approval. 32. A method in accordance with claim 21, further comprising providing additional detail development in said three dimensional virtual environment via said virtual digital rendering component for pre-visualization approval. 33. A method in accordance with claim 21, further comprising providing virtual terrain in said three dimensional virtual environment via said virtual digital rendering component for pre-visualization approval.
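The film camera controls these claims repeatedly name (pan, tilt and zoom, navigated by the controller component in the three dimensional virtual environment) can be illustrated as a small camera-state object. This is a minimal sketch under stated assumptions: the state representation, the wrap-around for pan, and the clamping ranges for tilt and zoom are all invented for illustration and do not come from the patent.

```python
# Minimal, assumed sketch of a virtual camera state driven by the
# controller component's pan/tilt/zoom inputs. Wrap and clamp ranges
# are illustrative choices, not part of the claimed system.

class VirtualCamera:
    """Virtual camera navigated within the 3D virtual environment."""

    def __init__(self):
        self.pan = 0.0    # degrees around the vertical axis, wraps at 360
        self.tilt = 0.0   # degrees around the horizontal axis, clamped
        self.zoom = 1.0   # focal-length multiplier, kept positive

    def apply(self, d_pan=0.0, d_tilt=0.0, d_zoom=0.0):
        """Apply one controller input and return the new (pan, tilt, zoom)."""
        self.pan = (self.pan + d_pan) % 360.0
        self.tilt = max(-90.0, min(90.0, self.tilt + d_tilt))
        self.zoom = max(0.1, self.zoom + d_zoom)
        return self.pan, self.tilt, self.zoom


cam = VirtualCamera()
cam.apply(d_pan=370.0)   # pan wraps past 360 degrees
```

In the claimed system this state would feed the virtual digital rendering component (e.g. a MAYA platform) each frame, so the director's handheld inputs move the virtual camera in real time.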
2,600
10,104
10,104
14,816,135
2,625
An electronic device may include a touchscreen having an array of finger touch sensitive areas, and a controller coupled to the touchscreen. The controller is configured to read touch values from the array of finger touch sensitive areas, and determine when the read touch values define a valid single finger touch pattern having lower touch values within adjacent higher touch values, and, if so, treating the read touch values as being representative of a single finger touch, and, if not, causing a finger separation determination.
1. An electronic device comprising: a touchscreen comprising an array of finger touch sensitive areas; and a controller coupled to said touchscreen and configured to read touch values from the array of finger touch sensitive areas, and determine when the read touch values define a valid single finger touch pattern having lower touch values within adjacent higher touch values, and, if so, treating the read touch values as being representative of a single finger touch, and, if not, causing a finger separation determination. 2. The electronic device of claim 1 wherein said controller is configured to determine when the read touch values define the valid single finger touch pattern by at least determining when the higher touch values define a complete ring. 3. The electronic device of claim 1 wherein said controller is configured to determine when the read touch values define an invalid single finger touch pattern by at least determining when the higher touch values define a partial ring. 4. The electronic device of claim 3 wherein said controller is configured to determine the partial ring based upon a first region of the array of finger touch sensitive areas having the lower touch values not lying completely within a second region of the array of finger touch sensitive areas having the higher touch values. 5. The electronic device of claim 3 wherein said controller is configured to determine the partial ring based upon corner areas of a first region of the array of finger touch sensitive areas having the lower touch values not lying completely within a second region of the array of finger touch sensitive areas having the higher touch values. 6. The electronic device of claim 1 wherein said controller is configured to not perform the finger separation determination when the read touch values define the valid single finger touch pattern. 7. 
The electronic device of claim 1 further comprising a processor and a memory coupled thereto; and wherein the processor is coupled to the controller. 8. The electronic device of claim 1 wherein each finger touch sensitive area of the array thereof comprises a capacitive finger sensing pixel. 9. The electronic device of claim 1 further comprising a housing carrying the touchscreen and the controller. 10. An electronic device comprising: a processor and associated memory coupled thereto; and a display coupled to said processor and comprising a touchscreen comprising an array of finger touch sensitive areas, and a controller coupled to said touchscreen and configured to read touch values from the array of finger touch sensitive areas, and determine when the read touch values define a valid single finger touch pattern having lower touch values within adjacent higher touch values, and, if so, treating the read touch values as being representative of a single finger touch, and, if not, causing a finger separation determination. 11. The electronic device of claim 10 wherein said controller is configured to determine when the read touch values define the valid single finger touch pattern by at least determining when the higher touch values define a complete ring. 12. The electronic device of claim 10 wherein said controller is configured to determine when the read touch values define an invalid single finger touch pattern by at least determining when the higher touch values define a partial ring. 13. The electronic device of claim 12 wherein said controller is configured to determine the partial ring based upon a first region of the array of finger touch sensitive areas having the lower touch values not lying completely within a second region of the array of finger touch sensitive areas having the higher touch values. 14. 
The electronic device of claim 12 wherein said controller is configured to determine the partial ring based upon corner areas of a first region of the array of finger touch sensitive areas having the lower touch values not lying completely within a second region of the array of finger touch sensitive areas having the higher touch values. 15. The electronic device of claim 10 wherein said controller is configured to not perform the finger separation determination when the read touch values define the valid single finger touch pattern. 16. A method for operating an electronic device comprising a touchscreen having an array of finger touch sensitive areas, and a controller coupled thereto, the method comprising: operating the controller to read touch values from the array of finger touch sensitive areas; and operating the controller to determine when the read touch values define a valid single finger touch pattern having lower touch values within adjacent higher touch values, and, if so, treating the read touch values as being representative of a single finger touch, and, if not, causing a finger separation determination. 17. The method of claim 16 further comprising operating the controller to determine when the read touch values define the valid single finger touch pattern by at least determining when the higher touch values define a complete ring. 18. The method of claim 16 further comprising operating the controller to determine when the read touch values define an invalid single finger touch pattern by at least determining when the higher touch values define a partial ring. 19. The method of claim 18 further comprising operating the controller to determine the partial ring based upon a first region of the array of finger touch sensitive areas having the lower touch values not lying completely within a second region of the array of finger touch sensitive areas having the higher touch values. 20. 
The method of claim 18 further comprising operating the controller to determine the partial ring based upon corner areas of a first region of the array of finger touch sensitive areas having the lower touch values not lying completely within a second region of the array of finger touch sensitive areas having the higher touch values. 21. The method of claim 16 further comprising operating the controller to not perform the finger separation determination when the read touch values define the valid single finger touch pattern.
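The validity test these claims describe (lower touch values lying within adjacent higher touch values, with a complete ring of higher values marking a valid single finger touch and a partial ring triggering the finger separation determination) lends itself to a short sketch. This is an assumed illustration, not the patented implementation: the grid is taken to be already cropped to the candidate touch region, and the function name, threshold parameter and ring test are all invented for the example.

```python
# Hypothetical sketch of the claimed single-finger-touch check: the read
# touch values form a valid pattern when the higher values completely ring
# the lower interior values. The cropping assumption and threshold split
# between "higher" and "lower" values are illustrative choices.

def is_valid_single_finger_touch(grid, threshold):
    """Return True when every lower touch value (below `threshold`) has a
    higher value above, below, left and right of it along its row and
    column, i.e. the higher values form a complete ring; a partial ring
    returns False, which would trigger the finger separation determination."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold:
                continue  # cell belongs to the high-value ring itself
            above = any(grid[i][c] >= threshold for i in range(r))
            below = any(grid[i][c] >= threshold for i in range(r + 1, rows))
            left = any(grid[r][j] >= threshold for j in range(c))
            right = any(grid[r][j] >= threshold for j in range(c + 1, cols))
            if not (above and below and left and right):
                return False  # lower value escapes the ring: partial ring
    return True


# A complete ring of 9s around a low center is a valid single finger touch;
# a gap in the ring (the top-left corner here) is not.
ok = is_valid_single_finger_touch([[9, 9, 9], [9, 2, 9], [9, 9, 9]], 5)
```

The corner-area variant in the dependent claims could be approximated by running the same containment test only on the corner cells of the lower-value region.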
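The ring test recited in the claims above can be sketched in Python: a touch blob is treated as a valid single-finger pattern when its lower-value core is completely enclosed by cells at or above a high threshold (a complete ring), while a core that leaks to the grid border through a gap in the ring (a partial ring) would cause a finger separation determination. The function name, the two thresholds, and the flood-fill enclosure check are illustrative assumptions, not the patented implementation.

```python
from collections import deque

def is_valid_single_touch(grid, high_thresh, low_thresh):
    """Return True when the cells whose values fall between low_thresh and
    high_thresh (the blob's lower-value core) are completely enclosed by
    cells at or above high_thresh (a 'complete ring' of higher values)."""
    rows, cols = len(grid), len(grid[0])
    high = [[v >= high_thresh for v in row] for row in grid]
    core = [(r, c) for r in range(rows) for c in range(cols)
            if low_thresh <= grid[r][c] < high_thresh]
    if not core:
        return False  # no lower-value core: not the expected single-finger pattern
    # Flood-fill from the grid border through non-high cells; any core cell
    # reached this way is not enclosed, so the ring is only partial.
    seen = set()
    q = deque((r, c) for r in range(rows) for c in range(cols)
              if (r in (0, rows - 1) or c in (0, cols - 1)) and not high[r][c])
    seen.update(q)
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not high[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append((nr, nc))
    return all(cell not in seen for cell in core)
```

With a 5x5 grid whose center value 3 sits inside a ring of 9s, the pattern validates; knocking one cell out of the ring lets the flood fill reach the core, so the pattern is rejected and separation logic would run instead.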
2,600
10,105
10,105
14,776,053
2,646
There are provided measures for Wi-Fi support awareness. Such measures exemplarily include dynamically determining capability for a radio access technology while being connected to a communication network, and transmitting, via the connection to the communication network, a message comprising an information element indicative of the capability for the radio access technology.
1. A method comprising dynamically determining capability for a radio access technology while being connected to a communication network, and transmitting, via said connection to said communication network, a message comprising an information element indicative of said capability for said radio access technology only in case of being able to access networks using the radio access technology. 2. The method according to claim 1, wherein said capability is at least one of support for said radio access technology, availability of service subscription for said radio access technology, and support for a network side mechanism in relation to said radio access technology. 3. The method according to claim 1, comprising listening for networks utilizing said radio access technology, and checking whether known and accessible networks utilizing said radio access technology are listened, wherein said capability comprises that known and accessible networks utilizing said radio access technology are listened. 4. The method according to claim 2, wherein said network side mechanism in relation to said radio access technology is an interworking of a radio access network with said radio access technology. 5. The method according to claim 1, wherein said radio access technology is Wi-Fi, or said radio access technology is a 3GPP radio access technology, or said radio access technology is a certain frequency band of a 3GPP radio access technology. 6. 
The method according to claim 2, wherein in relation to said determining, the method further comprises at least one of receiving, via said connection to said communication network, access network discovery and selection function information as information regarding service subscription for said radio access technology, receiving, via said connection to said communication network, hotspot 2.0 information as information regarding service subscription for said radio access technology, and obtaining settings related to said radio access technology as information regarding service subscription for said radio access technology. 7-9. (canceled) 10. A method comprising receiving a message comprising an information element indicative of capability for a radio access technology associated with a user equipment (UE), wherein the message is transmitted only in case the UE is able to access networks using the radio access technology, and considering a sender of said message for interworking of a radio access network with said radio access technology in terms of said user equipment based on said capability. 11. The method according to claim 10, wherein said capability is at least one of support for said radio access technology, availability of service subscription for said radio access technology, support for a network side mechanism in relation to said radio access technology, and that known and accessible networks utilizing said radio access technology are listened. 12. The method according to claim 11, wherein said network side mechanism in relation to said radio access technology is an interworking of a radio access network with said radio access technology. 13. The method according to claim 10, wherein said radio access technology is Wi-Fi, or said radio access technology is a 3GPP radio access technology, or said radio access technology is a certain frequency band of a 3GPP radio access technology. 14. 
The method according to claim 10, further comprising at least one of transmitting access network discovery and selection function information as information regarding service subscription for said radio access technology, and transmitting hotspot 2.0 information as information regarding service subscription for said radio access technology. 15-29. (canceled) 30. An apparatus comprising: at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code, with the at least one processor, causing the apparatus at least to dynamically determine capability for a radio access technology while being connected to a communication network, and transmit, via said connection to said communication network, a message comprising an information element indicative of said capability for said radio access technology only in case of being able to access networks using the radio access technology. 31. The apparatus according to claim 30, wherein said capability is at least one of support for said radio access technology, availability of service subscription for said radio access technology, and support for a network side mechanism in relation to said radio access technology. 32. The apparatus according to claim 30, wherein the at least one memory and the computer program code are further configured, with the at least one processor, to cause the apparatus at least to: listen for networks utilizing said radio access technology, and check whether known and accessible networks utilizing said radio access technology are listened, wherein said capability comprises that known and accessible networks utilizing said radio access technology are listened. 33. (canceled) 34. The apparatus according to claim 30, wherein said radio access technology is Wi-Fi, or said radio access technology is a 3GPP radio access technology, or said radio access technology is a certain frequency band of a 3GPP radio access technology. 35. 
The apparatus according to claim 31, wherein the at least one memory and the computer program code are further configured, with the at least one processor, to cause the apparatus at least to: receive, via said connection to said communication network, access network discovery and selection function information as information regarding service subscription for said radio access technology, receive, via said connection to said communication network, hotspot 2.0 information as information regarding service subscription for said radio access technology, and obtain settings related to said radio access technology as information regarding service subscription for said radio access technology. 36-38. (canceled) 39. An apparatus comprising: at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code, with the at least one processor, causing the apparatus at least to receive a message comprising an information element indicative of capability for a radio access technology associated with a user equipment (UE), and consider a sender of said message for interworking of a radio access network with said radio access technology in terms of said user equipment based on said capability. 40. The apparatus according to claim 39, wherein said capability is at least one of support for said radio access technology, availability of service subscription for said radio access technology, support for a network side mechanism in relation to said radio access technology, and that known and accessible networks utilizing said radio access technology are listened. 41. (canceled) 42. The apparatus according to claim 39, wherein said radio access technology is Wi-Fi, or said radio access technology is a 3GPP radio access technology, or said radio access technology is a certain frequency band of a 3GPP radio access technology. 43. 
The apparatus according to claim 39, wherein the at least one memory and the computer program code are further configured, with the at least one processor, to cause the apparatus at least to: transmit access network discovery and selection function information as information regarding service subscription for said radio access technology, and transmit hotspot 2.0 information as information regarding service subscription for said radio access technology. 44-60. (canceled)
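Claims 1 and 3 above describe a UE that transmits the capability information element only when it can actually access a network using the radio access technology: it supports the RAT and at least one scanned network is both known and accessible. A minimal sketch of that gating logic, with the function name, the SSID-set representation, and the message fields as illustrative assumptions:

```python
def capability_message(supports_rat, known_ssids, scanned_ssids):
    """Build the capability information element only when the UE is able to
    access a network using the RAT (claim 1), i.e. it supports the RAT and a
    known, accessible network shows up in the scan results (claim 3).
    Returns None to suppress the message entirely otherwise."""
    accessible = supports_rat and bool(known_ssids & scanned_ssids)
    if not accessible:
        return None  # no message: the network side never sees the IE
    return {"ie": "rat-capability", "rat": "wifi", "capable": True}
```

The network-side receiver (claim 10) would then consider any sender of this message for RAN interworking with the RAT; a UE that merely supports Wi-Fi but sees no accessible network stays silent.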
2,600
10,106
10,106
15,631,702
2,632
A PLC modem ( 131 - 133 ) is prompted to increase, starting from a predetermined minimum transmit power, a transmit power of data transmission on a PLC channel ( 112 ) at a given time or time period defined with respect to a mutual time reference of a DSL channel ( 111 ) and the PLC channel ( 112 ). A DSL modem ( 121 ) is prompted to measure a signal-to-noise value at the given time or time period defined with respect to the mutual time reference. Mitigation of interference ( 190 ) between the PLC channel ( 112 ) and the DSL channel ( 111 ) becomes possible.
1-28. (canceled) 29. An apparatus to mitigate interference between a Digital Subscriber Line, DSL, device and a Power Line Communication, PLC, device, said apparatus configured to: provide, to the PLC device, a first instruction to increase a transmit power of data transmission of the PLC device during a measurement period, provide, to the DSL device, a second instruction to measure a signal-to-noise value during the measurement period, and provide, to the PLC device and to the DSL device, a mutual time reference in which the measurement period is defined. 30. The apparatus of claim 29, wherein the apparatus is further configured to: receive, from the PLC device, an indication of the measurement period and the transmit power. 31. The apparatus of claim 29, wherein the apparatus is part of a network management system, or wherein the apparatus is co-located with either the DSL device or the PLC device. 32. The apparatus of claim 29, wherein the apparatus is configured to provide the first instruction as part of a sensitivity test. 33. The apparatus of claim 29, wherein the apparatus is further configured to: obtain a control message indicating that the DSL device is transitioned from Initialization to Showtime, and provide the first instruction while the DSL device is operating in Showtime. 34. The apparatus of claim 29, wherein the first instruction is to increase the transmit power of the data transmission of the PLC device from a predetermined minimum transmit power. 35. The apparatus of claim 29, wherein the apparatus is further configured to provide, to the DSL device, a third instruction to initiate a crosstalk estimation. 36. The apparatus of claim 29, wherein the apparatus is further configured to provide, to another PLC device, a fourth instruction to reduce a transmit power of data transmission of the another PLC device during the measurement period. 37. 
The apparatus of claim 29, wherein the apparatus is further configured to provide the first instruction to a plurality of PLC devices depending on an expected interference from the PLC devices. 38. The apparatus of claim 29, wherein the apparatus is further configured to obtain, from the DSL device, an indication of the signal-to-noise value measured during the measurement period, and wherein the apparatus is optionally configured to compute a value of a set transmit power for use by the PLC device in consideration of the measured signal-to-noise value. 39. The apparatus of claim 29, wherein the apparatus is further configured to provide, to the PLC device, a fifth instruction to reduce the transmit power of the data transmission to a set transmit power, at least for frequency bands used by the DSL device. 40. A method of mitigating interference between a Digital Subscriber Line, DSL, device and a Power Line Communication, PLC, device, said method comprising: providing, to the PLC device, a first instruction to increase a transmit power of data transmission of the PLC device during a measurement period, providing, to the DSL device, a second instruction to measure a signal-to-noise value during the measurement period, and providing, to the PLC device and to the DSL device, a mutual time reference in which the measurement period is defined. 41. The method of claim 40, further comprising: receiving, from the PLC device, an indication of the measurement period and the transmit power. 42. The method of claim 40, further comprising: obtaining a control message indicating that the DSL device is transitioned from Initialization to Showtime, and providing the first instruction while the DSL device is operating in Showtime.
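The coordination in claim 29 amounts to a controller sending three things: a first instruction raising the PLC transmit power during a measurement period, a second instructing the DSL modem to measure SNR in that same period, and a mutual time reference in which the period is defined; claim 36 adds a fourth instruction quieting other PLC devices. A minimal message-building sketch, with the function name, message fields, and time-reference representation as illustrative assumptions:

```python
def build_instructions(measurement_period, other_plc_ids):
    """Controller-side sketch of claim 29: raise the target PLC device's
    transmit power and have the DSL device measure SNR over the same
    measurement period, both defined in one mutual time reference; other
    PLC devices are told to reduce power during the period (claim 36)."""
    time_ref = {"epoch": 0}  # mutual time reference shared by all devices
    msgs = [
        {"to": "plc", "cmd": "raise_tx_power",
         "period": measurement_period, "time_ref": time_ref},
        {"to": "dsl", "cmd": "measure_snr",
         "period": measurement_period, "time_ref": time_ref},
    ]
    msgs += [{"to": dev, "cmd": "reduce_tx_power",
              "period": measurement_period, "time_ref": time_ref}
             for dev in other_plc_ids]
    return msgs
```

Sharing the single `time_ref` object with every message mirrors the claim's requirement that the measurement period be defined in a time reference that is mutual to the PLC and DSL devices.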
2,600
10,107
10,107
14,698,362
2,644
In one example, a method includes determining, by a processor operating in a first power mode and based on first motion data, a first activity of a user, transitioning from operating in the first power mode to operating in a second power mode, wherein the processor consumes less power while operating in the second power mode than in the first power mode, responsive to determining, while the processor is operating in the second power mode and based on second motion data, that a change in an angle relative to gravity satisfies a threshold, transitioning from operating in the second power mode to operating in the first power mode, determining, by the processor and based on second motion data, a second activity of the user, and, responsive to determining that the second activity is different from the first activity, performing an action.
1. A method comprising: determining, by a processor of a mobile computing device operating in a first power mode and based on first motion data generated by a motion sensor of the mobile computing device, a first activity of a user associated with the mobile computing device, the first motion data indicating movement of the mobile computing device during a first time period; transitioning, by the processor, from operating in the first power mode to operating in a second power mode, wherein the processor consumes less power while operating in the second power mode than while operating in the first power mode; while the processor is operating in the second power mode, determining, by a motion module of the mobile computing device and based on second motion data generated by the motion sensor, that a change in an angle of the mobile computing device relative to gravity satisfies a threshold amount of change; responsive to determining that the change in the angle satisfies the threshold amount of change, transitioning, by the processor, from operating in the second power mode to operating in the first power mode; determining, by the processor and based on second motion data generated by the motion sensor during a second time period, a second activity of the user of the mobile computing device; and responsive to determining that the second activity is different from the first activity, performing, by the mobile computing device, an action determined based on the determining that the second activity is different from the first activity. 2. The method of claim 1, wherein the performing the action comprises storing an indication of a current location of the mobile computing device. 3. 
The method of claim 1, wherein the change in the angle of the mobile computing device relative to gravity is a second change in the angle of the mobile computing device, the method further comprising: prior to determining, by the processor, the first activity of the user, determining, by the motion module based on third motion data generated by the motion sensor, that a first change in an angle of the mobile computing device relative to gravity satisfies the threshold amount of change, wherein performing the action comprises: determining, by the processor, a plurality of previously determined activities, wherein each previously determined activity was determined during a time period occurring between the determining that the first change in an angle of the mobile computing device relative to gravity satisfies the threshold amount of change and determining that the second change in an angle of the mobile computing device relative to gravity satisfies the threshold amount of change; and responsive to determining, by the processor, that at least one previously determined activity from the plurality of previously determined activities is incorrect, correcting the at least one previously determined activity. 4. The method of claim 3, wherein correcting the at least one previously determined activity comprises one or more of: removing the at least one previously determined activity from the plurality of previously determined activities; and changing the at least one previously determined activity to correspond to an activity of a majority of the plurality of previously determined activities. 5. 
The method of claim 1, wherein the change in the angle of the mobile computing device is a first change in the angle of the mobile computing device, the method further comprising: storing a series of locations of the mobile computing device indicating a route of the user; determining, by the motion module and based on third motion data generated by the motion sensor, that a second change in the angle of the mobile computing device relative to gravity satisfies the threshold amount of change; and responsive to determining that the second change in the angle of the mobile computing device relative to gravity satisfies the threshold amount of change, outputting, by the mobile computing device and for display, an indication of the route. 6. The method of claim 1, wherein the change in the angle of the mobile computing device is a first change in the angle of the mobile computing device, the method further comprising: determining, by the motion module and based on third motion data generated by the motion sensor, that a second change in the angle of the mobile computing device relative to gravity satisfies the threshold amount of change; determining, by the processor and based on fourth motion data generated by the motion sensor during a third time period, a third activity of a user of the mobile computing device; and responsive to determining that the third activity is different from the second activity, outputting, by the mobile computing device and for display, an indication of the second activity. 7. The method of claim 1, wherein: performing the action comprises determining a current location of the mobile computing device, the second activity is running or bicycling, and the current location corresponds to the start of a run or bicycle ride. 8. 
The method of claim 1, wherein: performing the action comprises determining a current location of the mobile computing device, the first activity is riding in a vehicle, the second activity is walking, and the indication of the current location of the mobile computing device indicates a location at which the vehicle is parked. 9. The method of claim 1, wherein: the motion module includes the motion sensor and a first processor, the processor is an application processor, and the first processor and the application processor are different processors. 10. The method of claim 1, wherein the motion sensor is an accelerometer. 11. A mobile computing device comprising: one or more processors; a motion sensor; and a motion module, wherein at least one processor of the one or more processors determines, while the at least one processor is operating in a first power mode and based on first motion data generated by the motion sensor, a first activity of a user associated with the mobile computing device, the first motion data indicating movement of the mobile computing device during a first time period, and transitions the mobile computing device from operating in the first power mode to operating in a second power mode, wherein the one or more processors consume less power while operating in the second power mode than while operating in the first power mode, wherein the motion module determines, while the mobile computing device is operating in the second power mode and based on second motion data generated by the motion sensor, that a change in an angle of the mobile computing device relative to gravity satisfies a threshold amount of change, and wherein the at least one processor of the one or more processors, responsive to the motion module determining that the change in the angle satisfies the threshold amount of change, transitions the mobile computing device from operating in the second power mode to operating in the first power mode, determines, based on second motion 
data generated by the motion sensor during a second time period, a second activity of the user of the mobile computing device, and, responsive to determining that the second activity is different from the first activity, performs an action determined based on the determining that the second activity is different from the first activity. 12. The mobile computing device of claim 11, wherein the action comprises storing an indication of a current location of the mobile computing device. 13. The mobile computing device of claim 11, wherein: the change in the angle of the mobile computing device relative to gravity is a second change in the angle of the mobile computing device, the motion module, prior to the at least one processor determining the first activity of the user, determines, based on third motion data generated by the motion sensor, that a first change in an angle of the mobile computing device relative to gravity satisfies the threshold amount of change, and the at least one processor performs the action by at least: determining, by the processor, a plurality of previously determined activities, wherein each previously determined activity was determined during a time period occurring between the determining that the first change in an angle of the mobile computing device relative to gravity satisfies the threshold amount of change and determining that the second change in an angle of the mobile computing device relative to gravity satisfies the threshold amount of change; and responsive to determining, by the processor, that at least one previously determined activity from the plurality of previously determined activities is incorrect, correcting the at least one previously determined activity. 14. 
The mobile computing device of claim 13, wherein the at least one processor corrects the at least one previously determined activity by at least performing one or more of: removing the at least one previously determined activity from the plurality of previously determined activities; and changing the at least one previously determined activity to correspond to an activity of a majority of the plurality of previously determined activities. 15. The mobile computing device of claim 11, wherein: the change in the angle of the mobile computing device is a first change in the angle of the mobile computing device, the at least one of the one or more processors stores a series of locations of the mobile computing device indicating a route of the user, the motion module determines, based on third motion data generated by the motion sensor, that a second change in the angle of the mobile computing device relative to gravity satisfies the threshold amount of change, and the at least one of the one or more processors, responsive to determining that the second change in the angle of the mobile computing device relative to gravity satisfies the threshold amount of change, outputs, for display, an indication of the route. 16. The mobile computing device of claim 11, wherein: the change in the angle of the mobile computing device is a first change in the angle of the mobile computing device, the motion module determines, based on third motion data generated by the motion sensor, that a second change in the angle of the mobile computing device relative to gravity satisfies the threshold amount of change, and the at least one of the one or more processors determines, based on fourth motion data generated by the motion sensor during a third time period, a third activity of a user of the mobile computing device, and, responsive to determining that the third activity is different from the second activity, outputs, for display, an indication of the second activity. 17. 
The mobile computing device of claim 11, wherein performing the action comprises determining a current location of the mobile computing device, wherein the second activity is running or bicycling, and wherein the current location corresponds to the start of a run or bicycle ride. 18. The mobile computing device of claim 11, wherein: performing the action comprises determining a current location of the mobile computing device, the first activity is riding in a vehicle, the second activity is walking, and the indication of the current location of the mobile computing device indicates a location at which the vehicle is parked. 19. The mobile computing device of claim 11, wherein: the motion module includes the motion sensor and a first processor, the processor is an application processor, the first processor and the application processor are different processors, and the motion sensor is an accelerometer. 20. A non-transitory computer-readable storage medium encoded with instructions that, when executed, cause at least one of a plurality of processors of a mobile computing device to: determine, while the at least one processor is operating in a first power mode and based on first motion data generated by a motion sensor of the mobile computing device, a first activity of a user associated with the mobile computing device, the first motion data indicating movement of the mobile computing device during a first time period; transition, by the at least one processor, from operating in the first power mode to operating in a second power mode, wherein the at least one processor consumes less power while operating in the second power mode than while operating in the first power mode; while the at least one processor is operating in the second power mode, determine, by a motion module and based on second motion data generated by the motion sensor, that a change in an angle of the mobile computing device relative to gravity satisfies a threshold amount of change; responsive to 
determining that the change in the angle satisfies the threshold amount of change, transition, by the at least one processor, from operating in the second power mode to operating in the first power mode; determine, based on second motion data generated by the motion sensor during a second time period, a second activity of the user of the mobile computing device; and responsive to determining that the second activity is different from the first activity, perform an action determined based on the determining that the second activity is different from the first activity.
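The claims above repeatedly test whether "a change in an angle of the mobile computing device relative to gravity satisfies a threshold amount of change." One common way to obtain such an angle from a 3-axis accelerometer is the angle between the gravity estimate and a device axis. The helper below is an illustrative sketch under that assumption, not the patented method, and the 40° default is invented.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def tilt_angle_deg(accel: Vec3) -> float:
    """Angle (degrees) between the gravity estimate and the device z axis."""
    x, y, z = accel
    norm = math.sqrt(x * x + y * y + z * z)
    # Clamp guards against floating-point values just outside acos's domain.
    return math.degrees(math.acos(max(-1.0, min(1.0, z / norm))))

def satisfies_threshold(prev: Vec3, curr: Vec3, threshold_deg: float = 40.0) -> bool:
    """True when the tilt changed by at least the (hypothetical) threshold."""
    return abs(tilt_angle_deg(curr) - tilt_angle_deg(prev)) >= threshold_deg
```

A device lying flat (gravity along z) that is then stood on edge yields a 90° change, which would satisfy any reasonable threshold and wake the processor.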
2,600
10,108
10,108
16,146,883
2,659
Systems and processes for accelerating task performance are provided. An example method includes, at an electronic device including a display and one or more input devices, displaying, on the display, a user interface including a suggestion affordance associated with a task, detecting, via the one or more input devices, a first user input corresponding to a selection of the suggestion affordance, in response to detecting the first user input: in accordance with a determination that the task is a task of a first type, performing the task, and in accordance with a determination that the task is a task of a second type different than the first type, displaying a confirmation interface including a confirmation affordance.
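The abstract distinguishes tasks that run immediately when their suggestion affordance is selected ("first type") from tasks that first display a confirmation interface ("second type"). That branch can be sketched as below; the type names, example tasks, and callback shape are all hypothetical, since the abstract does not define the taxonomy.

```python
from enum import Enum, auto
from typing import Callable, Dict

class TaskType(Enum):
    FIRST = auto()    # safe to perform immediately (e.g. start a timer)
    SECOND = auto()   # needs user confirmation (e.g. send a payment)

def on_suggestion_selected(task: Dict,
                           perform: Callable[[Dict], None],
                           show_confirmation: Callable[[Dict], None]) -> None:
    # Selecting the suggestion affordance either runs the task outright
    # or routes it through a confirmation interface, depending on its type.
    if task["type"] is TaskType.FIRST:
        perform(task)
    else:
        show_confirmation(task)
```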
1. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first electronic device, cause the first electronic device to: receive, with a digital assistant, a natural-language speech input; determine a voice shortcut associated with the natural-language speech input, wherein the voice shortcut is a user-generated phrase customized by a user of the electronic device; determine a task corresponding to the voice shortcut; cause an application to initiate performance of the task, wherein the application is preloaded with one or more customized parameters associated with the voice shortcut, the one or more customized parameters defined by the user prior to receiving the speech input; receive a response from the application, wherein the response is associated with the task; determine, based on the response, whether the task was successfully performed; and provide an output indicating whether the task was successfully performed. 2. The non-transitory computer-readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to: after receiving the response, display an application user interface associated with the application. 3. The non-transitory computer-readable storage medium of claim 1, wherein providing an output indicating whether the task was successfully performed includes: in accordance with a determination that the task was performed successfully, displaying an indication that the task was performed successfully; and in accordance with a determination that the task was not performed successfully, displaying an indication that the task was not performed successfully. 4. 
The non-transitory computer-readable storage medium of claim 3, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to: further in accordance with a determination that the task was not performed successfully, display a failure user interface. 5. The non-transitory computer-readable storage medium of claim 4, wherein the failure user interface includes a retry affordance, and wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to: detect a user input corresponding to a selection of the retry affordance; and in response to detecting the user input corresponding to a selection of the retry affordance, cause the application to initiate performance of the task. 6. The non-transitory computer-readable storage medium of claim 4, wherein the failure user interface includes a cancel affordance, and wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to: detect a user input corresponding to a selection of the cancel affordance; and in response to the user input corresponding to a selection of the cancel affordance, cease display of the failure user interface. 7. The non-transitory computer-readable storage medium of claim 4, wherein the failure user interface includes an application launch affordance, and wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to: detect a user input corresponding to a selection of the application launch affordance; and in response to the user input corresponding to a selection of the application launch affordance, launch the application. 8. 
The non-transitory computer-readable storage medium of claim 3, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the device to: further in accordance with a determination that the task was performed successfully, display a task success animation. 9. The non-transitory computer-readable storage medium of claim 1, wherein causing an application to initiate performance of the task includes displaying a task performance animation. 10. The non-transitory computer-readable storage medium of claim 1, wherein causing an application to initiate performance of the task includes: prompting the user to confirm performance of the task. 11. The non-transitory computer-readable storage medium of claim 1, wherein providing an output includes: generating a natural-language output based on the response; and providing, with the digital assistant, the natural-language output. 12. The non-transitory computer-readable storage medium of claim 11, wherein providing, with the digital assistant, the natural-language output includes providing an audio speech output. 13. The non-transitory computer-readable storage medium of claim 11, wherein providing, with the digital assistant, the natural-language output includes displaying the natural-language output. 14. The non-transitory computer-readable storage medium of claim 11, wherein the natural-language output includes a reference to the application. 15. The non-transitory computer-readable storage medium of claim 11, wherein the response includes a natural-language expression and the natural-language output includes at least a portion of the natural-language expression. 16. The non-transitory computer-readable storage medium of claim 11, wherein the natural-language output indicates that the task was performed successfully by the application. 17. 
The non-transitory computer-readable storage medium of claim 1, wherein the task is a request for information from a third-party service. 18. The non-transitory computer-readable storage medium of claim 1, wherein the natural-language output indicates that the task was not performed successfully by the application. 19. A method, comprising: at an electronic device with a display and a touch-sensitive surface: receiving, with a digital assistant, a natural-language speech input; determining a voice shortcut associated with the natural-language speech input, wherein the voice shortcut is a user-generated phrase customized by a user of the electronic device; determining a task corresponding to the voice shortcut; causing an application to initiate performance of the task, wherein the application is preloaded with one or more customized parameters associated with the voice shortcut, the one or more customized parameters defined by the user prior to receiving the speech input; receiving a response from the application, wherein the response is associated with the task; determining, based on the response, whether the task was successfully performed; and providing an output indicating whether the task was successfully performed. 20. The method of claim 19, further comprising: after receiving the response, displaying an application user interface associated with the application. 21. The method of claim 19, wherein providing an output indicating whether the task was successfully performed includes: in accordance with a determination that the task was performed successfully, displaying an indication that the task was performed successfully; and in accordance with a determination that the task was not performed successfully, displaying an indication that the task was not performed successfully. 22. 
An electronic device, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, with a digital assistant, a natural-language speech input; determining a voice shortcut associated with the natural-language speech input, wherein the voice shortcut is a user-generated phrase customized by a user of the electronic device; determining a task corresponding to the voice shortcut; causing an application to initiate performance of the task, wherein the application is preloaded with one or more customized parameters associated with the voice shortcut, the one or more customized parameters defined by the user prior to receiving the speech input; receiving a response from the application, wherein the response is associated with the task; determining, based on the response, whether the task was successfully performed; and providing an output indicating whether the task was successfully performed. 23. The electronic device of claim 22, further comprising: after receiving the response, displaying an application user interface associated with the application. 24. The electronic device of claim 22, wherein providing an output indicating whether the task was successfully performed includes: in accordance with a determination that the task was performed successfully, displaying an indication that the task was performed successfully; and in accordance with a determination that the task was not performed successfully, displaying an indication that the task was not performed successfully.
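The flow recited in claim 19 (receive a speech input, resolve a user-customized phrase to a task, run the task in an application preloaded with user-defined parameters, then report success or failure) can be sketched roughly as below. This is a hypothetical illustration only: `handle_speech_input`, the shortcut table, and the toy app registry are invented names, not part of any real assistant API.

```python
def handle_speech_input(speech_text, shortcuts, apps):
    """Dispatch a natural-language speech input via voice shortcuts.

    shortcuts: dict mapping a user-generated phrase to
               (app_name, task, preset_params).
    apps:      dict mapping app_name to a callable that performs a task.
    """
    # Determine the voice shortcut associated with the speech input.
    entry = shortcuts.get(speech_text.strip().lower())
    if entry is None:
        return "No shortcut matches that phrase."
    app_name, task, params = entry
    # Cause the application, preloaded with the user's customized
    # parameters, to initiate performance of the task.
    response = apps[app_name](task, **params)
    # Determine from the response whether the task succeeded, and
    # provide a natural-language output either way.
    if response.get("ok"):
        return f"{app_name} finished '{task}' successfully."
    return f"{app_name} could not complete '{task}'."
```

The key structural point matching the claims is that the parameters travel with the shortcut (defined before the speech input arrives), so the application receives them without further prompting.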
Systems and processes for accelerating task performance are provided. An example method includes, at an electronic device including a display and one or more input devices, displaying, on the display, a user interface including a suggestion affordance associated with a task, detecting, via the one or more input devices, a first user input corresponding to a selection of the suggestion affordance, in response to detecting the first user input: in accordance with a determination that the task is a task of a first type, performing the task, and in accordance with a determination that the task is a task of a second type different than the first type, displaying a confirmation interface including a confirmation affordance.
TechCenter: 2,600
Unnamed: 0: 10,109
level_0: 10,109
ApplicationNumber: 12,588,758
ArtUnit: 2,628
A display device includes: a panel in which plural pixels that emit lights according to a video signal are sectioned into plural areas; a light reception sensor that is arranged in each of the areas and outputs a light reception signal according to light emission luminance; and signal processing means. The area includes first and second pixel groups including at least one pixel and plural pixels other than the first pixel group, respectively. The signal processing means includes arithmetic means for outputting an arithmetic signal according to arithmetic operation of an offset value and a light reception value; converting means for outputting digital data according to the arithmetic signal; and correcting means for correcting the video signal according to the digital data and supplying the corrected video signal to the first pixel group.
1. A display device comprising: a panel in which plural pixels that emit lights according to a video signal are sectioned into plural areas; a light reception sensor that is arranged in each of the areas and outputs a light reception signal according to light emission luminance; and signal processing means for applying processing to the light reception signal, wherein the area includes a first pixel group including at least one pixel; and a second pixel group including plural pixels other than the first pixel group, and the signal processing means sets, as an offset value, a light reception signal obtained when the first pixel group and the second pixel group are caused to emit lights at predetermined light emission luminance and sets, as a light reception value, a light reception signal obtained when the second pixel group is caused to emit light at the predetermined light emission luminance and light emission luminance of the first pixel group is changed, and includes arithmetic means for outputting an arithmetic signal according to arithmetic operation of the offset value and the light reception value; converting means for outputting digital data according to the arithmetic signal; and correcting means for correcting the video signal according to the digital data and supplying the corrected video signal to the first pixel group. 2. A display device according to claim 1, wherein the offset value is a light reception signal obtained when the first pixel group and the second pixel group are caused to uniformly emit lights at a predetermined gradation. 3. A display device according to claim 1, wherein the second pixel group includes all the pixels other than the first pixel group in the area. 4. A display device according to claim 1, wherein the second pixel group includes a part of pixels other than the first pixel group in the area. 5. 
A display device according to claim 1, wherein the light reception value is a light reception signal obtained when light emission luminance of the second pixel group is maintained and light emission luminance of the first pixel group is reduced. 6. A display device according to claim 1, wherein the light reception value is a light reception signal obtained when light emission luminance of the second pixel group is maintained and light emission luminance of the first pixel group is increased. 7. A display device according to claim 1, wherein the pixels emit lights with self-emitting elements. 8. A display device according to claim 1, wherein the converting means is A/D conversion processing. 9. A display device according to claim 1, wherein the arithmetic operation is processing for calculating a difference. 10. A display device according to claim 1, wherein the light reception sensor includes a light receiving element and a resistor, the arithmetic means includes a first switch, a second switch, a third switch, a first capacitor, and a second capacitor, the first switch is connected between an input terminal and an output terminal of the arithmetic means, the second switch is connected between the output terminal and the third switch, the third switch is connected between the second switch and a first power supply line, the first capacitor is connected between the second capacitor and a second power supply line, the second capacitor is connected between the first capacitor and the output terminal, a section between the second switch and the third switch is connected between the first capacitor and the second capacitor, and the input terminal is connected between the light receiving element and the resistor of the light reception sensor. 11. 
A display device according to claim 10, wherein the arithmetic means causes the first pixel group and the second pixel group to emit lights at predetermined light emission luminance in a state in which the first switch is on, the second switch is on, and the third switch is off, causes the second pixel group to emit light at the predetermined light emission luminance and changes light emission luminance of the first pixel group in a state in which the first switch is on, the second switch is off, and the third switch is off, and turns off the first switch, turns off the second switch, and turns on the third switch, and outputs the digital data as the arithmetic signal. 12. A display device comprising: a panel in which plural pixels that emit lights according to a video signal are sectioned into plural areas; a light reception sensor that is arranged in each of the areas and outputs a light reception signal according to light emission luminance; and signal processing means for applying processing to the light reception signal, wherein the area includes a first pixel group including at least one pixel; and a second pixel group including plural pixels other than the first pixel group, and the signal processing means sets, as an offset value, a light reception signal obtained when first signal potential is supplied to the first pixel group and the second pixel group and sets, as a light reception value, a light reception signal obtained when the first signal potential is supplied to the second pixel group and second signal potential is supplied to the first pixel group, and includes arithmetic means for outputting an arithmetic signal according to arithmetic operation of the offset value and the light reception value; converting means for outputting digital data according to the arithmetic signal; and correcting means for correcting the video signal according to the digital data and supplying the corrected video signal to the first pixel group. 13. 
A display device according to claim 12, wherein the second pixel group includes all the pixels other than the first pixel group in the area. 14. A display device according to claim 12, wherein the second pixel group includes a part of pixels other than the first pixel group in the area. 15. A display device according to claim 12, wherein the second signal potential is higher than the first signal potential. 16. A display device according to claim 12, wherein the second signal potential is lower than the first signal potential. 17. A display device according to claim 12, wherein the pixels emit lights with self-emitting elements. 18. A display device according to claim 12, wherein the converting means is A/D conversion processing. 19. A display device according to claim 12, wherein the arithmetic operation is processing for calculating a difference. 20. A display device comprising: a panel in which plural pixels that emit lights according to a video signal are sectioned into plural areas; a light reception sensor that is arranged in each of the areas and outputs a light reception signal according to light emission luminance; and a signal processing unit configured to apply processing to the light reception signal, wherein the area includes a first pixel group including at least one pixel; and a second pixel group including plural pixels other than the first pixel group, and the signal processing unit sets, as an offset value, a light reception signal obtained when the first pixel group and the second pixel group are caused to emit lights at predetermined light emission luminance and sets, as a light reception value, a light reception signal obtained when the second pixel group is caused to emit light at the predetermined light emission luminance and light emission luminance of the first pixel group is changed, and includes an arithmetic unit configured to output an arithmetic signal according to arithmetic operation of the offset value and the light reception value; 
a converting unit configured to output digital data according to the arithmetic signal; and a correcting unit configured to correct the video signal according to the digital data and supply the corrected video signal to the first pixel group. 21. A display device comprising: a panel in which plural pixels that emit lights according to a video signal are sectioned into plural areas; a light reception sensor that is arranged in each of the areas and outputs a light reception signal according to light emission luminance; and a signal processing unit configured to apply processing to the light reception signal, wherein the area includes a first pixel group including at least one pixel; and a second pixel group including plural pixels other than the first pixel group, and the signal processing unit sets, as an offset value, a light reception signal obtained when first signal potential is supplied to the first pixel group and the second pixel group and sets, as a light reception value, a light reception signal obtained when the first signal potential is supplied to the second pixel group and second signal potential is supplied to the first pixel group, and includes an arithmetic unit configured to output an arithmetic signal according to arithmetic operation of the offset value and the light reception value; a converting unit configured to output digital data according to the arithmetic signal; and a correcting unit configured to correct the video signal according to the digital data and supply the corrected video signal to the first pixel group.
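Numerically, the offset-and-difference measurement recited in claims 1, 9, and 20 amounts to subtracting a baseline reading (both pixel groups at the predetermined luminance) from a reading taken while only the first group's luminance is changed, which cancels the second group's shared contribution. The sketch below is a minimal illustration; the linear sensor model, the preset level, and the correction gain are all invented for the example and are not values from the patent.

```python
PRESET = 128  # predetermined light emission luminance (arbitrary units)

def first_group_response(sensor, delta):
    """Isolate the first pixel group's contribution for one area.

    sensor(first, second) -> combined light reception signal for the area.
    """
    offset = sensor(PRESET, PRESET)           # offset value: both groups at preset
    reading = sensor(PRESET + delta, PRESET)  # light reception value: group 1 changed
    return reading - offset                   # difference cancels group 2's share

def correct_video_signal(target, measured, gain=0.5):
    # Correcting means (sketch): nudge the drive value toward the target
    # based on the digitized response measured for the first pixel group.
    return target + gain * (target - measured)
```

With a linear sensor such as `lambda first, second: 0.8 * first + 0.2 * second`, the difference returns only the first group's scaled change, which is the point of taking the offset reading first.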
A display device includes: a panel in which plural pixels that emit lights according to a video signal are sectioned into plural areas; a light reception sensor that is arranged in each of the areas and outputs a light reception signal according to light emission luminance; and signal processing means. The area includes first and second pixel groups including at least one pixel and plural pixels other than the first pixel group, respectively. The signal processing means includes arithmetic means for outputting an arithmetic signal according to arithmetic operation of an offset value and a light reception value; converting means for outputting digital data according to the arithmetic signal; and correcting means for correcting the video signal according to the digital data and supplying the corrected video signal to the first pixel group.1. A display device comprising: a panel in which plural pixels that emit lights according to a video signal are sectioned into plural areas; a light reception sensor that is arranged in each of the areas and outputs a light reception signal according to light emission luminance; and signal processing means for applying processing to the light reception signal, wherein the area includes a first pixel group including at least one pixel; and a second pixel group including plural pixels other than the first pixel group, and the signal processing means sets, as an offset value, a light reception signal obtained when the first pixel group and the second pixel group are caused to emit lights at predetermined light emission luminance and sets, as a light reception value, a light reception signal obtained when the second pixel group is caused to emit light at the predetermined light emission luminance and light emission luminance of the first pixel group is changed, and includes arithmetic means for outputting an arithmetic signal according to arithmetic operation of the offset value and the light reception value; converting means for 
outputting digital data according to the arithmetic signal; and correcting means for correcting the video signal according to the digital data and supplying the corrected video signal to the first pixel group. 2. A display device according to claim 1, wherein the offset value is a light reception signal obtained when the first pixel group and the second pixel group are caused to uniformly emit lights at a predetermined gradation. 3. A display device according to claim 1, wherein the second pixel group includes all the pixels other than the first pixel group in the area. 4. A display device according to claim 1, wherein the second pixel group includes a part of pixels other than the first pixel group in the area. 5. A display device according to claim 1, wherein the light reception value is a light reception signal obtained when light emission luminance of the second pixel group is maintained and light emission luminance of the first pixel group is reduced. 6. A display device according to claim 1, wherein the light reception value is a light reception signal obtained when light emission luminance of the second pixel group is maintained and light emission luminance of the first pixel group is increased. 7. A display device according to claim 1, wherein the pixels emit lights with self-emitting elements. 8. A display device according to claim 1, wherein the converting means is A/D conversion processing. 9. A display device according to claim 1, wherein the arithmetic operation is processing for calculating a difference. 10. 
A display device according to claim 1, wherein the light reception sensor includes a light receiving element and a resistor, the arithmetic means includes a first switch, a second switch, a third switch, a first capacitor, and a second capacitor, the first switch is connected between an input terminal and an output terminal of the arithmetic means, the second switch is connected between the output terminal and the third switch, the third switch is connected between the second switch and a first power supply line, the first capacitor is connected between the second capacitor and a second power supply line, the second capacitor is connected between the first capacitor and the output terminal, a section between the second switch and the third switch is connected between the first capacitor and the second capacitor, and the input terminal is connected between the light receiving element and the resistor of the light reception sensor. 11. A display device according to claim 10, wherein the arithmetic means causes the first pixel group and the second pixel group to emit lights at predetermined light emission luminance in a state in which the first switch is on, the second switch is on, and the third switch is off, causes the second pixel group to emit light at the predetermined light emission luminance and changes light emission luminance of the first pixel group in a state in which the first switch is on, the second switch is off, and the third switch is off, and turns off the first switch, turns off the second switch, and turns on the third switch, and outputs the digital data as the arithmetic signal. 12. 
A display device comprising: a panel in which plural pixels that emit lights according to a video signal are sectioned into plural areas; a light reception sensor that is arranged in each of the areas and outputs a light reception signal according to light emission luminance; and signal processing means for applying processing to the light reception signal, wherein the area includes a first pixel group including at least one pixel; and a second pixel group including plural pixels other than the first pixel group, and the signal processing means sets, as an offset value, a light reception signal obtained when first signal potential is supplied to the first pixel group and the second pixel group and sets, as a light reception value, a light reception signal obtained when the first signal potential is supplied to the second pixel group and second signal potential is supplied to the first pixel group, and includes arithmetic means for outputting an arithmetic signal according to arithmetic operation of the offset value and the light reception value; converting means for outputting digital data according to the arithmetic signal; and correcting means for correcting the video signal according to the digital data and supplying the corrected video signal to the first pixel group. 13. A display device according to claim 12, wherein the second pixel group includes all the pixels other than the first pixel group in the area. 14. A display device according to claim 12, wherein the second pixel group includes a part of pixels other than the first pixel group in the area. 15. A display device according to claim 12, wherein the second signal potential is higher than the first signal potential. 16. A display device according to claim 12, wherein the second signal potential is lower than the first signal potential. 17. A display device according to claim 12, wherein the pixels emit lights with self-emitting elements. 18. 
A display device according to claim 12, wherein the converting means is A/D conversion processing. 19. A display device according to claim 12, wherein the arithmetic operation is processing for calculating a difference. 20. A display device comprising: a panel in which plural pixels that emit lights according to a video signal are sectioned into plural areas; a light reception sensor that is arranged in each of the areas and outputs a light reception signal according to light emission luminance; and a signal processing unit configured to apply processing to the light reception signal, wherein the area includes a first pixel group including at least one pixel; and a second pixel group including plural pixels other than the first pixel group, and the signal processing unit sets, as an offset value, a light reception signal obtained when the first pixel group and the second pixel group are caused to emit lights at predetermined light emission luminance and sets, as a light reception value, a light reception signal obtained when the second pixel group is caused to emit light at the predetermined light emission luminance and light emission luminance of the first pixel group is changed, and includes an arithmetic unit configured to output an arithmetic signal according to arithmetic operation of the offset value and the light reception value; a converting unit configured to output digital data according to the arithmetic signal; and a correcting unit configured to correct the video signal according to the digital data and supplying the corrected video signal to the first pixel group. 21. 
A display device comprising: a panel in which plural pixels that emit lights according to a video signal are sectioned into plural areas; a light reception sensor that is arranged in each of the areas and outputs a light reception signal according to light emission luminance; and a signal processing unit configured to apply processing to the light reception signal, wherein the area includes a first pixel group including at least one pixel; and a second pixel group including plural pixels other than the first pixel group, and the signal processing unit sets, as an offset value, a light reception signal obtained when first signal potential is supplied to the first pixel group and the second pixel group and sets, as a light reception value, a light reception signal obtained when the first signal potential is supplied to the second pixel group and second signal potential is supplied to the first pixel group, and includes an arithmetic unit configured to output an arithmetic signal according to arithmetic operation of the offset value and the light reception value; a converting unit configured to output digital data according to the arithmetic signal; and a correcting unit configured to correct the video signal according to the digital data and supply the corrected video signal to the first pixel group.
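The measurement sequence these claims describe (drive both pixel groups at the first signal potential to capture an offset value, drive only the first group at the second potential to capture a light reception value, take the difference, then correct the video signal from the digitized result) can be sketched in Python. The `drive_panel`/`read_sensor` hooks and the proportional correction rule are assumptions for illustration; the claims themselves only specify the difference arithmetic (claim 19) and a correction based on the digital data.

```python
def measure_correction(read_sensor, drive_panel, first_potential, second_potential):
    """Offset / light-reception measurement per the claims.

    drive_panel(first_group_v, second_group_v) and read_sensor() are
    hypothetical hardware hooks, not terms from the patent.
    """
    # Offset value: both pixel groups driven at the first signal potential.
    drive_panel(first_potential, first_potential)
    offset_value = read_sensor()

    # Light reception value: the second pixel group stays at the first
    # potential while the first pixel group gets the second potential.
    drive_panel(second_potential, first_potential)
    reception_value = read_sensor()

    # Arithmetic means: claim 19 names "processing for calculating a difference".
    return reception_value - offset_value


def correct_video_signal(video_level, arithmetic_signal, target_level, gain=1):
    # Correcting means (illustrative rule only): nudge the video signal
    # toward a target luminance based on the digitized arithmetic signal.
    return video_level + gain * (target_level - arithmetic_signal)


# Illustrative run against a toy panel whose sensor reading is a linear
# mix of the two group drive levels (integers keep the arithmetic exact):
_state = {}
def _drive(first_group_v, second_group_v):
    _state["v"] = (first_group_v, second_group_v)
def _read():
    first_v, second_v = _state["v"]
    return 10 * first_v + second_v
signal = measure_correction(_read, _drive, 1, 3)
```

Isolating the first pixel group's contribution as a difference cancels the fixed contribution of the second group, which is why the claims hold the second group constant across both readings.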
2,600
10,110
10,110
14,866,992
2,626
An electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface displays a first user interface of a first software application, detects an input on the touch-sensitive surface while displaying the first user interface, and, in response to detecting the input while displaying the first user interface, performs a first operation in accordance with a determination that the input satisfies intensity input criteria including that the input satisfies a first intensity threshold during a first predefined time period, and performs a second operation in accordance with a determination that the input satisfies long press criteria including that the input remains below the first intensity threshold during the first predefined time period.
1. A method, comprising: at an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface: displaying a first user interface; while displaying the first user interface, detecting an input on the touch-sensitive surface; and, in response to detecting the input while displaying the first user interface: in accordance with a determination that the input satisfies tap criteria including that the input ceases to remain on the touch-sensitive surface during a first predefined time period, performing a first operation; in accordance with a determination that the input satisfies intensity input criteria including that the input satisfies a first intensity threshold during a second predefined time period that is longer than the first predefined time period, while the input is maintained on the touch-sensitive surface, performing a second operation that is distinct from the first operation as a result of the input satisfying the first intensity threshold, even if the second predefined time period has not yet been met; and in accordance with a determination that the input satisfies long press criteria including that the input remains below the first intensity threshold during the second predefined time period, performing a third operation that is distinct from the first operation and the second operation. 2. 
The method of claim 1, wherein: detecting the input on the touch-sensitive surface includes detecting a first portion of the input and a second portion of the input that is subsequent to the first portion of the input; and the method includes: in response to detecting the first portion of the input on the touch-sensitive surface, identifying a first set of gesture recognizers that correspond to at least the first portion of the input as candidate gesture recognizers, the first set of gesture recognizers including a first gesture recognizer and a second gesture recognizer; and, in response to detecting the second portion of the input on the touch-sensitive surface: in accordance with the determination that the input satisfies the intensity input criteria, performing the second operation including processing the input with the first gesture recognizer; and, in accordance with the determination that the input satisfies the long press criteria, performing the third operation including processing the input with the second gesture recognizer. 3. The method of claim 2, wherein the first gesture recognizer is an intensity-based gesture recognizer and the second gesture recognizer is a long press gesture recognizer. 4. The method of claim 2, wherein the input includes a third portion of the input that is subsequent to the second portion of the input, and the method includes processing the third portion of the input with the first gesture recognizer. 5. The method of claim 2, wherein the first set of gesture recognizers includes a third gesture recognizer. 6. The method of claim 2, including: in response to determining that the input satisfies a second intensity threshold, processing the input with the first gesture recognizer, including replacing display of the first user interface with a second user interface. 7. 
The method of claim 2, wherein: the first set of gesture recognizers includes a fourth gesture recognizer; and the method includes, in response to determining that the input satisfies a second intensity threshold, processing the input with the fourth gesture recognizer. 8. The method of claim 2, including: in response to detecting the first portion of the input, performing a fourth operation that is distinct from the first operation, the second operation, and the third operation. 9. The method of claim 8, including, subsequent to performing the fourth operation: in accordance with the determination that the input satisfies the intensity input criteria, performing the second operation; and, in accordance with the determination that the input satisfies the long press criteria, performing the third operation. 10. The method of claim 1, wherein: performing the second operation includes displaying a preview area. 11. The method of claim 1, wherein: performing the third operation includes displaying a menu view. 12. The method of claim 1, wherein the first intensity threshold is satisfied in response to multiple contacts in the input satisfying the first intensity threshold. 13. The method of claim 1, wherein the first intensity threshold is satisfied in response to a combination of the intensity applied by a plurality of contacts in the input satisfying the first intensity threshold. 14. The method of claim 1, wherein the first intensity threshold is adjustable. 15. The method of claim 2, including updating the first gesture recognizer to be activated in response to the input satisfying a third intensity threshold that is distinct from the first intensity threshold. 16. The method of claim 1, including: in accordance with a determination that the input does not satisfy the intensity input criteria and does not satisfy the long press criteria, forgoing the second operation and the third operation. 17. 
The method of claim 1, wherein: the intensity input criteria include that the input does not move across the touch-sensitive surface by more than a predefined distance; and the long press criteria include that the contact in the input does not move across the touch-sensitive surface by more than the predefined distance. 18. The method of claim 2, including: detecting a second input on the touch-sensitive surface, including detecting a first portion of the second input and a second portion of the second input that is subsequent to the first portion of the second input; in response to detecting the first portion of the second input on the touch-sensitive surface, identifying a second set of gesture recognizers that correspond to at least the first portion of the second input, the second set of gesture recognizers including the second gesture recognizer without the first gesture recognizer; and, in response to detecting the second portion of the second input on the touch-sensitive surface, in accordance with a determination that the second input satisfies second long press criteria including that the second input remains on the touch-sensitive surface for a third predefined time period that has a different duration from the second predefined time period, processing the second input with the second gesture recognizer. 19. 
An electronic device, comprising: a display; a touch-sensitive surface; one or more sensors to detect intensity of contacts with the touch-sensitive surface; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: displaying a first user interface; while displaying the first user interface, detecting an input on the touch-sensitive surface; and, in response to detecting the input while displaying the first user interface: in accordance with a determination that the input satisfies tap criteria including that the input ceases to remain on the touch-sensitive surface during a first predefined time period, performing a first operation; in accordance with a determination that the input satisfies intensity input criteria including that the input satisfies a first intensity threshold during a second predefined time period that is longer than the first predefined time period, while the input is maintained on the touch-sensitive surface, performing a second operation that is distinct from the first operation as a result of the input satisfying the first intensity threshold, even if the second predefined time period has not yet been met; and in accordance with a determination that the input satisfies long press criteria including that the input remains below the first intensity threshold during the second predefined time period, performing a third operation that is distinct from the first operation and the second operation. 20. 
A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface cause the device to: display a first user interface; while displaying the first user interface, detect an input on the touch-sensitive surface; and, in response to detecting the input while displaying the first user interface: in accordance with a determination that the input satisfies tap criteria including that the input ceases to remain on the touch-sensitive surface during a first predefined time period, perform a first operation; in accordance with a determination that the input satisfies intensity input criteria including that the input satisfies a first intensity threshold during a second predefined time period that is longer than the first predefined time period, while the input is maintained on the touch-sensitive surface, perform a second operation that is distinct from the first operation as a result of the input satisfying the first intensity threshold, even if the second predefined time period has not yet been met; and in accordance with a determination that the input satisfies long press criteria including that the input remains below the first intensity threshold during the second predefined time period, perform a third operation that is distinct from the first operation and the second operation. 21. The method of claim 1, wherein: the intensity input criteria further require that: the input satisfy the first intensity threshold after the first predefined time period. 22. The electronic device of claim 19, wherein: the intensity input criteria further require that: the input satisfy the first intensity threshold after the first predefined time period. 23. 
The computer readable storage medium of claim 20, wherein: the intensity input criteria further require that: the input satisfy the first intensity threshold after the first predefined time period.
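The tap / deep-press / long-press disambiguation recited in claim 1 can be sketched as a small classifier. This is a minimal illustration, not the patent's implementation: the function name, the `samples` representation, and the parameter names are invented, and real gesture recognizers process events incrementally rather than over a completed sample list.

```python
def classify_input(samples, first_period, second_period, intensity_threshold):
    """Classify a touch input per the tap / intensity / long-press criteria
    of claim 1. `samples` is a time-ordered list of (timestamp, intensity)
    tuples for the contact; the last timestamp is when the contact lifted.
    """
    if not samples:
        return None
    lift_time = samples[-1][0]

    # Tap criteria: the input ceases to remain on the surface during the
    # first predefined time period -> first operation.
    if lift_time < first_period:
        return "tap"

    # Intensity input criteria: the first intensity threshold is satisfied
    # while the contact is held, even if the second predefined time period
    # has not yet been met -> second operation.
    for t, intensity in samples:
        if t <= second_period and intensity >= intensity_threshold:
            return "deep_press"

    # Long press criteria: the input stays below the first intensity
    # threshold through the second predefined time period -> third operation.
    if lift_time >= second_period:
        return "long_press"
    return None
```

The key point the claims emphasize is that the intensity branch fires as soon as the threshold is crossed, without waiting out the long-press timer, which is why the intensity check precedes the duration check above.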
2,600
10,111
10,111
15,795,721
2,651
The disclosure includes a headset including one or more earphones and a connector configured to couple data and charge between the headset and a user equipment (UE). The headset also includes a charge node. The charge node includes a charge port for receiving UE charge from a charge source. The charge node also includes a downstream port for coupling audio data toward the earphones. The charge node further includes an upstream port for coupling the audio data toward the earphones via the downstream port and coupling UE charge from the charge port toward the UE via the connector.
1. A headset comprising: one or more earphones; a connector configured to couple data and charge between the headset and a user equipment (UE); and a charge node including: a charge port for receiving UE charge from a charge source, a downstream port for coupling audio data toward the earphones, and an upstream port for coupling the audio data toward the earphones via the downstream port and coupling UE charge from the charge port toward the UE via the connector. 2. The headset of claim 1, further comprising a control node coupled to the charge node and the earphones, the control node to convert audio data from a digital domain to an analog domain for use by the earphones. 3. The headset of claim 2, wherein the upstream port further couples headset charge from the UE toward the control node via the downstream port. 4. The headset of claim 3, wherein the charge node is configured to simultaneously couple the UE charge toward the UE and headset charge from the UE toward the control node. 5. The headset of claim 2, wherein the charge node is configured to simultaneously couple audio data from the UE toward the control node via the upstream port and the downstream port and program data between the charge source and the UE via the charge port and the upstream port. 6. The headset of claim 2, wherein the control node includes a programmable button configured to transmit user selected control data toward the UE to control the UE upon activation. 7. The headset of claim 1, further comprising a charge controller coupled to the charge port and the upstream port, the charge controller configured to manage charge amplitude between the charge source and the UE. 8. The headset of claim 1, wherein the connector is a Lightning connector, a universal serial bus (USB) version A (USB-A) connector, a USB version B (USB-B) connector, a USB version C (USB-C) connector, a USB version D (USB-D) connector, a USB micro connector, a USB mini connector, or combinations thereof. 9. 
A charge node comprising: a downstream port to couple audio data toward one or more earphones; an upstream port coupled to the downstream port, the upstream port to couple the audio data to the downstream port; and a charge port coupled to the upstream port, the charge port to couple user equipment (UE) charge toward a UE via the upstream port. 10. The charge node of claim 9, wherein the upstream port is further to couple headset charge from the UE toward a control node via the downstream port. 11. The charge node of claim 10, wherein the upstream port simultaneously couples the headset charge from the UE and the UE charge toward the UE. 12. The charge node of claim 9, wherein the charge port is further to couple program data toward the UE via the upstream port. 13. The charge node of claim 12, wherein the upstream port simultaneously communicates audio data to the downstream port and communicates program data with the charge port. 14. The charge node of claim 9, wherein the upstream port is configured to couple to the UE via a Lightning connector, a universal serial bus (USB) version A (USB-A) connector, a USB version B (USB-B) connector, a USB version C (USB-C) connector, a USB version D (USB-D) connector, a USB micro connector, a USB mini connector, or combinations thereof. 15. A method comprising: coupling, via an upstream port, audio data toward a downstream port; coupling, via the downstream port, audio data toward one or more earphones; and coupling, via a charge port and the upstream port, user equipment (UE) charge from a charge source to a UE. 16. The method of claim 15, further comprising coupling headset charge from the UE toward a control node via the upstream port and the downstream port while coupling the UE charge from the charge source to the UE. 17. The method of claim 15, further comprising coupling program data from the charge source to the UE via the charge port and the upstream port while coupling the audio data toward the earphones. 18. 
The method of claim 15, further comprising converting the audio data from a digital domain to an analog domain by a control node. 19. The method of claim 18, further comprising transmitting user selected control data toward the UE to control the UE upon activation of a programmable button at the control node. 20. The method of claim 15, wherein the UE charge and the audio data are coupled from the UE via a Lightning connector, a universal serial bus (USB) version A (USB-A) connector, a USB version B (USB-B) connector, a USB version C (USB-C) connector, a USB version D (USB-D) connector, a USB micro connector, a USB mini connector, or combinations thereof.
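The three-port charge node of claims 9-14 is essentially a routing element: audio data entering the upstream port is forwarded out the downstream port toward the earphones, while charge entering the charge port is forwarded out the upstream port toward the UE. A toy Python model of that routing follows; the class and method names are invented, and this models data flow only, not electrical behavior.

```python
class ChargeNode:
    """Toy routing model of the claimed charge node (illustrative only)."""

    def __init__(self):
        self.to_earphones = []  # what the downstream port has emitted
        self.to_ue = []         # what the upstream port has emitted toward the UE

    def upstream_receive(self, audio_data):
        # Upstream port couples audio data toward the earphones
        # via the downstream port.
        self.to_earphones.append(("audio", audio_data))

    def charge_receive(self, ue_charge):
        # Charge port couples UE charge from the charge source toward
        # the UE via the upstream port (and the headset connector).
        self.to_ue.append(("charge", ue_charge))


# Illustrative use: audio and charge traverse the node on independent paths,
# which is how the claims support simultaneous listening and charging.
node = ChargeNode()
node.upstream_receive("pcm-frame")
node.charge_receive(5.0)
```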
The disclosure includes a headset including one or more earphones and a connector configured to couple data and charge between the headset and a user equipment (UE). The headset also includes a charge node. The charge node includes a charge port for receiving UE charge from a charge source. The charge node also includes a downstream port for coupling audio data toward the earphones. The charge node further includes an upstream port for coupling the audio data toward the earphones via the downstream port and coupling UE charge from the charge port toward the UE via the connector.1. A headset comprising: one or more earphones; a connector configured to couple data and charge between the headset and a user equipment (UE); and a charge node including: a charge port for receiving UE charge from a charge source, a downstream port for coupling audio data toward the earphones, and an upstream port for coupling the audio data toward the earphones via the downstream port and coupling UE charge from the charge port toward the UE via the connector. 2. The headset of claim 1, further comprising a control node coupled to the charge node and the earphones, the control node to convert audio data from a digital domain to an analog domain for use by the earphones. 3. The headset of claim 2, wherein the upstream port further couples headset charge from the UE toward the control node via the downstream port. 4. The headset of claim 3, wherein the charge node is configured to simultaneously couple the UE charge toward the UE and headset charge and from the UE toward the control node. 5. The headset of claim 2, wherein the charge node is configured to simultaneously couple audio data from the UE toward the control node via the upstream port and the downstream port and program data between the charge source and the UE via the charge port and the upstream port. 6. 
The headset of claim 2, wherein the control node includes a programmable button configured transmit user selected control data toward the UE to control the UE upon activation. 7. The headset of claim 1, further comprising a charge controller coupled to the charge port and the upstream port, the charge controller configured to manage charge amplitude between the charge source and the UE. 8. The headset of claim 1, wherein the connector is a Lightning connector, a universal serial bus (USB) version A (USB-A) connector, a USB version B (USB-B) connector, a USB version C (USB-C) connector, a USB version D (USB-D) connector, a USB micro connector, a USB mini connector, or combinations thereof. 9. A charge node comprising: a downstream port to couple audio data toward one or more earphones; an upstream coupled to the downstream port, the upstream port to couple the audio data to the downstream port; a charge port coupled to the upstream port, the charge port to couple user equipment (UE) charge toward a UE via the upstream port. 10. The charge node of claim 9, wherein the upstream port is further to couple headset charge from the UE toward a control node via the downstream port. 11. The charge node of claim 10, wherein the upstream port simultaneously couples the headset charge from the UE and the UE charge toward the UE. 12. The charge node of claim 9, wherein the charge port is further to couple program data toward the UE via the upstream port. 13. The charge node of claim 12, wherein the upstream port simultaneously communicates audio data to the downstream port and communicates program data with the charge port. 14. The charge node of claim 9, wherein the upstream port is configured to couple to the UE via a Lightning connector, a universal serial bus (USB) version A (USB-A) connector, a USB version B (USB-B) connector, a USB version C (USB-C) connector, a USB version D (USB-D) connector, a USB micro connector, a USB mini connector, or combinations thereof. 15. 
A method comprising: coupling, via an upstream port, audio data toward a downstream port; coupling, via the downstream port, audio data toward one or more earphones; and coupling, via a charge port and the upstream port, user equipment (UE) charge from a charge source to a UE. 16. The method of claim 15, further comprising coupling headset charge from the UE toward a control node via the upstream port and the downstream port while coupling the UE charge from the charge source to the UE. 17. The method of claim 15, further comprising coupling program data from the charge source to the UE via the charge port and the upstream port while coupling the audio data toward the earphones. 18. The method of claim 15, further comprising converting the audio data from a digital domain to an analog domain by a control node. 19. The method of claim 18, further comprising transmitting user selected control data toward the UE to control the UE upon activation of a programmable button at the control node. 20. The method of claim 15, wherein the UE charge and the audio data are coupled from the UE via a Lightning connector, a universal serial bus (USB) version A (USB-A) connector, a USB version B (USB-B) connector, a USB version C (USB-C) connector, a USB version D (USB-D) connector, a USB micro connector, a USB mini connector, or combinations thereof.
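The pass-through behavior recited in the claim-15 method (audio coupled via the upstream and downstream ports toward the earphones while UE charge flows from the charge source through to the UE) might be sketched as a toy data-flow model. The class, method names, and units below are illustrative assumptions, not from the patent.

```python
class ChargeNode:
    """Toy model of the claim-15 method: the upstream port couples
    audio data toward the downstream port (and on to the earphones),
    while the charge port couples UE charge from a charge source
    toward the UE via the upstream port."""

    def __init__(self):
        self.earphone_buffer = []  # audio delivered by the downstream port
        self.ue_charge_mah = 0     # charge passed through to the UE

    def couple_audio(self, frames):
        # upstream port -> downstream port -> earphones
        self.earphone_buffer.extend(frames)

    def couple_charge(self, mah):
        # charge port -> upstream port -> connector -> UE
        self.ue_charge_mah += mah

node = ChargeNode()
node.couple_audio(["f0", "f1"])  # audio keeps flowing...
node.couple_charge(120)          # ...while the UE charges
```

Both couplings can be invoked independently, mirroring the simultaneous charge/audio operation of dependent claims 16 and 17.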
2,600
10,112
10,112
14,303,287
2,643
A mobile testing system for optimizing wireless coverage in a distributed antenna system is disclosed. In some aspects, the mobile testing system includes a measurement receiver that can determine signal levels for respective signals communicated via the distributed antenna system. A processing device of the mobile testing system can identify a subset of the signals by decoding a respective identifier encoded in each of the subset of signals. The identifiers specify that the subset of signals is targeted to at least one coverage zone in which the mobile testing system is located. A subset of signal levels is obtained by the measurement receiver that corresponds to each of the subset of signals. The processing device can generate coverage contour data based on the subset of signal levels that describes signal coverage for at least one coverage zone.
1. A mobile testing system for optimizing wireless coverage in a distributed antenna system, the mobile testing system comprising: a measurement receiver configured to determine a plurality of signal levels for a plurality of signals communicated via the distributed antenna system; a processing device communicatively coupled to the measurement receiver and configured to: identify a subset of signals from the plurality of signals by decoding a respective identifier encoded in each of the subset of signals, wherein the respective identifiers specify that the subset of signals is targeted to at least one coverage zone in which the mobile testing system is located, obtain, from the measurement receiver, a subset of signal levels from the plurality of signal levels, wherein the subset of signal levels corresponds to a respective one of the subset of signals, and generate coverage contour data describing signal coverage throughout the at least one coverage zone, wherein the coverage contour data is generated based on the subset of signal levels. 2. The mobile testing system of claim 1, wherein the processing device is configured to generate the coverage contour data by: determining a plurality of locations within the at least one coverage zone at which the subset of signals are determined by the measurement receiver; correlating each of the plurality of locations to a respective one of the subset of signal levels; and generating the coverage contour data based on correlating the plurality of locations and the subset of signal levels. 3. The mobile testing system of claim 2, further comprising an accelerometer communicatively coupled to the processing device and configured to determine a direction of movement and a velocity of movement of the mobile testing system, wherein the processing device is configured to determine the plurality of locations based on the direction of movement and the velocity of movement determined by the accelerometer. 4. 
The mobile testing system of claim 2, further comprising a global positioning system communicatively coupled to the processing device and configured to generate location data identifying the plurality of locations of the mobile testing system, wherein the processing device is configured to determine the plurality of locations from the location data generated by the global positioning system. 5. The mobile testing system of claim 2, further comprising an input device communicatively coupled to the processing device and configured to receive input data identifying a plurality of positions on a map of the at least one coverage zone; wherein the processing device is configured to determine the plurality of locations based on the input data identifying the plurality of positions on the map. 6. The mobile testing system of claim 1, wherein the processing device is further configured to: identify an additional subset of signals from the plurality of signals by decoding a respective additional identifier encoded in each of the additional subset of signals, wherein the additional identifiers specify that the additional subset of signals is targeted to at least one additional coverage zone overlapping the at least one coverage zone; obtain, from the measurement receiver, an additional subset of signal levels from the plurality of signal levels, wherein the additional subset of signal levels corresponds to a respective one of the additional subset of signals; and generate handover contour data describing signal coverage throughout an overlapping area of the at least one coverage zone and the at least one additional coverage zone, wherein the coverage contour data is generated based on the subset of signal levels and the additional subset of signal levels. 7. 
The mobile testing system of claim 1, wherein the processing device is further configured to: determine at least one modification to at least one remote unit of the distributed antenna system based on the coverage contour data; and generate control information specifying the at least one modification. 8. The mobile testing system of claim 7, wherein the control information comprises at least one control signal and wherein the processing device is further configured to modify the at least one remote unit by causing the at least one control signal to be transmitted to the at least one remote unit. 9. The mobile testing system of claim 7, wherein the at least one modification comprises at least one of: a modification to an output power of the at least one remote unit; a modification to a pattern of a beamformer for the at least one remote unit; and a modification to an antenna tilt of the at least one remote unit. 10. The mobile testing system of claim 1, further comprising a transceiver device configured to communicate with other telecommunication devices via the distributed antenna system. 11. The mobile testing system of claim 1, wherein the processing device is configured to: identify a time interval, wherein the coverage contour data is generated during the time interval; and display the coverage contour data in real time for the time interval. 12. The mobile testing system of claim 1, wherein the mobile testing system is configured to communicate with a remote electrical tilt module communicatively coupled to a remote unit, where the remote electrical tilt module is configured to receive control signals instructing the remote electrical tilt module to modify an antenna tilt of the remote unit using one or more remote actuators and position sensors, where the control signals are based on the coverage contour data. 13. 
The mobile testing system of claim 10, wherein the mobile testing system is configured to communicate with a band translator configured to translate a first frequency band to a second frequency band and further configured to measure the plurality of signal levels and simultaneously decode the plurality of signals in multiple bands. 14. An optimization system for optimizing wireless coverage in a distributed antenna system, the optimization system comprising: a data transceiver disposed in at least one unit of the distributed antenna system, wherein the data transceiver is configured to: encode each of a plurality of test signals with a respective identifier, wherein each identifier identifies at least one coverage zone to which the plurality of test signals is to be transmitted; provide the plurality of test signals to at least one remote unit of the distributed antenna system that is positioned in the at least one coverage zone, wherein the at least one remote unit is configured to transmit the plurality of test signals in the at least one coverage zone; a mobile testing system positioned in the at least one coverage zone, the mobile testing system configured to: determine a plurality of signal levels for a plurality of signals, wherein the plurality of signals includes the plurality of test signals; identify the plurality of test signals based on decoding the identifiers from the plurality of test signals; and generate coverage contour data describing signal coverage throughout the at least one coverage zone, wherein the coverage contour data is generated based on the plurality of signal levels. 15. 
A method comprising: transmitting a plurality of signals associated with a respective identifier via a distributed antenna system; determining a plurality of signal levels for the plurality of signals; identifying a subset of signals from the plurality of signals by decoding the respective identifier encoded in each of the subset of signals, wherein the respective identifiers specify that the subset of signals is targeted to at least one coverage zone in which the mobile testing system is located; obtaining a subset of signal levels from the plurality of signal levels, wherein the subset of signal levels corresponds to a respective one of the subset of signals; and generating coverage contour data describing signal coverage throughout the at least one coverage zone, wherein the coverage contour data is generated based on the subset of signal levels. 16. The method of claim 15, further comprising: transmitting, by a mobile testing system in communication with at least one remote unit of a distributed antenna system, a plurality of uplink signals to the at least one remote unit; monitoring, by the mobile testing system, a gain and an uplink spectrum of the at least one remote unit during the transmitting of the plurality of uplink signals; identifying, by the mobile testing system, a variation in at least one of the gain and the uplink spectrum by the remote unit; and generating, by the mobile testing system, coverage contour data describing the variation in the at least one of the gain and the uplink spectrum. 17. 
The method of claim 15, further comprising: identifying an additional subset of signals from the plurality of signals by decoding a respective additional identifier encoded in each of the additional subset of signals, wherein the additional identifiers specify that the additional subset of signals is targeted to at least one additional coverage zone overlapping the at least one coverage zone; obtaining an additional subset of signal levels from the plurality of signal levels, wherein the additional subset of signal levels corresponds to a respective one of the additional subset of signals; and generating handover contour data describing signal coverage throughout an overlapping area of the at least one coverage zone and the at least one additional coverage zone, wherein the coverage contour data is generated based on the subset of signal levels and the additional subset of signal levels. 18. The method of claim 15, further comprising: modulating a test signal using at least one of identifier data, a frequency shift, or an amplitude of the test signal; decoding identifier data in the test signal; and generating an acknowledgement message in response to receiving the test signal and successfully decoding the identifier data. 19. The method of claim 15, further comprising tuning a propagation model based on a characterization of radio wave propagation as a function of frequency or distance, where the propagation model is associated with the distributed antenna system and is tuned to reduce a number of test routes associated with locations at which the gain and the uplink spectrum are monitored. 20. The method of claim 15, further comprising comparing a prediction of the signal level to the signal level measurements and generating a prediction confidence metric based on the comparison.
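The zone-filtering and location-correlation steps of claims 1 and 2, together with the prediction-confidence comparison of claim 20, can be sketched roughly as follows. The per-measurement record format and the RMSE-based confidence metric are illustrative assumptions; the patent does not specify either.

```python
import math

# Hypothetical record format for one measurement taken by the mobile
# testing system: (x, y) location, decoded zone identifier, level in dBm.
measurements = [
    ((0.0, 0.0), "zone-A", -62.0),
    ((5.0, 0.0), "zone-A", -65.5),
    ((5.0, 5.0), "zone-B", -71.0),
    ((0.0, 5.0), "zone-A", -68.0),
]

def coverage_contour(samples, zone_id):
    """Keep only signals whose decoded identifier targets zone_id,
    then correlate each location with its measured signal level
    (claims 1-2)."""
    return [(loc, level) for loc, zid, level in samples if zid == zone_id]

def prediction_confidence(predicted, measured):
    """Compare predicted signal levels to the measurements (claim 20):
    here an RMSE-based metric mapped so 1.0 means perfect agreement."""
    rmse = math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured))
                     / len(measured))
    return 1.0 / (1.0 + rmse)

contour = coverage_contour(measurements, "zone-A")
conf = prediction_confidence([-61.0, -66.0, -68.5],
                             [lvl for _, lvl in contour])
```

A real implementation would interpolate the correlated (location, level) pairs into contours; this sketch stops at the filtered, correlated data the contours would be built from.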
A mobile testing system for optimizing wireless coverage in a distributed antenna system is disclosed. In some aspects, the mobile testing system includes a measurement receiver that can determine signal levels for respective signals communicated via the distributed antenna system. A processing device of the mobile testing system can identify a subset of the signals by decoding a respective identifier encoded in each of the subset of signals. The identifiers specify that the subset of signals is targeted to at least one coverage zone in which the mobile testing system is located. A subset of signal levels is obtained by the measurement receiver that corresponds to each of the subset of signals. The processing device can generate coverage contour data based on the subset of signal levels that describes signal coverage for at least one coverage zone. 1. A mobile testing system for optimizing wireless coverage in a distributed antenna system, the mobile testing system comprising: a measurement receiver configured to determine a plurality of signal levels for a plurality of signals communicated via the distributed antenna system; a processing device communicatively coupled to the measurement receiver and configured to: identify a subset of signals from the plurality of signals by decoding a respective identifier encoded in each of the subset of signals, wherein the respective identifiers specify that the subset of signals is targeted to at least one coverage zone in which the mobile testing system is located, obtain, from the measurement receiver, a subset of signal levels from the plurality of signal levels, wherein the subset of signal levels corresponds to a respective one of the subset of signals, and generate coverage contour data describing signal coverage throughout the at least one coverage zone, wherein the coverage contour data is generated based on the subset of signal levels. 2. 
The mobile testing system of claim 1, wherein the processing device is configured to generate the coverage contour data by: determining a plurality of locations within the at least one coverage zone at which the subset of signals are determined by the measurement receiver; correlating each of the plurality of locations to a respective one of the subset of signal levels; and generating the coverage contour data based on correlating the plurality of locations and the subset of signal levels. 3. The mobile testing system of claim 2, further comprising an accelerometer communicatively coupled to the processing device and configured to determine a direction of movement and a velocity of movement of the mobile testing system, wherein the processing device is configured to determine the plurality of locations based on the direction of movement and the velocity of movement determined by the accelerometer. 4. The mobile testing system of claim 2, further comprising a global positioning system communicatively coupled to the processing device and configured to generate location data identifying the plurality of locations of the mobile testing system, wherein the processing device is configured to determine the plurality of locations from the location data generated by the global positioning system. 5. The mobile testing system of claim 2, further comprising an input device communicatively coupled to the processing device and configured to receive input data identifying a plurality of positions on a map of the at least one coverage zone; wherein the processing device is configured to determine the plurality of locations based on the input data identifying the plurality of positions on the map. 6. 
The mobile testing system of claim 1, wherein the processing device is further configured to: identify an additional subset of signals from the plurality of signals by decoding a respective additional identifier encoded in each of the additional subset of signals, wherein the additional identifiers specify that the additional subset of signals is targeted to at least one additional coverage zone overlapping the at least one coverage zone; obtain, from the measurement receiver, an additional subset of signal levels from the plurality of signal levels, wherein the additional subset of signal levels corresponds to a respective one of the additional subset of signals; and generate handover contour data describing signal coverage throughout an overlapping area of the at least one coverage zone and the at least one additional coverage zone, wherein the coverage contour data is generated based on the subset of signal levels and the additional subset of signal levels. 7. The mobile testing system of claim 1, wherein the processing device is further configured to: determine at least one modification to at least one remote unit of the distributed antenna system based on the coverage contour data; and generate control information specifying the at least one modification. 8. The mobile testing system of claim 7, wherein the control information comprises at least one control signal and wherein the processing device is further configured to modify the at least one remote unit by causing the at least one control signal to be transmitted to the at least one remote unit. 9. The mobile testing system of claim 7, wherein the at least one modification comprises at least one of: a modification to an output power of the at least one remote unit; a modification to a pattern of a beamformer for the at least one remote unit; and a modification to an antenna tilt of the at least one remote unit. 10. 
The mobile testing system of claim 1, further comprising a transceiver device configured to communicate with other telecommunication devices via the distributed antenna system. 11. The mobile testing system of claim 1, wherein the processing device is configured to: identify a time interval, wherein the coverage contour data is generated during the time interval; and display the coverage contour data in real time for the time interval. 12. The mobile testing system of claim 1, wherein the mobile testing system is configured to communicate with a remote electrical tilt module communicatively coupled to a remote unit, where the remote electrical tilt module is configured to receive control signals instructing the remote electrical tilt module to modify an antenna tilt of the remote unit using one or more remote actuators and position sensors, where the control signals are based on the coverage contour data. 13. The mobile testing system of claim 10, wherein the mobile testing system is configured to communicate with a band translator configured to translate a first frequency band to a second frequency band and further configured to measure the plurality of signal levels and simultaneously decode the plurality of signals in multiple bands. 14. 
An optimization system for optimizing wireless coverage in a distributed antenna system, the optimization system comprising: a data transceiver disposed in at least one unit of the distributed antenna system, wherein the data transceiver is configured to: encode each of a plurality of test signals with a respective identifier, wherein each identifier identifies at least one coverage zone to which the plurality of test signals is to be transmitted; provide the plurality of test signals to at least one remote unit of the distributed antenna system that is positioned in the at least one coverage zone, wherein the at least one remote unit is configured to transmit the plurality of test signals in the at least one coverage zone; a mobile testing system positioned in the at least one coverage zone, the mobile testing system configured to: determine a plurality of signal levels for a plurality of signals, wherein the plurality of signals includes the plurality of test signals; identify the plurality of test signals based on decoding the identifiers from the plurality of test signals; and generate coverage contour data describing signal coverage throughout the at least one coverage zone, wherein the coverage contour data is generated based on the plurality of signal levels. 15. 
A method comprising: transmitting a plurality of signals associated with a respective identifier via a distributed antenna system; determining a plurality of signal levels for the plurality of signals; identifying a subset of signals from the plurality of signals by decoding the respective identifier encoded in each of the subset of signals, wherein the respective identifiers specify that the subset of signals is targeted to at least one coverage zone in which the mobile testing system is located; obtaining a subset of signal levels from the plurality of signal levels, wherein the subset of signal levels corresponds to a respective one of the subset of signals; and generating coverage contour data describing signal coverage throughout the at least one coverage zone, wherein the coverage contour data is generated based on the subset of signal levels. 16. The method of claim 15, further comprising: transmitting, by a mobile testing system in communication with at least one remote unit of a distributed antenna system, a plurality of uplink signals to the at least one remote unit; monitoring, by the mobile testing system, a gain and an uplink spectrum of the at least one remote unit during the transmitting of the plurality of uplink signals; identifying, by the mobile testing system, a variation in at least one of the gain and the uplink spectrum by the remote unit; and generating, by the mobile testing system, coverage contour data describing the variation in the at least one of the gain and the uplink spectrum. 17. 
The method of claim 15, further comprising: identifying an additional subset of signals from the plurality of signals by decoding a respective additional identifier encoded in each of the additional subset of signals, wherein the additional identifiers specify that the additional subset of signals is targeted to at least one additional coverage zone overlapping the at least one coverage zone; obtaining an additional subset of signal levels from the plurality of signal levels, wherein the additional subset of signal levels corresponds to a respective one of the additional subset of signals; and generating handover contour data describing signal coverage throughout an overlapping area of the at least one coverage zone and the at least one additional coverage zone, wherein the coverage contour data is generated based on the subset of signal levels and the additional subset of signal levels. 18. The method of claim 15, further comprising: modulating a test signal using at least one of identifier data, a frequency shift, or an amplitude of the test signal; decoding identifier data in the test signal; and generating an acknowledgement message in response to receiving the test signal and successfully decoding the identifier data. 19. The method of claim 15, further comprising tuning a propagation model based on a characterization of radio wave propagation as a function of frequency or distance, where the propagation model is associated with the distributed antenna system and is tuned to reduce a number of test routes associated with locations at which the gain and the uplink spectrum are monitored. 20. The method of claim 15, further comprising comparing a prediction of the signal level to the signal level measurements and generating a prediction confidence metric based on the comparison.
2,600
10,113
10,113
15,275,645
2,611
In an aspect, an update unit can evaluate condition(s) in an update request and update one or more memory locations based on the condition evaluation. The update unit can operate atomically to determine whether to effect the update and to make the update. Updates can include one or more of incrementing and swapping values. An update request may specify one of a pre-determined set of update types. Some update types may be conditional and others unconditional. The update unit can be coupled to receive update requests from a plurality of computation units. The computation units may not have privileges to directly generate write requests to be effected on at least some of the locations in memory. The computation units can be fixed function circuitry operating on inputs received from programmable computation elements. The update unit may include a buffer to hold received update requests.
1. A method of graphics processing of a 3-D scene using ray tracing, comprising: executing a thread of computation in a programmable computation unit, wherein the executing of the thread comprises executing an instruction, from an instruction set defining instructions that can be used to program the programmable computation unit, the instruction causing issuance of an operation code including data that identifies a ray, one or more shapes, and an operation to be performed for the ray with respect to the one or more shapes, wherein the operation to be performed is selected from a pre-determined set of operations; buffering the operation code in a non-transitory memory; and reading the operation code and performing the operation specified by the operation code for the ray, within a logic module that executes independently of the programmable computation unit and is capable of performing operations consisting of the operations from the pre-determined set of operations. 2. The machine-implemented method of graphics processing of claim 1, wherein the operation to be performed for the ray comprises multiple steps, which are performed atomically with respect to the executing thread of computation. 3. The machine-implemented method of graphics processing of claim 1, wherein the pre-determined set of operations comprises an intersection testing operation for both a primitive forming geometry located in a 3-D scene being ray traced and an element of an acceleration structure located in the 3-D scene. 4. The machine-implemented method of graphics processing of claim 1, wherein the pre-determined set of operations comprises an operation to identify elements of a dataset that are associated with 3-D positions and which satisfy at least one additional constraint specified in the operation code. 5. 
The machine-implemented method of graphics processing of claim 4, wherein the at least one additional constraint comprises at least one of: a maximum distance to a locus specified by the operation code; and requiring return of a maximum of k nearest elements to a locus specified by the operation code. 6. The machine-implemented method of graphics processing of claim 1, wherein the pre-determined set of operations comprises an operation to test a set of geometry elements with a ray. 7. The machine-implemented method of graphics processing of claim 6, wherein the set of geometry elements are elements of an acceleration structure abstracting geometry located in the 3-D scene. 8. The machine-implemented method of graphics processing of claim 1, further comprising outputting a result to a selected destination of the result according to a type of the shape that was tested. 9. The machine-implemented method of graphics processing of claim 1, wherein the shape is one of an element of scene geometry and an acceleration structure shape, and the method further comprises outputting a result to an update unit, and in the update unit, accessing a memory location storing a current closest intersection distance for a shape identified in the result, and to compare that current closest intersection distance with an intersection distance provided with the result, and to update the current closest intersection, in response to the intersection distance provided with the result being closer than the current closest intersection. 10. The machine-implemented method of graphics processing of claim 9, wherein the result is outputted in a format comprising a first memory address at which the current closest intersection is stored and a second memory address storing an identifier of the shape corresponding to that current closest intersection. 11. 
The machine-implemented method of graphics processing of claim 1, wherein types of the shapes comprise a geometry primitive and an acceleration structure shape, and the method further comprises selecting a destination for a result, from destinations comprising a collector unit and an update unit, and in the collector unit, updating status of traversal of the ray indicated in the result within an acceleration structure comprising an organization of acceleration structure shapes that subdivide a 3-D scene from which an image is being rendered. 12. The machine-implemented method of graphics processing of claim 1, further comprising determining, in a scheduler, a set of rays to be tested for intersection with a shape, and setting up executing threads of computation in a programmable computation cluster by distributing rays of the set of rays among a plurality of computation units that will execute the threads and in so doing issue operation codes that are buffered and provided to the logic module, wherein each respective operation code specifies an operation selected from the pre-determined set of operations. 13. 
An apparatus for rendering images from descriptions of 3-D scenes, comprising: a programmable computation unit configured to execute a thread of instructions, the instructions being from an instruction set defining instructions that can be used to program the programmable computation unit, the thread of instructions comprising an instruction capable of causing issuance of an operation code including data that identifies a ray, one or more shapes, and an operation to be performed for the ray with respect to the one or more shapes, wherein the operation to be performed is selected from a pre-determined set of operations; an interconnect configured to receive the operation code from the programmable computation unit and buffer the operation code in a non-transitory memory; and a logic module that executes independently of the programmable computation unit and is capable of performing operations consisting of the operations from the pre-determined set of operations, the logic module configured for reading the buffered operation code and performing the operation specified by the operation code for the ray and the one or more shapes. 14. The apparatus for rendering images from descriptions of 3-D scenes of claim 13, wherein the logic module is configured for outputting a result of the operation to an update unit configured to atomically perform a plurality of steps, the steps comprising evaluating a condition and altering a memory location in dependence on the evaluation. 15. The apparatus for rendering images from descriptions of 3-D scenes of claim 13, wherein the logic module is further configured to output update requests, each defining a potential change to be made to a specified shared memory location, to an update unit that atomically decides whether any change is to be made to data stored in the shared memory location and if so, then changes the data in the shared memory location accordingly. 16. 
The apparatus for rendering images from descriptions of 3-D scenes of claim 15, wherein the update unit is further configured, in order to decide whether any change is to be made to data stored in the shared memory location, to atomically read from a first memory location, and use data read from the first memory location to make the decision whether any change is to be made to the data stored in the shared memory location, and responsively to change the data stored in the specified shared memory location. 17. The apparatus for rendering images from descriptions of 3-D scenes of claim 13, further comprising a collector unit configured to identify a group of operations, from the pre-determined set of operations, which can be executed by the logic module and to submit the operations of that group to a buffering element coupled to the logic module. 18. The apparatus for rendering images from descriptions of 3-D scenes of claim 17, wherein the collector unit is further configured to identify a data element stored in a memory to be used during executing of the group of operations, to generate a pre-fetch read request and submit the pre-fetch read request to a memory controller, to bring the data element from the memory to a memory closer to the logic module. 19. The apparatus for rendering images from descriptions of 3-D scenes of claim 18, wherein the pre-fetch read request includes data indicative of a number of read requests to be expected for the data element, and further comprising eviction logic that evicts the data element according to a process that incorporates the indicated number of read requests. 20. 
An apparatus for performing computation, comprising: a plurality of computation units, each capable of executing a thread of programmatic control on data and of producing outputs, the outputs comprising an update request pertaining to a memory location that is readable by the plurality of computation units, wherein the update request indicates a condition; and an update unit coupled with the plurality of computation units and configured to receive data indicative of the update request, and to evaluate the condition to determine whether data in the memory location is to be changed to effect the update request, and if so, then to change contents of the memory location atomically with the evaluation of the condition.
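The conditional-update mechanism in the apparatus claims above (an update unit that atomically evaluates a condition and, only if it holds, rewrites a shared memory location) can be pictured with a small Python sketch. The class and slot layout below are illustrative assumptions, not taken from the patent; a lock stands in for the hardware's atomicity guarantee:

```python
import threading

class UpdateUnit:
    """Illustrative update unit: atomically keeps the closest ray hit.

    Each shared slot stores (closest_distance, shape_id). An update
    request only takes effect if its distance beats the stored one;
    the test and the write happen under one lock, i.e. atomically
    with respect to the requesting computation units.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._slots = {}  # ray_id -> (distance, shape_id)

    def request_update(self, ray_id, distance, shape_id):
        with self._lock:  # evaluate condition and mutate atomically
            current = self._slots.get(ray_id, (float("inf"), None))
            if distance < current[0]:          # the condition
                self._slots[ray_id] = (distance, shape_id)
                return True                    # change was made
            return False                       # request rejected

    def closest(self, ray_id):
        with self._lock:
            return self._slots.get(ray_id)
```

Several worker threads can submit candidate intersections concurrently and only the nearest survives, mirroring the compare-and-replace of the "current closest intersection" described in the method claims.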
In an aspect, an update unit can evaluate condition(s) in an update request and update one or more memory locations based on the condition evaluation. The update unit can operate atomically to determine whether to effect the update and to make the update. Updates can include one or more of incrementing and swapping values. An update request may specify one of a pre-determined set of update types. Some update types may be conditional and others unconditional. The update unit can be coupled to receive update requests from a plurality of computation units. The computation units may not have privileges to directly generate write requests to be effected on at least some of the locations in memory. The computation units can be fixed function circuitry operating on inputs received from programmable computation elements. The update unit may include a buffer to hold received update requests.1. A method of graphics processing of a 3-D scene using ray tracing, comprising: executing a thread of computation in a programmable computation unit, wherein the executing of the thread comprises executing an instruction, from an instruction set defining instructions that can be used to program the programmable computation unit, the instruction causing issuance of an operation code including data that identifies a ray, one or more shapes, and an operation to be performed for the ray with respect to the one or more shapes, wherein the operation to be performed is selected from a pre-determined set of operations; buffering the operation code in a non-transitory memory; and reading the operation code and performing the operation specified by the operation code for the ray, within a logic module that executes independently of the programmable computation unit and is capable of performing operations consisting of the operations from the pre-determined set of operations. 2. 
The machine-implemented method of graphics processing of claim 1, wherein the operation to be performed for the ray comprises multiple steps, which are performed atomically with respect to the executing thread of computation. 3. The machine-implemented method of graphics processing of claim 1, wherein the pre-determined set of operations comprises an intersection testing operation for both a primitive forming geometry located in a 3-D scene being ray traced and an element of an acceleration structure located in the 3-D scene. 4. The machine-implemented method of graphics processing of claim 1, wherein the pre-determined set of operations comprises an operation to identify elements of a dataset that are associated with 3-D positions and which satisfy at least one additional constraint specified in the operation code. 5. The machine-implemented method of graphics processing of claim 4, wherein the at least one additional constraint comprises at least one of: a maximum distance to a locus specified by the operation code; and requiring return of a maximum of k nearest elements to a locus specified by the operation code. 6. The machine-implemented method of graphics processing of claim 1, wherein the pre-determined set of operations comprises an operation to test a set of geometry elements with a ray. 7. The machine-implemented method of graphics processing of claim 6, wherein the set of geometry elements are elements of an acceleration structure abstracting geometry located in the 3-D scene. 8. The machine-implemented method of graphics processing of claim 1, further comprising outputting a result to a selected destination of the result according to a type of the shape that was tested. 9. 
The machine-implemented method of graphics processing of claim 1, wherein the shape is one of an element of scene geometry and an acceleration structure shape, and the method further comprises outputting a result to an update unit, and in the update unit, accessing a memory location storing a current closest intersection distance for a shape identified in the result, and to compare that current closest intersection distance with an intersection distance provided with the result, and to update the current closest intersection, in response to the intersection distance provided with the result being closer than the current closest intersection. 10. The machine-implemented method of graphics processing of claim 9, wherein the result is outputted in a format comprising a first memory address at which the current closest intersection is stored and a second memory address storing an identifier of the shape corresponding to that current closest intersection. 11. The machine-implemented method of graphics processing of claim 1, wherein types of the shapes comprise a geometry primitive and an acceleration structure shape, and the method further comprises selecting a destination for a result, from destinations comprising a collector unit and an update unit, and in the collector unit, updating status of traversal of the ray indicated in the result within an acceleration structure comprising an organization of acceleration structure shapes that subdivide a 3-D scene from which an image is being rendered. 12. 
The machine-implemented method of graphics processing of claim 1, further comprising determining, in a scheduler, a set of rays to be tested for intersection with a shape, and setting up executing threads of computation in a programmable computation cluster by distributing rays of the set of rays among a plurality of computation units that will execute the threads and in so doing issue operation codes that are buffered and provided to the logic module, wherein each respective operation code specifies an operation selected from the pre-determined set of operations. 13. An apparatus for rendering images from descriptions of 3-D scenes, comprising: a programmable computation unit configured to execute a thread of instructions, the instructions being from an instruction set defining instructions that can be used to program the programmable computation unit, the thread of instructions comprising an instruction capable of causing issuance of an operation code including data that identifies a ray, one or more shapes, and an operation to be performed for the ray with respect to the one or more shapes, wherein the operation to be performed is selected from a pre-determined set of operations; an interconnect configured to receive the operation code from the programmable computation unit and buffer the operation code in a non-transitory memory; and a logic module that executes independently of the programmable computation unit and is capable of performing operations consisting of the operations from the pre-determined set of operations, the logic module configured for reading the buffered operation code and performing the operation specified by the operation code for the ray and the one or more shapes. 14. 
The apparatus for rendering images from descriptions of 3-D scenes of claim 13, wherein the logic module is configured for outputting a result of the operation to an update unit configured to atomically perform a plurality of steps, the steps comprising evaluating a condition and altering a memory location in dependence on the evaluation. 15. The apparatus for rendering images from descriptions of 3-D scenes of claim 13, wherein the logic module is further configured to output update requests, each defining a potential change to be made to a specified shared memory location, to an update unit that atomically decides whether any change is to be made to data stored in the shared memory location and if so, then changes the data in the shared memory location accordingly. 16. The apparatus for rendering images from descriptions of 3-D scenes of claim 15, wherein the update unit is further configured, in order to decide whether any change is to be made to data stored in the shared memory location, to atomically read from a first memory location, and use data read from the first memory location to make the decision whether any change is to be made to the data stored in the shared memory location, and responsively to change the data stored in the specified shared memory location. 17. The apparatus for rendering images from descriptions of 3-D scenes of claim 13, further comprising a collector unit configured to identify a group of operations, from the pre-determined set of operations, which can be executed by the logic module and to submit the operations of that group to a buffering element coupled to the logic module. 18. 
The apparatus for rendering images from descriptions of 3-D scenes of claim 17, wherein the collector unit is further configured to identify a data element stored in a memory to be used during executing of the group of operations, to generate a pre-fetch read request and submit the pre-fetch read request to a memory controller, to bring the data element from the memory to a memory closer to the logic module. 19. The apparatus for rendering images from descriptions of 3-D scenes of claim 18, wherein the pre-fetch read request includes data indicative of a number of read requests to be expected for the data element, and further comprising eviction logic that evicts the data element according to a process that incorporates the indicated number of read requests. 20. An apparatus for performing computation, comprising: a plurality of computation units, each capable of executing a thread of programmatic control on data and of producing outputs, the outputs comprising an update request pertaining to a memory location that is readable by the plurality of computation units, wherein the update request indicates a condition; and an update unit coupled with the plurality of computation units and configured to receive data indicative of the update request, and to evaluate the condition to determine whether data in the memory location is to be changed to effect the update request, and if so, then to change contents of the memory location atomically with the evaluation of the condition.
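Claim 13's split between a programmable unit that issues operation codes and an independent logic module that consumes them from a buffer can be sketched as a producer/consumer queue. The op-code layout and the ray-box slab test below are assumptions for illustration, not the patent's actual encoding:

```python
from collections import namedtuple
from queue import Queue

# Assumed op-code layout: which ray, which shape, which fixed-function op.
OpCode = namedtuple("OpCode", "ray box op")

def ray_box_test(origin, direction, lo, hi):
    """Slab test: does the ray hit the axis-aligned box [lo, hi]?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, direction, lo, hi):
        t1, t2 = (l - o) / d, (h - o) / d
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmax >= tmin

OPS = {"TEST_BOX": ray_box_test}  # the pre-determined operation set

op_buffer = Queue()  # stands in for the interconnect's buffering memory

def programmable_thread():
    # The executing thread issues an op code instead of testing rays itself.
    op_buffer.put(OpCode(((-2.0, -2.0, -2.0), (1.0, 1.0, 1.0)),
                         ((-1.0, -1.0, -1.0), (1.0, 1.0, 1.0)), "TEST_BOX"))

def logic_module_drain():
    # The logic module reads buffered op codes and performs each operation.
    results = []
    while not op_buffer.empty():
        code = op_buffer.get()
        origin, direction = code.ray
        lo, hi = code.box
        results.append(OPS[code.op](origin, direction, lo, hi))
    return results
```

The point of the split is that the thread never blocks on the intersection test; it enqueues work drawn from a fixed operation set and moves on.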
2,600
10,114
10,114
15,600,067
2,641
A device for determining at least one position of a mobile terminal includes at least one memory apparatus, a magnetometer sensor unit, a classification unit, and a position-determining unit to determine the position of the mobile terminal. The classification unit is configured to determine states, in particular operating states, of at least one electric motor and/or a vehicle driven by means of at least one electric motor using the magnetometer sensor data. The classification unit is also configured to store the determined states in the at least one memory apparatus. The position-determining unit reads out the states from the at least one memory apparatus and determines the at least one position of the mobile terminal with the help of the states.
1. A device for determining at least one position of a mobile terminal, in particular a smart phone, comprising: at least one memory apparatus; a magnetometer sensor unit to output magnetometer sensor data; a classification unit; a position-determining unit to determine the position of the mobile terminal, wherein the classification unit is configured to a) determine states, in particular operating states, of at least one electric motor and/or a vehicle driven by means of at least one electric motor using the magnetometer sensor data, and b) store the determined states in the at least one memory apparatus; and the position-determining unit reads out the states from the at least one memory apparatus and determines the at least one position of the mobile terminal with the help of the states. 2. The device according to claim 1, wherein the position-determining unit is developed to use additional signals to determine the at least one position of the mobile terminal, in particular signals of GSM towers and signals of WiFi access points. 3. The device according to claim 2, wherein the position-determining unit is designed to calculate from the additional signal data a degree for the quality of the determined position. 4. The device according to claim 3, wherein the position-determining unit determines the position using a sequential Monte Carlo method and/or a Dynamic Bayes Network and/or a Kalman filter. 5. The device according to claim 4, wherein the classification unit determines the state of the electric motor and/or the vehicle driven by means of an electric motor using a support vector machine and/or a linear discriminant analysis for classification. 6. The device according to claim 5, wherein the classification unit determines the states from a finite quantity of states with a cardinality of less than 5. 7. 
The device according to claim 6, wherein the classification unit is developed to determine at least one first motor state and at least one second motor state, with the first state indicating that a drive voltage is applied at the electric motor and/or the second state indicating that no drive voltage is applied at the electric motor. 8. The device according to claim 7, wherein the classification unit is developed to determine at least one first field state and at least one second field state, with the first field state indicating that the measured values are below a threshold value, and the second state indicating that the values are above a threshold value. 9. The device according to claim 8, wherein the classification unit is developed to determine at least one first vehicle state and at least one second vehicle state, with the first vehicle state indicating that the vehicle is accelerating and/or the second indicating that the vehicle is standing still. 10. The device according to claim 9, wherein the mobile terminal is developed to store magnetometer sensor data and meta-information, in particular timestamps, wherein the meta-information is to be related to the magnetometer sensor data. 11. The device according to claim 10, wherein the position-determining unit for determining the at least one position of the mobile terminal receives network data that represent a network plan, comprising at least a multitude of stations; connections between the stations; and optionally: distances between the stations; and travel time between the stations. 12. The device according to claim 11, wherein the at least one memory apparatus stores data that provide a characteristic signal path, and that the classification unit compares said data to the magnetometer sensor data to determine the states. 13. The device according to claim 12, wherein the at least one memory apparatus stores polynomial coefficients for the representation of characteristic signal paths. 14. 
A method for determining the position of a mobile terminal, comprising the acts of: detecting of magnetic and/or electric field data of an electric motor; storing the magnetic and/or electric field data in at least one memory apparatus; classifying a state of the electric motor or a vehicle driven by means of an electric motor; storing said states in the at least one memory apparatus; and determining a position of the mobile terminal with the help of the states stored in the at least one memory apparatus. 15. A computer-readable storage medium storing executable instructions that when executed prompt a computer to implement the method according to claim 14.
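The method claims reduce to: classify a motor/vehicle state from field measurements, then turn the state sequence into a position on a known network of stations. A deliberately simple sketch, where the threshold classifier and stop-counting rule are assumptions (the claims also contemplate SVM/LDA classifiers and Monte Carlo or Kalman position filters):

```python
MOTOR_ON, MOTOR_OFF = "motor_on", "motor_off"   # finite state set, cardinality < 5

def classify(samples, threshold=40.0):
    """Map magnetometer magnitudes (assumed units, e.g. uT) to motor states."""
    return [MOTOR_ON if s > threshold else MOTOR_OFF for s in samples]

def position_from_states(states, stations):
    """Read each motor_on -> motor_off transition as arriving at the
    next station of an (assumed linear) network plan."""
    stops = sum(1 for prev, cur in zip(states, states[1:])
                if prev == MOTOR_ON and cur == MOTOR_OFF)
    return stations[min(stops, len(stations) - 1)]
```

With two bursts of motor activity in the magnetometer trace, the sketch infers two completed hops along the station list.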
A device for determining at least one position of a mobile terminal includes at least one memory apparatus, a magnetometer sensor unit, a classification unit, and a position-determining unit to determine the position of the mobile terminal. The classification unit is configured to determine states, in particular operating states, of at least one electric motor and/or a vehicle driven by means of at least one electric motor using the magnetometer sensor data. The classification unit is also configured to store the determined states in the at least one memory apparatus. The position-determining unit reads out the states from the at least one memory apparatus and determines the at least one position of the mobile terminal with the help of the states.1. A device for determining at least one position of a mobile terminal, in particular a smart phone, comprising: at least one memory apparatus; a magnetometer sensor unit to output magnetometer sensor data; a classification unit; a position-determining unit to determine the position of the mobile terminal, wherein the classification unit is configured to a) determine states, in particular operating states, of at least one electric motor and/or a vehicle driven by means of at least one electric motor using the magnetometer sensor data, and b) store the determined states in the at least one memory apparatus; and the position-determining unit reads out the states from the at least one memory apparatus and determines the at least one position of the mobile terminal with the help of the states. 2. The device according to claim 1, wherein the position-determining unit is developed to use additional signals to determine the at least one position of the mobile terminal, in particular signals of GSM towers and signals of WiFi access points. 3. The device according to claim 2, wherein the position-determining unit is designed to calculate from the additional signal data a degree for the quality of the determined position. 4. 
The device according to claim 3, wherein the position-determining unit determines the position using a sequential Monte Carlo method and/or a Dynamic Bayes Network and/or a Kalman filter. 5. The device according to claim 4, wherein the classification unit determines the state of the electric motor and/or the vehicle driven by means of an electric motor using a support vector machine and/or a linear discriminant analysis for classification. 6. The device according to claim 5, wherein the classification unit determines the states from a finite quantity of states with a cardinality of less than 5. 7. The device according to claim 6, wherein the classification unit is developed to determine at least one first motor state and at least one second motor state, with the first state indicating that a drive voltage is applied at the electric motor and/or the second state indicating that no drive voltage is applied at the electric motor. 8. The device according to claim 7, wherein the classification unit is developed to determine at least one first field state and at least one second field state, with the first field state indicating that the measured values are below a threshold value, and the second state indicating that the values are above a threshold value. 9. The device according to claim 8, wherein the classification unit is developed to determine at least one first vehicle state and at least one second vehicle state, with the first vehicle state indicating that the vehicle is accelerating and/or the second indicating that the vehicle is standing still. 10. The device according to claim 9, wherein the mobile terminal is developed to store magnetometer sensor data and meta-information, in particular timestamps, wherein the meta-information is to be related to the magnetometer sensor data. 11. 
The device according to claim 10, wherein the position-determining unit for determining the at least one position of the mobile terminal receives network data that represent a network plan, comprising at least a multitude of stations; connections between the stations; and optionally: distances between the stations; and travel time between the stations. 12. The device according to claim 11, wherein the at least one memory apparatus stores data that provide a characteristic signal path, and that the classification unit compares said data to the magnetometer sensor data to determine the states. 13. The device according to claim 12, wherein the at least one memory apparatus stores polynomial coefficients for the representation of characteristic signal paths. 14. A method for determining the position of a mobile terminal, comprising the acts of: detecting of magnetic and/or electric field data of an electric motor; storing the magnetic and/or electric field data in at least one memory apparatus; classifying a state of the electric motor or a vehicle driven by means of an electric motor; storing said states in the at least one memory apparatus; and determining a position of the mobile terminal with the help of the states stored in the at least one memory apparatus. 15. A computer-readable storage medium storing executable instructions that when executed prompt a computer to implement the method according to claim 14.
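Claim 4 names a Kalman filter among the usable estimators. A scalar version, with made-up noise parameters and a static motion model, shows the predict/update cycle such a position-determining unit could run over observations derived from the states and additional signals:

```python
def kalman_1d(z_seq, x0=0.0, p0=1.0, q=0.01, r=0.5):
    """Scalar Kalman filter: x is position along the line, z_seq the
    noisy position observations. q = process noise, r = measurement
    noise (both assumed values for illustration)."""
    x, p = x0, p0
    estimates = []
    for z in z_seq:
        p = p + q                # predict (static motion model)
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Fed a stream of consistent measurements, the estimate converges toward them while the gain settles; this is the "degree for the quality of the determined position" made concrete as the filter covariance.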
2,600
10,115
10,115
15,433,083
2,632
In a method for reading from an RFID-tagged article and an RFID system, information is accurately read from an RFID tag while interference with other devices is prevented by use of a compact and simple configuration. An article is conveyed on a conveyor belt. Also, an RFID tag is attached to the article. Information on the RFID tag is read by a leaky coaxial cable that is a stationary read/write antenna in a vicinity of the conveyor belt. The leaky coaxial cable is above the conveyor belt and at least a portion of the cable traverses the conveyor belt.
1. A method for reading information comprising: reading information from an RFID tag attached to an article conveyed in one direction by using a stationary read-write antenna located in a vicinity thereof, the stationary read-write antenna being a cable-shaped traveling-wave antenna; and during the reading, locating the traveling-wave antenna such that at least a portion of the traveling-wave antenna traverses a conveying direction of the article so as to read information from the RFID tag by using an electromagnetic field around the traveling-wave antenna. 2. The method according to claim 1, wherein the cable-shaped traveling-wave antenna wraps around the article. 3. The method according to claim 1, wherein the locating is performed such that at least two positions of the cable-shaped traveling-wave antenna traverse the conveying direction of the article. 4. The method according to claim 3, wherein the locating is performed such that the cable-shaped traveling-wave antenna meanders with respect to the conveying direction of the article or is helically disposed around the conveying direction of the article. 5. The method according to claim 1, wherein the locating is performed such that a leading end portion side of the traveling-wave antenna is located downstream along the conveying direction of the article. 6. The method according to claim 1, wherein the locating is performed such that a downstream side of the cable-shaped traveling-wave antenna is brought closer to the article as compared to an upstream side along the conveying direction of the article. 7. The method according to claim 1, wherein the stationary read-write antenna includes a leaky coaxial cable. 8. The method according to claim 1, wherein the leaky coaxial cable includes a center conductor, an insulator, an outer conductor and a sheath, and the center conductor is exposed to outside at portions along a length of the leaky coaxial cable. 9.
The method according to claim 8, wherein the center conductor is disposed continuously along an entire length of the leaky coaxial cable, and each of the insulator, the outer conductor and the sheath are disposed discontinuously along the entire length of the leaky coaxial cable to define missing portions such that a signal propagating through the leaky coaxial cable is leaked from the missing portions to outside. 10. The method according to claim 1, further comprising conveying a plurality of articles together such that the stationary read-write antenna reads the information of each of the plurality of articles. 11. An RFID system comprising: a conveyor platform conveying an article to which an RFID tag is attached in one direction; and a stationary read-write antenna in a vicinity of the conveyor platform to read information from the RFID tag attached to the article; wherein the stationary read-write antenna is a cable-shaped traveling-wave antenna; and at least a portion of the traveling-wave antenna traverses a conveying direction of the article. 12. The RFID system according to claim 11, wherein the cable-shaped traveling-wave antenna wraps around the conveyor platform. 13. The RFID system according to claim 11, wherein at least two portions of the cable-shaped traveling-wave antenna traverse the conveying direction of the article. 14. The RFID system according to claim 13, wherein the cable-shaped traveling-wave antenna meanders with respect to the conveying direction of the article or is helically disposed around the conveyor platform. 15. The RFID system according to claim 11, wherein a leading end portion side of the cable-shaped traveling-wave antenna is located downstream along the conveying direction of the article. 16. The RFID system according to claim 11, wherein a downstream side of the cable-shaped traveling-wave antenna is brought closer to the article as compared to an upstream side along the conveying direction of the article. 17. 
The RFID system according to claim 11, wherein the stationary read-write antenna includes a leaky coaxial cable. 18. The RFID system according to claim 11, wherein the leaky coaxial cable includes a center conductor, an insulator, an outer conductor and a sheath, and the center conductor is exposed to outside at portions along a length of the leaky coaxial cable. 19. The RFID system according to claim 18, wherein the center conductor is disposed continuously along an entire length of the leaky coaxial cable, and each of the insulator, the outer conductor and the sheath are disposed discontinuously along the entire length of the leaky coaxial cable to define missing portions such that a signal propagating through the leaky coaxial cable is leaked from the missing portions to outside. 20. The RFID system according to claim 11, wherein the stationary read-write antenna reads the information of each of a plurality of articles conveyed together.
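The geometry in the system claims can be sketched directly: the cable-shaped antenna is a polyline that crosses the belt at two places, and a tag is readable when it passes within some coupling range of the cable. The layout coordinates and range value below are assumptions, not figures from the patent:

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment a-b (2-D, metres)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

# Assumed meandering layout: the cable crosses the 1 m wide belt
# (y in [0, 1]) at x = 2 and again at x = 5, joined along one edge.
ANTENNA = [((2.0, 0.0), (2.0, 1.0)),
           ((2.0, 1.0), (5.0, 1.0)),
           ((5.0, 1.0), (5.0, 0.0))]

def readable(tag_xy, coupling_range=0.3):
    """True if the tag is close enough to any cable segment to be read."""
    return any(point_segment_distance(tag_xy, a, b) <= coupling_range
               for a, b in ANTENNA)
```

As a tag rides the belt in +x, it enters the coupling range near each traverse, giving two independent read windows per article, which is the benefit of locating at least two portions of the cable across the conveying direction.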
In a method for reading from an RFID-tagged article and an RFID system, information is accurately read from an RFID tag while interference with other devices is prevented by use of a compact and simple configuration. An article is conveyed on a conveyor belt. Also, an RFID tag is attached to the article. Information on the RFID tag is read by a leaky coaxial cable that is a stationary read/write antenna in a vicinity of the conveyor belt. The leaky coaxial cable is above the conveyor belt and at least a portion of the cable traverses the conveyor belt.1. A method for reading information comprising: reading information from an RFID tag attached to an article conveyed in one direction by using a stationary read-write antenna located in a vicinity thereof, the stationary read-write antenna being a cable-shaped traveling-wave antenna; and during the reading, locating the traveling-wave antenna such that at least a portion of the traveling-wave antenna traverses a conveying direction of the article so as to read information from the RFID tag by using an electromagnetic field around the traveling-wave antenna. 2. The method according to claim 1, wherein the cable-shaped traveling-wave antenna wraps around the article. 3. The method according to claim 1, wherein the locating is performed such that at least two positions of the cable-shaped traveling-wave antenna traverse the conveying direction of the article. 4. The method according to claim 3, wherein the locating is performed such that the cable-shaped traveling-wave antenna meanders with respect to the conveying direction of the article or is helically disposed around the conveying direction of the article. 5. The method according to claim 1, wherein the locating is performed such that a leading end portion side of the traveling-wave antenna is located downstream along the conveying direction of the article. 6.
The method according to claim 1, wherein the locating is performed such that a downstream side of the cable-shaped traveling-wave antenna is brought closer to the article as compared to an upstream side along the conveying direction of the article. 7. The method according to claim 1, wherein the stationary read-write antenna includes a leaky coaxial cable. 8. The method according to claim 1, wherein the leaky coaxial cable includes a center conductor, an insulator, an outer conductor and a sheath, and the center conductor is exposed to outside at portions along a length of the leaky coaxial cable. 9. The method according to claim 8, wherein the center conductor is disposed continuously along an entire length of the leaky coaxial cable, and each of the insulator, the outer conductor and the sheath are disposed discontinuously along the entire length of the leaky coaxial cable to define missing portions such that a signal propagating through the leaky coaxial cable is leaked from the missing portions to outside. 10. The method according to claim 1, further comprising conveying a plurality of articles together such that the stationary read-write antenna reads the information of each of the plurality of articles. 11. An RFID system comprising: a conveyor platform conveying an article to which an RFID tag is attached in one direction; and a stationary read-write antenna in a vicinity of the conveyor platform to read information from the RFID tag attached to the article; wherein the stationary read-write antenna is a cable-shaped traveling-wave antenna; and at least a portion of the traveling-wave antenna traverses a conveying direction of the article. 12. The RFID system according to claim 11, wherein the cable-shaped traveling-wave antenna wraps around the conveyor platform. 13. The RFID system according to claim 11, wherein at least two portions of the cable-shaped traveling-wave antenna traverse the conveying direction of the article. 14. 
The RFID system according to claim 13, wherein the cable-shaped traveling-wave antenna meanders with respect to the conveying direction of the article or is helically disposed around the conveyor platform. 15. The RFID system according to claim 11, wherein a leading end portion side of the cable-shaped traveling-wave antenna is located downstream along the conveying direction of the article. 16. The RFID system according to claim 11, wherein a downstream side of the cable-shaped traveling-wave antenna is brought closer to the article as compared to an upstream side along the conveying direction of the article. 17. The RFID system according to claim 11, wherein the stationary read-write antenna includes a leaky coaxial cable. 18. The RFID system according to claim 11, wherein the leaky coaxial cable includes a center conductor, an insulator, an outer conductor and a sheath, and the center conductor is exposed to outside at portions along a length of the leaky coaxial cable. 19. The RFID system according to claim 18, wherein the center conductor is disposed continuously along an entire length of the leaky coaxial cable, and each of the insulator, the outer conductor and the sheath are disposed discontinuously along the entire length of the leaky coaxial cable to define missing portions such that a signal propagating through the leaky coaxial cable is leaked from the missing portions to outside. 20. The RFID system according to claim 11, wherein the stationary read-write antenna reads the information of each of a plurality of articles conveyed together.
2,600
10,116
10,116
15,018,366
2,613
In a virtual reality system, a user may travel from a first virtual location to a second virtual location. During travel, a dynamic virtual animation may be displayed within a portal in the field of view of the user, allowing the user to experience a sensation of traveling from the first virtual location to the second virtual location. A fixed feature may be displayed in the field of view, surrounding the portal. The arrangement and position of the fixed feature may remain fixed while the dynamic virtual animation is displayed within the portal, to provide a stable frame of reference while experiencing the sensation of traveling. The stable frame of reference provided by the fixed feature may mitigate a feeling of disorientation and/or motion sickness during travel due to a mismatch between the dynamic visual experience and the stationary physical experience.
1. A method, comprising: displaying a first virtual scene corresponding to a first virtual location; detecting a first command to move to a second virtual location; and moving from the first virtual location to the second virtual location in response to the first command, including: displaying a portal; displaying a fixed feature surrounding the portal; and displaying a dynamic animation of movement from the first virtual location to the second virtual location within the portal, the fixed feature remaining fixed surrounding at least a portion of the portal. 2. The method of claim 1, wherein displaying a dynamic animation of travel from the first virtual location to the second virtual location includes displaying the dynamic animation within the portal until arriving at the second virtual location. 3. The method of claim 2, further comprising: displaying a second virtual scene corresponding to the second virtual location after arriving at the second virtual location, including no longer displaying the portal and the fixed feature. 4. The method of claim 1, wherein displaying a dynamic animation of movement from the first virtual location to the second virtual location includes displaying the dynamic animation based on at least one mode of movement, of a plurality of modes of movement, from the first virtual location to the second virtual location, the plurality of modes including movement through air, terrestrial movement, or movement along water. 5. The method of claim 1, wherein displaying a portal includes: displaying the portal at a fixed position within a user field of view, the position of the portal remaining fixed within the field of view until arriving at the second virtual location. 6. The method of claim 5, wherein displaying a fixed feature includes: displaying the fixed feature in an area of the field of view surrounding the portal, the fixed feature occupying a remaining area of the field of view not occupied by the portal. 7. 
The method of claim 1, wherein displaying a portal includes: displaying a closed curve defining the portal within a user field of view, the closed curve occupying a preset area of a user field of view, the dynamic animation being displayed only within the closed curve. 8. The method of claim 7, wherein displaying a fixed feature includes: displaying a grid in a remaining area of the user field of view not occupied by the closed curve, the preset area occupied by the closed curve defining the portal and the remaining area occupied by the grid filling the user field of view; and maintaining the grid in a fixed arrangement and a fixed orientation with respect to the closed curve as the dynamic animation is displayed within the closed curve. 9. The method of claim 1, further comprising: detecting a second command while displaying the dynamic animation within the portal; and shifting a perspective of the dynamic animation displayed within the portal in response to the second command. 10. The method of claim 9, further comprising: maintaining a fixed position of the portal within a user field of view and a fixed position and arrangement of the fixed feature in response to the second command. 11. 
A method, including: generating an immersive virtual environment; detecting a first command to move from a first virtual location to a second virtual location in the virtual environment; and in response to the first command: displaying a portal in a first portion of a user field of view and a fixed feature in a second portion of the user field of view, the fixed feature surrounding the portal; displaying a dynamic animation of travel from the first virtual location to the second virtual location within the portal until detecting arrival at the second virtual location, a position of the portal and an arrangement and a position of the fixed feature remaining fixed while the dynamic animation is displayed within the portal; replacing the display of the portal and the fixed feature with a scene corresponding to the second virtual location after detecting arrival at the second virtual location. 12. The method of claim 11, further comprising: detecting a second command while displaying the dynamic animation within the portal; shifting a perspective of the dynamic animation displayed within the portal in response to the second command; and maintaining the fixed position of the portal and the fixed position and arrangement of the fixed feature in response to the second command. 13. 
A system, comprising: a computing device configured to generate an immersive virtual environment, the computing device including: a memory storing executable instructions; and a processor configured to execute the instructions to cause the computing device to: generate a virtual environment; detect a first command to move from a first virtual location to a second virtual location in the virtual environment; and in response to the first command, replace a first scene corresponding to the first virtual location displayed in a user field of view with a portal and a fixed feature surrounding the portal in the user field of view; and display a dynamic animation of travel from the first virtual location to the second virtual location within the portal, the fixed feature remaining fixed surrounding the portal as the dynamic animation is displayed within the portal. 14. The system of claim 13, wherein an area of the user field of view is defined by a first portion occupied by the portal and a second portion occupied by the fixed feature surrounding the portal. 15. The system of claim 14, wherein the portal is defined by a closed curve positioned at a fixed location in the user field of view. 16. The system of claim 14, wherein the fixed feature includes a grid displayed in the second portion of the user field of view, surrounding the closed curve. 17. The system of claim 14, wherein the fixed feature includes at least one of a plurality of intersecting lines, a plurality of corners, or a plurality of geometric features displayed in the second portion of the user field of view. 18. The system of claim 13, wherein the processor is further configured to execute the instructions to cause the computing device to display the dynamic animation of movement from the first virtual location to the second virtual location within the portal until arrival at the second virtual location is detected. 19. 
The system of claim 18, wherein the processor is further configured to execute the instructions to cause the computing device to replace the display of the portal and the fixed feature with a second scene corresponding to the second virtual location displayed in the user field of view after arrival at the second virtual location is detected. 20. The system of claim 13, wherein the processor is further configured to execute the instructions to cause the computing device to: detect a second command while the dynamic animation is displayed within the portal; shift a perspective of the dynamic animation displayed within the portal in response to the second command; and maintain the fixed position of the portal within the user field of view and the fixed position and arrangement of the fixed feature in response to the second command.
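The portal-travel method claimed above (replace the origin scene with a portal plus a fixed surrounding feature, animate travel only inside the portal, then swap in the destination scene on arrival) can be sketched as a minimal state machine. All class and attribute names here are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class TravelView:
    """Composited in-transit frame: a moving animation inside a portal,
    with a static grid filling the rest of the field of view."""
    portal_area: float          # preset fraction of the field of view
    grid: str = "fixed-grid"    # arrangement never changes during travel
    animation_frame: int = 0    # only this part of the view updates

class PortalTravel:
    def __init__(self, origin, destination, travel_frames):
        self.scene = origin
        self.destination = destination
        self.remaining = travel_frames
        self.view = None        # portal + fixed feature, shown only in transit

    def command_move(self):
        # First command: replace the origin scene with portal + fixed feature.
        self.view = TravelView(portal_area=0.6)

    def tick(self):
        # Advance the dynamic animation inside the portal; the grid around
        # it is untouched, providing the stable frame of reference.
        if self.view is None:
            return
        self.view.animation_frame += 1
        self.remaining -= 1
        if self.remaining == 0:
            # Arrival: drop portal and fixed feature, show destination scene.
            self.view = None
            self.scene = self.destination
```

For example, `PortalTravel("plaza", "museum", 3)` shows the portal after `command_move()`, animates for three ticks while the grid stays fixed, and ends with the "museum" scene and no portal.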
2,600
10,117
10,117
15,911,056
2,624
An organic light emitting diode (OLED) display is disclosed. In one aspect the display includes a display panel having first through fourth pixels and a scan driving unit that outputs a scan signal to the display panel. The display also includes a data driving unit that alternately outputs a first data signal for the first pixels and a second data signal for the second pixels to the display panel, alternately outputs a third data signal for the third pixels and a fourth data signal for the fourth pixels to the display panel, and begins outputting the first and third data signals before one horizontal period begins. The display further includes a demultiplexing unit that alternately applies the first and second data signals to the first and second pixels and the third and fourth data signals to the third and fourth pixels.
1-7. (canceled) 8. An organic light emitting diode (OLED) display comprising: a display panel comprising a plurality of first pixels configured to emit a first color light, a plurality of second pixels configured to emit a second color light, and a plurality of third pixels configured to emit a third color light, the first through third pixels being arranged at locations corresponding to crossing points of a plurality of scan-lines and a plurality of data-lines; a scan driver configured to sequentially output a scan signal to the display panel; a data driver configured to alternately output a first data signal for the first pixels, a second data signal for the second pixels, and a third data signal for the third pixels to the display panel, and configured to begin outputting the first data signal before one horizontal period begins; a demultiplexing unit configured to alternately apply the first to third data signals to the first to third pixels, respectively, the demultiplexing unit being placed between the display panel and the data driver; and a timing control unit configured to control the scan driver, the data driver, and the demultiplexing unit. 9. The OLED display of claim 8, wherein the display panel is configured to be manufactured based on an RGB-OLED technology. 10. The OLED display of claim 9, wherein each of the first to third color lights is one of the following: a blue color light, a red color light, and a green color light. 11. The OLED display of claim 8, wherein the demultiplexing unit includes: a plurality of demultiplexers configured to apply the first data signal to the first pixels while the data driver outputs the first data signal, configured to apply the second data signal to the second pixels while the data driver outputs the second data signal, and configured to apply the third data signal to the third pixels while the data driver outputs the third data signal. 12. 
The OLED display of claim 11, wherein each of the demultiplexers includes: a first switch configured to control a coupling between a first data-line electrically connected to the first pixels and an output-line of the data driver; a second switch configured to control a coupling between a second data-line electrically connected to the second pixels and the output-line of the data driver; and a third switch configured to control a coupling between a third data-line electrically connected to the third pixels and the output-line of the data driver. 13. The OLED display of claim 12, wherein the second and third switches are configured to be turned off when the first switch is turned on, wherein the first and third switches are configured to be turned off when the second switch is turned on, and wherein the first and second switches are configured to be turned off when the third switch is turned on. 14-23. (canceled)
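The demultiplexer behavior of claims 11-13 (one data-driver output line time-shared across three data lines, with the three switches mutually exclusive) can be sketched as follows. The class names, the R/G/B labels, and the per-color drive ordering are illustrative assumptions, not details from the patent:

```python
class Demultiplexer:
    """One output-line of the data driver shared by three data-lines:
    exactly one switch is closed at any time."""
    def __init__(self):
        self.switches = {"R": False, "G": False, "B": False}
        self.lines = {"R": None, "G": None, "B": None}

    def select(self, color):
        # Closing one switch opens the other two (the mutual-exclusion
        # condition of claim 13).
        for c in self.switches:
            self.switches[c] = (c == color)

    def drive(self, value):
        # The driver's output reaches only the line whose switch is closed.
        for c, closed in self.switches.items():
            if closed:
                self.lines[c] = value

def scan_one_horizontal_period(demux, r, g, b):
    # Time-share the single output-line across the three pixel colors,
    # one data signal after another within the horizontal period.
    for color, value in (("R", r), ("G", g), ("B", b)):
        demux.select(color)
        demux.drive(value)
```

After one simulated horizontal period each data line holds its own signal, even though all three values passed through a single driver output.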
2,600
10,118
10,118
15,786,266
2,674
Mixed-reality systems are provided for using anchor graphs within a mixed-reality environment. These systems utilize anchor vertexes that comprise at least one first key frame, a first mixed-reality element, and at least one first transform connecting the at least one first key frame to the first mixed-reality element. Anchor edges comprising transformations connect the anchor vertexes.
1. A computer system comprising: one or more processors; and one or more computer-readable hardware storage media having stored thereon computer-executable instructions, the computer-executable instructions being executable by the one or more processors to generate an anchor graph for a mixed-reality environment by causing the computer system to: identify an anchor vertex that includes a mixed-reality element, the mixed-reality element being linked by the anchor vertex to a key frame; identify a second anchor vertex that includes a second mixed-reality element, the second mixed-reality element being linked by the second anchor vertex to a second key frame; and generate the anchor graph using information corresponding to the anchor vertex, the second anchor vertex, and a link between the anchor vertex and the second anchor vertex. 2. The computer system of claim 1, wherein the mixed-reality element is linked to the key frame by a transform, the transform defining a relationship between (1) a location and viewing direction associated with the key frame and (2) a location and pose associated with the mixed-reality element. 3. The computer system of claim 1, wherein the anchor graph is transmitted to a second computer system that is identified as being worn by a user who is physically proximate to a location associated with the key frame. 4. The computer system of claim 1, wherein the key frame comprises image data and geolocation data, the image data being captured by a camera of the computer system as a user who is wearing the computer system traverses a path, and wherein the geolocation data corresponds to a plurality of locations that are sampled from the user's traversed path, whereby the key frame includes location information corresponding to the user's traversed path. 5. The computer system of claim 1, wherein the mixed-reality element is a pre-made hologram. 6. 
The computer system of claim 5, wherein the pre-made hologram is viewable only when the computer system is oriented in a particular pose and situated at a particular location. 7. The computer system of claim 1, wherein a size of the anchor vertex is configurable. 8. The computer system of claim 1, wherein the anchor graph corresponds to a traversal path of a user who is wearing the computer system, and wherein execution of the computer-executable instructions further causes the computer system to: receive a previously generated anchor graph, the previously generated anchor graph corresponding to a different path that was traversed previously by a different user who was wearing a different computer system, wherein a location of the path is determined to be sufficiently proximate to a location of the different path such that the key frame is included in both the anchor graph and the previously generated anchor graph; and update the anchor graph to include additional anchor vertexes that are included in the previously generated anchor graph. 9. The computer system of claim 1, wherein the anchor graph includes a third anchor vertex, the third anchor vertex including a third mixed-reality element that is linked, in the third anchor vertex, to a third key frame, and wherein the anchor graph further includes an established link between the third anchor vertex and the second anchor vertex. 10. The computer system of claim 9, wherein, after identifying (1) the link between the anchor vertex and the second anchor vertex and (2) the established link between the third anchor vertex and the second anchor vertex, a third link is generated, the third link linking the anchor vertex to the third anchor vertex. 11. 
The computer system of claim 1, wherein the link between the anchor vertex and the second anchor vertex is structured so as to define a path of the user, the path originating at a location associated with the anchor vertex and ending at a location associated with the second anchor vertex. 12. The computer system of claim 1, wherein the anchor vertex further includes a geographic location associated with the key frame, the geographic location being generated by one or more of (1) a global positioning sensor included as a part of the computer system, (2) an extrapolated mapping of a traversal path of a user wearing the computer system, or (3) location information entered by the user. 13. The computer system of claim 1, wherein the anchor vertex and the second anchor vertex are stored in an index of anchor vertexes. 14. The computer system of claim 1, wherein execution of the computer-executable instructions further causes the computer system to: upon an occurrence of a triggering event, receive an additional anchor vertex from a remote server, the additional anchor vertex being added to the anchor graph. 15. The computer system of claim 14, wherein the triggering event occurs when a location of the computer system is determined to be sufficiently proximate to a location associated with the additional anchor vertex. 16. 
A method for generating an anchor graph for a mixed-reality environment, the method being implemented by one or more processors of a computer system, the method comprising: identifying an anchor vertex that includes a mixed-reality element, the mixed-reality element being linked by the anchor vertex to a key frame; identifying a second anchor vertex that includes a second mixed-reality element, the second mixed-reality element being linked by the second anchor vertex to a second key frame; and generating the anchor graph using information corresponding to the anchor vertex, the second anchor vertex, and a link between the anchor vertex and the second anchor vertex. 17. The method of claim 16, wherein the link between the anchor vertex and the second anchor vertex is an anchor edge, the anchor edge comprising a rotational matrix that represents a physical rotation between the anchor vertex and the second anchor vertex. 18. The method of claim 17, wherein the anchor edge further includes a translational matrix that represents a translation between the anchor vertex and the second anchor vertex. 19. The method of claim 16, wherein the method further includes: upon determining that a location of the computer system is sufficiently proximate to a location associated with a different anchor vertex, receiving the different anchor vertex from a remote server, the different anchor vertex being added to the anchor graph. 20. 
One or more hardware storage devices having stored thereon computer-executable instructions, the computer-executable instructions being executable by one or more processors of a computer system to cause the computer system to: identify an anchor vertex that includes a mixed-reality element, the mixed-reality element being linked by the anchor vertex to a key frame; identify a second anchor vertex that includes a second mixed-reality element, the second mixed-reality element being linked by the second anchor vertex to a second key frame; and generate the anchor graph using information corresponding to the anchor vertex, the second anchor vertex, and a link between the anchor vertex and the second anchor vertex.
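The anchor-graph structure described in the claims above (vertexes linking a key frame to a mixed-reality element by a transform; edges carrying a rotation and a translation between vertexes, with a third link derivable from two existing ones as in claim 10) can be sketched as a small data structure. Names are hypothetical, and the triangle-closing step assumes identity rotations purely for brevity:

```python
from dataclasses import dataclass

@dataclass
class AnchorVertex:
    """A key frame linked to a mixed-reality element by a transform."""
    key_frame: str          # e.g. image data plus geolocation data
    element: str            # e.g. a pre-made hologram
    transform: tuple        # (dx, dy, dz) from key frame to element

@dataclass
class AnchorEdge:
    """Rotation and translation between two anchor vertexes."""
    rotation: list          # 3x3 rotational matrix
    translation: tuple      # (tx, ty, tz)

class AnchorGraph:
    def __init__(self):
        self.vertexes = {}  # name -> AnchorVertex
        self.edges = {}     # (name_a, name_b) -> AnchorEdge

    def add_vertex(self, name, vertex):
        self.vertexes[name] = vertex

    def link(self, a, b, edge):
        self.edges[(a, b)] = edge

    def close_triangle(self, a, b, c):
        # Given links a-b and b-c, derive a-c (identity rotations
        # assumed here, so translations simply add).
        e1, e2 = self.edges[(a, b)], self.edges[(b, c)]
        t = tuple(x + y for x, y in zip(e1.translation, e2.translation))
        self.edges[(a, c)] = AnchorEdge(rotation=e1.rotation, translation=t)
```

A graph built from three vertexes and two edges can then close the third link directly, mirroring how the claims derive a link between the first and third anchor vertexes from the two existing links.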
A method for generating an anchor graph for a mixed-reality environment, the method being implemented by one or more processors of a computer system, the method comprising: identifying an anchor vertex that includes a mixed-reality element, the mixed-reality element being linked by the anchor vertex to a key frame; identifying a second anchor vertex that includes a second mixed-reality element, the second mixed-reality element being linked by the second anchor vertex to a second key frame; and generating the anchor graph using information corresponding to the anchor vertex, the second anchor vertex, and a link between the anchor vertex and the second anchor vertex. 17. The method of claim 16, wherein the link between the anchor vertex and the second anchor vertex is an anchor edge, the anchor edge comprising a rotational matrix that represents a physical rotation between the anchor vertex and the second anchor vertex. 18. The method of claim 17, wherein the anchor edge further includes a translational matrix that represents a translation between the anchor vertex and the second anchor vertex. 19. The method of claim 16, wherein the method further includes: upon determining that a location of the computer system is sufficiently proximate to a location associated with a different anchor vertex, receiving the different anchor vertex from a remote server, the different anchor vertex being added to the anchor graph. 20. 
One or more hardware storage devices having stored thereon computer-executable instructions, the computer-executable instructions being executable by one or more processors of a computer system to cause the computer system to: identify an anchor vertex that includes a mixed-reality element, the mixed-reality element being linked by the anchor vertex to a key frame; identify a second anchor vertex that includes a second mixed-reality element, the second mixed-reality element being linked by the second anchor vertex to a second key frame; and generate the anchor graph using information corresponding to the anchor vertex, the second anchor vertex, and a link between the anchor vertex and the second anchor vertex.
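Claims 1 and 16-18 above describe the anchor graph as a data structure: vertexes pairing a key frame with a mixed-reality element via a transform, edges carrying a rotational matrix and a translational matrix between vertexes, and (claim 8) merging of graphs that share a key frame. A minimal sketch of that structure; all class names, field names, and the merge policy are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class AnchorVertex:
    """Pairs a key frame with a mixed-reality element via a transform (claim 2)."""
    key_frame_id: str   # key frame: image + location data captured along a path
    element_id: str     # mixed-reality element, e.g. a pre-made hologram
    transform: list     # matrix relating key-frame pose to element pose

@dataclass
class AnchorEdge:
    """Anchor edge of claims 17-18: rotation + translation between vertexes."""
    src: str
    dst: str
    rotation: list      # 3x3 rotational matrix
    translation: list   # 3x1 translational vector

@dataclass
class AnchorGraph:
    vertexes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_vertex(self, v: AnchorVertex) -> None:
        self.vertexes[v.key_frame_id] = v

    def link(self, src: str, dst: str, rotation: list, translation: list) -> None:
        self.edges.append(AnchorEdge(src, dst, rotation, translation))

    def merge(self, other: "AnchorGraph") -> None:
        """Claim 8: fold in a previously generated graph that shares a key frame."""
        if set(self.vertexes) & set(other.vertexes):
            for kf, v in other.vertexes.items():
                self.vertexes.setdefault(kf, v)
            self.edges.extend(other.edges)

# Illustrative usage: two overlapping traversals merged via the shared key frame "kf2"
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
g = AnchorGraph()
g.add_vertex(AnchorVertex("kf1", "holo1", I3))
g.add_vertex(AnchorVertex("kf2", "holo2", I3))
g.link("kf1", "kf2", rotation=I3, translation=[0, 0, 0])
h = AnchorGraph()
h.add_vertex(AnchorVertex("kf2", "holo2", I3))
h.add_vertex(AnchorVertex("kf3", "holo3", I3))
g.merge(h)
```

After the merge, `g` holds the union of vertexes from both traversals, mirroring claim 8's update of the anchor graph with the previously generated one.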
2,600
10,119
10,119
15,676,704
2,632
An interface is provided for processing digital signals in a standardized format in a distributed antenna system. One example includes a unit disposed in a distributed antenna system. The unit includes an interface section and an output section. The interface section is configured for outputting a first complex digital signal and a second complex digital signal. The first complex digital signal is generated from a digital signal in a standardized format received from a digital base station. The output section is configured for combining the first complex digital signal and the second complex digital signal into a combined digital signal. The output section is also configured for outputting the combined digital signal. The combined digital signal comprises information to be wirelessly transmitted to a wireless user device.
1. A unit of a distributed antenna system, the unit comprising: an interface section configured to process signals for communication between (i) a first base station configured to communicate the signals using a first communication protocol and a second base station configured to communicate the signals using a second communication protocol and (ii) a remote antenna unit that is not configured to process the signals using the first communication protocol and the second communication protocol; wherein the interface section comprises: a first digital interface card for interfacing with a first digital base station and for outputting the first digital signal; a second digital interface card for interfacing with a second digital base station and for outputting the second digital signal, the second digital base station being associated with a different frequency clock than a frequency clock associated with the first digital base station; and a clock select device for selecting a system reference signal from a plurality of reference signals generated based on frequency clocks of the first digital base station and the second digital base station, the system reference signal being usable by components of the unit; and wherein the interface section is configured to process the signals by performing operations comprising: receiving a first downlink signal comprising first packetized data that includes first carrier data and first control data formatted according to the first communication protocol; receiving a second downlink signal comprising second packetized data that includes second carrier data and second control data formatted according to the second communication protocol; and converting the first downlink signal and the second downlink signal from the first and second communication protocols into a format that allows the remote antenna unit to wirelessly transmit information from the first and second downlink signals to wireless user devices, wherein the interface 
section is configured to convert the first downlink signal and the second downlink signal by performing operations comprising: extracting the first carrier data from the first packetized data and generating a first digital signal from the extracted first packetized data; and extracting the second carrier data from the second packetized data and generating a second digital signal from the extracted second packetized data; and an output section configured to: generate a combined digital downlink signal based on the first digital signal and the second digital signal, wherein the combined digital downlink signal comprises the information to be wirelessly transmitted; and transmit the combined digital downlink signal to the remote antenna unit in the format that allows the remote antenna unit to wirelessly transmit the information. 2. The unit of claim 1, wherein the first communication protocol comprises at least one of a Common Public Radio Interface protocol, an Open Radio Equipment Interface protocol, or an Open Base Station Standard Initiative protocol and the second communication protocol comprises at least one of the Common Public Radio Interface protocol, the Open Radio Equipment Interface protocol, or the Open Base Station Standard Initiative protocol. 3. The unit of claim 1, wherein the first communication protocol and the second communication protocol are the same communication protocol. 4. The unit of claim 1, wherein the first communication protocol and the second communication protocol are different communication protocols; and wherein the second communication protocol is incompatible with the first communication protocol, preventing the first carrier data from being combined with the second carrier data. 5. 
The unit of claim 1, wherein the output section is further configured to generate the combined digital downlink signal based on the first digital signal and the second digital signal at least in part by summing a first plurality of digital samples from the first extracted carrier data and a second plurality of digital samples from the second extracted carrier data, wherein each of the first plurality of digital samples and the second plurality of digital samples represents a respective in-phase component and quadrature component. 6. The unit of claim 1, wherein the interface section is further configured to receive an analog downlink signal received from an analog base station; wherein the interface section comprises: a digital interface device configured to transform the first downlink signal into the first digital signal and to transform the second downlink signal into the second digital signal; and an analog interface device configured to transform the analog downlink signal into a third digital signal; wherein the output section is further configured to combine the first digital signal, the second digital signal, and the third digital signal into the combined digital downlink signal. 7. The unit of claim 1, wherein the interface section further comprises: a physical layer device configured to receive the first downlink signal and the second downlink signal; and a de-framer configured to: extract the first carrier data from the first downlink signal; and extract the second carrier data from the second downlink signal. 8. The unit of claim 1, wherein the output section comprises a backplane configured to: combine the first digital signal and the second digital signal into the combined digital downlink signal, and output the combined digital downlink signal as serialized data over a serial link to an RF transceiver. 9. 
The unit of claim 1, wherein the interface section comprises a drop/add device and a decision circuit in communication with the drop/add device, wherein the decision circuit is configured to cause, based on a first clock rate for the digital base station being different from a second clock rate for an additional base station in communication with the unit, the drop/add device to perform at least one of (i) dropping bits from the first digital signal and (ii) adding bits to the first digital signal. 10. The unit of claim 9, wherein the drop/add device comprises a drop/add FIFO disposed between the output section and a framer/deframer configured to extract at least one of the first carrier data and the second carrier data, wherein the decision circuit is configured to (i) cause the drop/add FIFO to drop bits of the first digital signal in response to the drop/add FIFO reaching a first depth threshold and (ii) cause the drop/add FIFO to add bits to the first digital signal in response to the drop/add FIFO reaching a second depth threshold. 11. The unit of claim 1, wherein at least one of the first carrier data and the second carrier data comprises voice data from a digital base station to be outputted by the wireless user device and at least one of the first control data and the second control data comprises data for coordinating communication between the digital base station and a device receiving according to at least one of the first communication protocol and the second communication protocol. 12. The unit of claim 11, wherein the unit is configured to generate a system reference clock rate for the distributed antenna system based on a reference clock obtained from the digital base station. 13. 
A method comprising: processing, by a unit of a distributed antenna system, signals for communication between (i) a first base station that communicates the signals using a first communication protocol and a second base station that communicates the signals using a second communication protocol and (ii) a remote antenna unit that does not communicate the signals using the first communication protocol or the second communication protocol, wherein processing the signals for communication between (i) the first base station and the second base station and (ii) the remote antenna unit comprises: receiving a first downlink signal comprising first packetized data that includes first carrier data and first control data formatted according to the first communication protocol; receiving a second downlink signal comprising second packetized data that includes second carrier data and second control data formatted according to the second communication protocol, wherein the first downlink signal being formatted according to the first communication protocol and the second downlink signal being formatted according to the second communication protocol prevents the remote antenna unit from correctly transmitting information that is included in the first carrier data and the second carrier data; and converting the first downlink signal and the second downlink signal into a format that allows the remote antenna unit to wirelessly transmit the information to wireless user devices, wherein converting the first downlink signal and the second downlink signal comprises: extracting the first carrier data from the first packetized data and generating a first digital signal from the extracted first packetized data, and extracting the second carrier data from the second packetized data and generating a second digital signal from the extracted second packetized data; generating a combined digital downlink signal based on the first digital signal and the second digital signal, wherein the 
combined digital downlink signal comprises the information to be wirelessly transmitted; transmitting the combined digital downlink signal to the remote antenna unit in the format that allows the remote antenna unit to wirelessly transmit the information; wherein the second digital base station is associated with a different frequency clock than a frequency clock associated with the first digital base station; and selecting a system reference signal from a plurality of reference signals generated based on frequency clocks of the first digital base station and the second digital base station, the system reference signal being usable by components of the unit. 14. The method of claim 13, wherein the first communication protocol and the second communication protocol are the same communication protocol. 15. The method of claim 13, wherein the first communication protocol and the second communication protocol are different communication protocols; and wherein the second communication protocol is incompatible with the first communication protocol, preventing the first carrier data from being combined with the second carrier data. 16. The method of claim 13, wherein combining the first digital signal and the second digital signal into the combined digital downlink signal comprises summing a first plurality of digital samples from the first extracted carrier data and a second plurality of digital samples from the second extracted carrier data, wherein each of the first plurality of digital samples and the second plurality of digital samples represents a respective in-phase component and quadrature component. 17. The method of claim 13, further comprising: receiving an analog downlink signal received from an analog base station; and transforming the analog downlink signal into a third digital signal, wherein the first digital signal, the second digital signal, and the third digital signal are combined into the combined digital downlink signal. 18. 
The method of claim 13, wherein the first communication protocol comprises at least one of a Common Public Radio Interface protocol, an Open Radio Equipment Interface protocol, or an Open Base Station Standard Initiative protocol and the second communication protocol comprises at least one of the Common Public Radio Interface protocol, the Open Radio Equipment Interface protocol, or the Open Base Station Standard Initiative protocol. 19. The method of claim 13, further comprising selecting a system reference signal from a plurality of reference signals generated based on frequency clocks of a first digital base station from which the first downlink signal is received and a second digital base station from which the second downlink signal is received, the system reference signal being usable by components of the unit. 20. The method of claim 19, further comprising causing, based on a first clock rate for the first digital base station being different from a second clock rate for the second digital base station, performing at least one of (i) dropping bits from the first digital signal and (ii) adding bits to the first digital signal. 21. The method of claim 13, wherein processing the signals for communication between the base station and the remote antenna unit further comprises: receiving an uplink signal from the remote antenna unit; and formatting the received uplink signal according to the first communication protocol or the second communication protocol for transmission to the base station, wherein formatting the uplink signal comprises: generating uplink control data using at least one of the first control data and the second control data; generating uplink carrier data from the received uplink signal; and outputting uplink packetized data having the uplink control data and the uplink carrier data.
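Claims 5 and 16 above describe the combining step concretely: the two extracted carrier streams are summed sample by sample, where each digital sample carries an in-phase and a quadrature component. A hedged sketch of that summing step; the function name, tuple layout, and example values are assumptions, not the patent's implementation:

```python
def combine_iq(stream_a, stream_b):
    """Sum two sample-aligned lists of (in-phase, quadrature) digital samples,
    as in the claimed combining of first and second extracted carrier data."""
    if len(stream_a) != len(stream_b):
        raise ValueError("carrier streams must be sample-aligned before summing")
    return [(ia + ib, qa + qb) for (ia, qa), (ib, qb) in zip(stream_a, stream_b)]

# Illustrative carrier streams, e.g. extracted from the first and second
# downlink signals after de-framing
carrier1 = [(1, 2), (3, -1), (0, 5)]
carrier2 = [(4, 0), (-3, 1), (2, 2)]
combined = combine_iq(carrier1, carrier2)
```

The combined stream is what the output section would then serialize toward the remote antenna unit; any clock-rate alignment needed before the two streams can be summed is handled separately (see the drop/add FIFO of claims 9-10).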
An interface is provided for processing digital signals in a standardized format in a distributed antenna system. One example includes a unit disposed in a distributed antenna system. The unit includes an interface section and an output section. The interface section is configured for outputting a first complex digital signal and a second complex digital signal. The first complex digital signal is generated from a digital signal in a standardized format received from a digital base station. The output section is configured for combining the first complex digital signal and the second complex digital signal into a combined digital signal. The output section is also configured for outputting the combined digital signal. The combined digital signal comprises information to be wirelessly transmitted to a wireless user device.1. A unit of a distributed antenna system, the unit comprising: an interface section configured to process signals for communication between (i) a first base station configured to communicate the signals using a first communication protocol and a second base station configured to communicate the signals using a second communication protocol and (ii) a remote antenna unit that is not configured to process the signals using the first communication protocol and the second communication protocol; wherein the interface section comprises: a first digital interface card for interfacing with a first digital base station and for outputting the first digital signal; a second digital interface card for interfacing with a second digital base station and for outputting the second digital signal, the second digital base station being associated with a different frequency clock than a frequency clock associated with the first digital base station; and a clock select device for selecting a system reference signal from a plurality of reference signals generated based on frequency clocks of the first digital base station and the second digital base station, the 
system reference signal being usable by components of the unit; and wherein the interface section is configured to process the signals by performing operations comprising: receiving a first downlink signal comprising first packetized data that includes first carrier data and first control data formatted according to the first communication protocol; receiving a second downlink signal comprising second packetized data that includes second carrier data and second control data formatted according to the second communication protocol; and converting the first downlink signal and the second downlink signal from the first and second communication protocols into a format that allows the remote antenna unit to wirelessly transmit information from the first and second downlink signals to wireless user devices, wherein the interface section is configured to convert the first downlink signal and the second downlink signal by performing operations comprising: extracting the first carrier data from the first packetized data and generating a first digital signal from the extracted first packetized data; and extracting the second carrier data from the second packetized data and generating a second digital signal from the extracted second packetized data; and an output section configured to: generate a combined digital downlink signal based on the first digital signal and the second digital signal, wherein the combined digital downlink signal comprises the information to be wirelessly transmitted; and transmit the combined digital downlink signal to the remote antenna unit in the format that allows the remote antenna unit to wirelessly transmit the information. 2. 
The unit of claim 1, wherein the first communication protocol comprises at least one of a Common Public Radio Interface protocol, an Open Radio Equipment Interface protocol, or an Open Base Station Standard Initiative protocol and the second communication protocol comprises at least one of the Common Public Radio Interface protocol, the Open Radio Equipment Interface protocol, or the Open Base Station Standard Initiative protocol. 3. The unit of claim 1, wherein the first communication protocol and the second communication protocol are the same communication protocol. 4. The unit of claim 1, wherein the first communication protocol and the second communication protocol are different communication protocols; and wherein the second communication protocol is incompatible with the first communication protocol, preventing the first carrier data from being combined with the second carrier data. 5. The unit of claim 1, wherein the output section is further configured to generate the combined digital downlink signal based on the first digital signal and the second digital signal at least in part by summing a first plurality of digital samples from the first extracted carrier data and a second plurality of digital samples from the second extracted carrier data, wherein each of the first plurality of digital samples and the second plurality of digital samples represents a respective in-phase component and quadrature component. 6. 
The unit of claim 1, wherein the interface section is further configured to receive an analog downlink signal received from an analog base station; wherein the interface section comprises: a digital interface device configured to transform the first downlink signal into the first digital signal and to transform the second downlink signal into the second digital signal; and an analog interface device configured to transform the analog downlink signal into a third digital signal; wherein the output section is further configured to combine the first digital signal, the second digital signal, and the third digital signal into the combined digital downlink signal. 7. The unit of claim 1, wherein the interface section further comprises: a physical layer device configured to receive the first downlink signal and the second downlink signal; and a de-framer configured to: extract the first carrier data from the first downlink signal; and extract the second carrier data from the second downlink signal. 8. The unit of claim 1, wherein the output section comprises a backplane configured to: combine the first digital signal and the second digital signal into the combined digital downlink signal, and output the combined digital downlink signal as serialized data over a serial link to an RF transceiver. 9. The unit of claim 1, wherein the interface section comprises a drop/add device and a decision circuit in communication with the drop/add device, wherein the decision circuit is configured to cause, based on a first clock rate for the digital base station being different from a second clock rate for an additional base station in communication with the unit, the drop/add device to perform at least one of (i) dropping bits from the first digital signal and (ii) adding bits to the first digital signal. 10. 
The unit of claim 9, wherein the drop/add device comprises a drop/add FIFO disposed between the output section and a framer/deframer configured to extract at least one of the first carrier data and the second carrier data, wherein the decision circuit is configured to (i) cause the drop/add FIFO to drop bits of the first digital signal in response to the drop/add FIFO reaching a first depth threshold and (ii) cause the drop/add FIFO to add bits to the first digital signal in response to the drop/add FIFO reaching a second depth threshold. 11. The unit of claim 1, wherein at least one of the first carrier data and the second carrier data comprises voice data from a digital base station to be outputted by the wireless user device and at least one of the first control data and the second control data comprises data for coordinating communication between the digital base station and a device receiving according to at least one of the first communication protocol and the second communication protocol. 12. The unit of claim 11, wherein the unit is configured to generate a system reference clock rate for the distributed antenna system based on a reference clock obtained from the digital base station. 13. 
A method comprising: processing, by a unit of a distributed antenna system, signals for communication between (i) a first base station that communicates the signals using a first communication protocol and a second base station that communicates the signals using a second communication protocol and (ii) a remote antenna unit that does not communicate the signals using the first communication protocol or the second communication protocol, wherein processing the signals for communication between (i) the first base station and the second base station and (ii) the remote antenna unit comprises: receiving a first downlink signal comprising first packetized data that includes first carrier data and first control data formatted according to the first communication protocol; receiving a second downlink signal comprising second packetized data that includes second carrier data and second control data formatted according to the second communication protocol, wherein the first downlink signal being formatted according to the first communication protocol and the second downlink signal being formatted according to the second communication protocol prevents the remote antenna unit from correctly transmitting information that is included in the first carrier data and the second carrier data; and converting the first downlink signal and the second downlink signal into a format that allows the remote antenna unit to wirelessly transmit the information to wireless user devices, wherein converting the first downlink signal and the second downlink signal comprises: extracting the first carrier data from the first packetized data and generating a first digital signal from the extracted first packetized data, and extracting the second carrier data from the second packetized data and generating a second digital signal from the extracted second packetized data; generating a combined digital downlink signal based on the first digital signal and the second digital signal, wherein the 
combined digital downlink signal comprises the information to be wirelessly transmitted; transmitting the combined digital downlink signal to the remote antenna unit in the format that allows the remote antenna unit to wirelessly transmit the information; wherein the second digital base station is associated with a different frequency clock than a frequency clock associated with the first digital base station; and selecting a system reference signal from a plurality of reference signals generated based on frequency clocks of the first digital base station and the second digital base station, the system reference signal being usable by components of the unit. 14. The method of claim 13, wherein the first communication protocol and the second communication protocol are the same communication protocol. 15. The method of claim 13, wherein the first communication protocol and the second communication protocol are different communication protocols; and wherein the second communication protocol is incompatible with the first communication protocol, preventing the first carrier data from being combined with the second carrier data. 16. The method of claim 13, wherein combining the first digital signal and the second digital signal into the combined digital downlink signal comprises summing a first plurality of digital samples from the first extracted carrier data and a second plurality of digital samples from the second extracted carrier data, wherein each of the first plurality of digital samples and the second plurality of digital samples represents a respective in-phase component and quadrature component. 17. The method of claim 13, further comprising: receiving an analog downlink signal received from an analog base station; and transforming the analog downlink signal into a third digital signal, wherein the first digital signal, the second digital signal, and the third digital signal are combined into the combined digital downlink signal. 18. 
The method of claim 13, wherein the first communication protocol comprises at least one of a Common Public Radio Interface protocol, an Open Radio Equipment Interface protocol, or an Open Base Station Standard Initiative protocol and the second communication protocol comprises at least one of the Common Public Radio Interface protocol, the Open Radio Equipment Interface protocol, or the Open Base Station Standard Initiative protocol. 19. The method of claim 13, further comprising selecting a system reference signal from a plurality of reference signals generated based on frequency clocks of a first digital base station from which the first downlink signal is received and a second digital base station from which the second downlink signal is received, the system reference signal being usable by components of the unit. 20. The method of claim 19, further comprising causing, based on a first clock rate for the first digital base station being different from a second clock rate for the second digital base station, performing at least one of (i) dropping bits from the first digital signal and (ii) adding bits to the first digital signal. 21. The method of claim 13, wherein processing the signals for communication between the base station and the remote antenna unit further comprises: receiving an uplink signal from the remote antenna unit; and formatting the received uplink signal according to the first communication protocol or the second communication protocol for transmission to the base station, wherein formatting the uplink signal comprises: generating uplink control data using at least one of the first control data and the second control data; generating uplink carrier data from the received uplink signal; and outputting uplink packetized data having the uplink control data and the uplink carrier data.
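Claims 9-10 and 20 above describe a drop/add FIFO whose decision circuit drops bits once the FIFO reaches one depth threshold and adds (stuffs) bits once it reaches another, absorbing the clock-rate mismatch between base stations. A rough sketch of that mechanism; the threshold values, zero stuff bit, and drop-on-push policy are illustrative assumptions, not the patent's design:

```python
from collections import deque

class DropAddFifo:
    """Drop/add FIFO of claims 9-10: producer and consumer run on different
    clock rates, so the decision circuit drops bits past a high depth
    threshold and stuffs bits below a low one."""

    def __init__(self, high_threshold=8, low_threshold=2, stuff_bit=0):
        self.buf = deque()
        self.high = high_threshold   # first depth threshold: drop bits
        self.low = low_threshold     # second depth threshold: add bits
        self.stuff_bit = stuff_bit

    def push(self, bit):
        """Producer side: drop the incoming bit if the FIFO is at depth."""
        if len(self.buf) >= self.high:
            return                   # decision circuit: drop
        self.buf.append(bit)

    def pop(self):
        """Consumer side: stuff a bit if the FIFO has drained too far."""
        if len(self.buf) <= self.low:
            return self.stuff_bit    # decision circuit: add
        return self.buf.popleft()

# Illustrative run with a tiny FIFO: the fourth push is dropped,
# and draining past the low threshold stuffs a bit
fifo = DropAddFifo(high_threshold=3, low_threshold=0)
for bit in (1, 1, 1, 1):
    fifo.push(bit)
drained = [fifo.pop() for _ in range(4)]
```

In the claims the FIFO sits between the framer/deframer and the output section, so this drop/stuff behavior keeps the extracted carrier bit stream aligned to the selected system reference clock.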
2,600
10,120
10,120
14,556,871
2,643
The invention relates to bandwidth signalling in a multicarrier wireless telecommunication system. The information is transferred in the band itself (bold carriers) and contains information about the size and location of the band (I). The information is repeated in a number of carriers (bold) throughout the band.
1. A method in a multicarrier wireless telecommunication system for interchanging radio communication between base stations (BS) and mobile user stations (MS) of the system, the method comprising: transmitting information signals over the air interface relating to size and location of operational bands of the radio spectrum used by the system; wherein the transmitted information signals comprise information of the bandwidth and location in the spectrum of the operational bands as part of the information in one or several sub carriers of the bands; and wherein the location information is implicitly derivable from synchronization signals. 2. (canceled) 3. The method of claim 1, wherein the signalling is received by the mobile user stations, which detect the information about available blocks of spectrum and store it into a memory. 4. The method of claim 1, wherein the size information is repeated regularly in subsequent carriers or subcarriers of the operational band. 5. The method of claim 1, wherein the information comprises the start and stop frequencies of the band and thereby the bandwidth. 6. The method of claim 1, wherein the information comprises an identifying number representing the size and location of available operational bands. 7. The method of claim 3, wherein the mobile user stations repeatedly scan the information signalling for updating memory of the respective mobile user stations about changing conditions relating to the operational bands. 8. The method of claim 1, wherein the operational bands belong to different operators and wherein the subscribers of an operator may partly or wholly have access to operational bands of another operator. 9. 
The method of claim 1, wherein a mobile user station requests access to a multicarrier band with N carriers for downloading information, the method further comprising: the mobile user station searching the radio interface for an N-carrier band by looking for location and size information; the communication system assigning a free band with N+ε carriers to the mobile user station upon the request where ε is zero or a small number compared to N; and the mobile station downloading the information. 10. A wireless multicarrier telecommunication system comprising: a traffic controlling center; and transmitting units controlled by said traffic controlling center, wherein the transmitting units transmit information signals relating to available resources of the system to mobile units, wherein the information signals comprise information about the size and location of available bandwidth in a number of operational bands allocated to the system; wherein the location information is implicitly derivable from synchronization signals. 11. A base station node (BS) in a multicarrier telecommunication system, the base station node comprising: a memory; and a processor configured to execute program instructions stored in the memory, whereby the base station node is operative to: transmit information relating to properties of available operational bands of the spectrum allocated to the system, wherein the information is related to size and location of the available operational bands; and wherein the location information is implicitly derivable from synchronization signals. 12. 
A mobile station node in a multicarrier telecommunication system, the mobile station node comprising: a memory; and a processor configured to execute program instructions stored in the memory, whereby the mobile station node is operative to: receive information signalling relating to available operational bands in terms of size and location in the radio spectrum; and search for channels based on said received information signalling; wherein the location information is implicitly derivable from synchronization signals. 13. The mobile station node of claim 12, wherein the operational band relating data is stored in the memory.
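Claims 4 and 5 above describe repeating the band's size and location information (e.g. start and stop frequencies) in subcarriers of the operational band itself, so a mobile station can recover it by scanning any info-bearing subcarrier. A minimal sketch of that idea, in which the dict-based subcarrier model, function names, and `repeat_every` parameter are all assumptions for illustration:

```python
# Base-station side: embed (start, stop) band info into every Nth
# subcarrier of the band, per the repetition described in claim 4.
def embed_band_info(subcarrier_freqs, start_hz, stop_hz, repeat_every=4):
    band = []
    for idx, freq in enumerate(subcarrier_freqs):
        info = (start_hz, stop_hz) if idx % repeat_every == 0 else None
        band.append({"freq": freq, "info": info})
    return band

# Mobile-station side: recover size and location from the first
# info-bearing subcarrier found while scanning (claims 3 and 12).
def scan_for_band(band):
    for sc in band:
        if sc["info"] is not None:
            start_hz, stop_hz = sc["info"]
            return {"start": start_hz, "stop": stop_hz,
                    "bandwidth": stop_hz - start_hz}
    return None
```

Because the information recurs across the band, a station tuned to any portion of it can recover the full band's extent without scanning edge to edge.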
2,600
10,121
10,121
14,631,321
2,636
An apparatus and system having an optical integrated circuit (referred to herein as an OMTP) configured to power on during discovery and to communicate optically with an OMTP reader for the purpose of extracting data.
1. An optical integrated circuit comprising: an optical transmitter configured for sending a data signal to an external optical receiving device; and one or more photovoltaic power sources powered by receiving light from a light source, said one or more photovoltaic power sources necessary and sufficient to power the optical transmitter; one or more logic circuits and memory coupled to the optical transmitter for transmitting data comprising identification data for the integrated circuit; wherein the integrated circuit is configured to concurrently receive from the light source (a) a synchronization signal and (b) power from the light source; and wherein the integrated circuit comprises a clock extraction function configured to extract a clock from the synchronization signal to establish a data rate frequency for transmitting the data. 2. The integrated circuit of claim 1, wherein each of the length, width and height of the optical integrated circuit is about 500 micrometers or less. 3. The integrated circuit of claim 1, wherein the optical transmitter for transmitting data comprises an LED. 4. The integrated circuit of claim 1, further comprising at least one sensor measuring environmental data for transmission by the optical transmitter. 5. (canceled) 6. The integrated circuit of claim 1, wherein the integrated circuit concurrently receives a synchronization signal and power from the light source as light operating at a first wavelength, and the optical transmitter sends a data signal with light at a second wavelength. 7. The integrated circuit of claim 6, wherein the second wavelength is longer than the first. 8. The integrated circuit of claim 1, wherein the optical transmitter operates in the infrared spectrum and the optical receiver operates in the visible spectrum. 9. 
The integrated circuit of claim 1, wherein the optical transmitter is formed of one or more pieces of silicon, and at least one photovoltaic power source is formed of one or more separate pieces of silicon. 10. (canceled) 11. An optical communication system for communicating with an optical integrated circuit, the system comprising: the optical integrated circuit according to claim 1; and an optical reader comprising a laser light source powering the photovoltaic power source and sending the synchronization signal and a photosensor for receiving light from the optical transmitter, the optical reader configured to extract the identification data. 12. The system of claim 11, wherein the laser light source is modulated for simultaneously providing energy and synchronization signals to the optical integrated circuit. 13. (canceled) 14. The system of claim 11, wherein the integrated circuit further comprises at least one sensor, the at least one sensor measuring the environment surrounding the optical integrated circuit, wherein environmental data from the sensor is encoded in light from the optical transmitter, and wherein the optical reader is configured to decode the environmental data. 15. The system of claim 11, wherein the optical transmitter operates in the infrared spectrum and the optical receiver operates in the visible spectrum. 16. A method for communicating between an optical integrated circuit according to claim 1 and an optical reader comprising: illuminating the optical integrated circuit with directed light from a modulated laser source in the optical reader; powering at least one photovoltaic cell of the optical integrated circuit; and transmitting the data signal with an optical transmitter of the optical integrated circuit that is powered by the at least one photovoltaic cell. 17. (canceled) 18. The method of claim 16, wherein the optical transmitter sends the data signal using light at a second longer wavelength. 19. (canceled) 20. 
The method of claim 16, wherein the data signal further comprises data from a sensor coupled to the optical integrated circuit. 21. An optical integrated circuit comprising: an optical transmitter configured for sending data to an external optical receiving device; and one or more photovoltaic power sources powered by receiving light from a light source, said one or more photovoltaic power sources necessary and sufficient to power the optical transmitter; one or more logic circuits and memory coupled to the optical transmitter for transmitting data comprising identification data for the integrated circuit; and wherein the optical transmitter is formed of one or more pieces of silicon, and at least one photovoltaic power source is formed of one or more separate pieces of silicon. 22. The integrated circuit of claim 21, wherein the integrated circuit is configured to concurrently receive from the light source (a) a synchronization signal and (b) power from the light source; and wherein the integrated circuit comprises a clock extraction function configured to extract a clock from the synchronization signal to establish a data rate frequency for transmitting the data. 23. The integrated circuit of claim 22, wherein the integrated circuit does not have an RF antenna. 24. The integrated circuit of claim 1, wherein each of the length, width and height of the optical integrated circuit is about 500 micrometers or less. 25. A method of operating an integrated circuit of claim 4, comprising embedding the integrated circuit in a biological cell and remotely obtaining therefrom environmental data from the sensor.
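Claim 1 above describes a clock extraction function that derives a transmit data rate from the synchronization signal carried on the reader's modulated light. A loose sketch of one way such extraction could work, measuring the spacing between rising edges of a sampled on/off waveform; the sampled-waveform model and all names are assumptions, not the patented circuit:

```python
# Hypothetical clock extraction: given a binary-sampled sync waveform,
# estimate the clock period as the mean interval between rising edges.
# The transmitter's data rate would then be set from this period.

def extract_clock_period(samples):
    """Return the average interval (in sample counts) between rising edges."""
    edges = [i for i in range(1, len(samples))
             if samples[i - 1] == 0 and samples[i] == 1]
    if len(edges) < 2:
        raise ValueError("need at least two rising edges to lock")
    intervals = [b - a for a, b in zip(edges, edges[1:])]
    return sum(intervals) / len(intervals)
```

Averaging over several edge intervals makes the estimate tolerant of isolated sampling jitter, which matters when the same modulated beam is simultaneously the power source and the timing reference.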
2,600
10,122
10,122
15,368,513
2,616
There is provided a system and method for adaptive rendered environments using user context. The method comprises determining user context data corresponding to a user of a virtual environment, altering a feature of the virtual environment using the user context data to obtain an altered feature, and rendering the altered feature of the virtual environment for display. The feature may include a non-player character in the virtual environment, such as eyesight focus of the non-player character or a physical action of the non-player character. The user context data may correspond to real world position data of a user and may be determined using a camera, for example through image recognition, or using a user device. Additionally, the virtual environment may include a cinematic, interactive game, or user generated content.
1-20. (canceled) 21: A method of using a system including a processor for use with a display, the method comprising: determining, using the processor, a viewing perspective of a user, and a user context data corresponding to a real-world position of the user of a virtual environment relative to the display, wherein the viewing perspective of the user switches between a first person view and a third person view; altering, using the processor when the viewing perspective of the user is the first person view, an eye contact of a non-player character in the virtual environment with the user based on the viewing perspective of the user and the user context data to obtain a first altered feature of the virtual environment; altering, using the processor when the viewing perspective of the user is the third person view, at least one of the eye contact and a physical action of the non-player character based on the viewing perspective of the user to obtain a second altered feature of the virtual environment; and rendering, using the processor, one of the first altered feature and the second altered feature of the virtual environment on the display. 22: The method of claim 21, wherein the determining the user context data uses a camera. 23: The method of claim 22, wherein the determining the user context data using a camera further includes using image recognition. 24: The method of claim 21, wherein the determining the user context data uses a user device. 25: The method of claim 21, wherein the virtual environment includes one of a cinematic sequence, interactive game, and user generated content. 
26: A system for use with a display, the system comprising: a processor configured to: determine a viewing perspective of a user and a user context data corresponding to a real-world position of the user of a virtual environment relative to the display, wherein the viewing perspective of the user switches between a first person view and a third person view; alter, when the viewing perspective of the user is the first person view, an eye contact of a non-player character in the virtual environment with the user based on the viewing perspective of the user and the user context data to obtain a first altered feature of the virtual environment; alter, when the viewing perspective of the user is the third person view, at least one of the eye contact and a physical action of the non-player character based on the viewing perspective of the user to obtain a second altered feature of the virtual environment; and render one of the first altered feature and the second altered feature of the virtual environment on the display. 27: The system of claim 26 further comprising: a camera; wherein the processor determines the user context data using the camera. 28: The system of claim 27, wherein determining the user context data using the camera further includes using image recognition. 29: The system of claim 26 further comprising: a user device; wherein the processor determines the user context data using the user device. 30: The system of claim 26, wherein the virtual environment includes one of a cinematic sequence, interactive game, and user generated content.
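Claims 21 and 26 above branch on the user's viewing perspective: in first-person view the non-player character's eye contact is altered using the user's real-world position, while in third-person view eye contact or a physical action is altered. A minimal sketch of that dispatch, where the dict return values and all names are illustrative assumptions rather than the patented system:

```python
# Hypothetical feature-selection step from claims 21/26: choose which
# NPC feature to alter based on the current viewing perspective.

def alter_npc(view, user_position):
    """Return the altered NPC feature to be rendered for this frame."""
    if view == "first_person":
        # Eye contact tracks the user's real-world position (user context).
        return {"feature": "eye_contact", "target": user_position}
    if view == "third_person":
        # A physical action of the NPC, e.g. turning toward the user's avatar.
        return {"feature": "physical_action", "action": "turn_toward_user"}
    raise ValueError(f"unknown viewing perspective: {view}")
```

The renderer would then draw whichever altered feature this step returns, matching the final "rendering one of the first altered feature and the second altered feature" limitation.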
2,600
10,123
10,123
14,673,440
2,689
Embodiments disclosed herein provide methods, systems, and computer readable storage media for facilitating enhanced communication with an application service provider based on medical telemetry collected by a user device. In a particular embodiment, a method provides collecting medical telemetry of a user of the user communication device and processing the medical telemetry to identify abnormalities therein. Upon identifying at least one abnormality in the medical telemetry, the method provides determining whether the at least one abnormality indicates that the user is experiencing a health issue. After determining that the at least one abnormality indicates that the user is experiencing the health issue, the method provides transferring a health notification indicating the health issue to the application service provider.
1. A non-transitory computer readable storage medium having instructions stored thereon that, when executed by a user communication device, direct the user communication device to communicate with an application service provider based on medical telemetry, the method comprising: collecting medical telemetry of a user of the user communication device; processing the medical telemetry to identify abnormalities therein; upon identifying at least one abnormality in the medical telemetry, determining whether the at least one abnormality indicates that the user is experiencing a health issue; after determining that the at least one abnormality indicates that the user is experiencing the health issue, transferring a health notification indicating the health issue to the application service provider. 2. The non-transitory computer readable storage medium of claim 1, wherein the method further comprises: after determining that the abnormality indicates that the user is experiencing the health issue, establishing a user communication with the application service provider. 3. The non-transitory computer readable storage medium of claim 2, wherein the application service provider further uses the medical information to determine at least one of a routing for the user communication and a priority for the user communication. 4. The non-transitory computer readable storage medium of claim 1, wherein compiling medical telemetry of the user comprises: receiving medical telemetry captured by a plurality of monitor devices external to the user communication device. 5. 
The non-transitory computer readable storage medium of claim 1, wherein processing the medical telemetry to identify abnormalities therein comprises: comparing the medical telemetry to a profile for the user that indicates normal values for the medical telemetry; and wherein an abnormality is identified if a value in the medical telemetry falls outside a corresponding value of the normal values for the medical telemetry. 6. The non-transitory computer readable storage medium of claim 1, wherein processing the medical telemetry to identify abnormalities therein comprises: comparing the medical telemetry to medical telemetry patterns that correspond to abnormalities; and wherein an abnormality is identified if a pattern in the medical telemetry substantially matches one of the medical telemetry patterns. 7. The non-transitory computer readable storage medium of claim 1, wherein determining whether the at least one abnormality indicates that the user is experiencing the health issue comprises: collecting non-medical telemetry of the user; and processing the non-medical telemetry and the at least one abnormality to determine whether the abnormality indicates that the user is experiencing the health issue, wherein the abnormality does not indicate that the user is experiencing the health issue if the non-medical telemetry indicates an alternate reason for the at least one abnormality, and the abnormality does indicate that the user is experiencing the health issue if the non-medical telemetry does not indicate an alternate reason for the at least one abnormality. 8. The non-transitory computer readable storage medium of claim 1, wherein the method further comprises: providing the user with an option to override the determination that the at least one abnormality indicates that the user is experiencing the health issue. 9. 
The non-transitory computer readable storage medium of claim 1, wherein the method further comprises: receiving information from the application service provider based on the medical information; and presenting the information to the user. 10. The non-transitory computer readable storage medium of claim 1, wherein transferring medical information indicating the health issue to the application service provider comprises: transferring the health notification in one of a Session Initiation Protocol (SIP) session request or a data channel associated with a WebRTC media connection request. 11. A method of operating an application service provider system to handle communications based on medical telemetry, the method comprising: receiving a health notification about a user from a user communication device, wherein the user communication device transfers the health notification in response to determining that medical telemetry collected by the user communication device includes at least one abnormality that indicates that the user is experiencing a health issue; and establishing a user communication with the user communication device. 12. The method of claim 11, further comprising: based on the health notification, at least one of prioritizing the user communication and routing the user communication. 13. The method of claim 11, further comprising: transferring a request for more detailed medical information to the user communication device; and after the user provides consent, receiving the more detailed medical information from the user communication device. 14. The method of claim 11, further comprising: providing a user profile to the user communication device, wherein the user communication device processes the medical telemetry with the user profile to determine whether the at least one abnormality indicates that the user is experiencing the health issue. 15. 
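Claim 7 discounts an abnormality when non-medical telemetry offers an alternate explanation (e.g., an elevated heart rate during exercise). A minimal sketch, assuming the non-medical telemetry has been reduced to an activity label and that the mapping of abnormalities to benign explanations is invented for illustration:

```python
def indicates_health_issue(abnormality, non_medical):
    """Claim 7's filter: the abnormality indicates a health issue only if the
    non-medical telemetry does NOT supply an alternate reason for it.
    The alternate-reason table is a hypothetical example."""
    alternate_reasons = {"heart_rate_bpm": {"running", "cycling"}}
    activity = non_medical.get("activity")
    return activity not in alternate_reasons.get(abnormality, set())

print(indicates_health_issue("heart_rate_bpm", {"activity": "running"}))  # False
print(indicates_health_issue("heart_rate_bpm", {"activity": "sitting"}))  # True
```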
The method of claim 14, further comprising: during the user communication, receiving feedback regarding the health issue; adjusting the user profile based on the feedback; and transferring the adjusted user profile to the user communication device. 16. An application service provider system for handling communications based on medical telemetry, the application service provider system comprising: a communication interface configured to receive a health notification about a user from a user communication device, wherein the user communication device transfers the health notification in response to determining that medical telemetry compiled by the user communication device includes at least one abnormality that indicates that the user is experiencing a health issue, and establish a user communication with the user communication device; and a processing system configured to determine a routing for the user communication based on the health notification. 17. The application service provider system of claim 16, further comprising: the processing system configured to prioritize the user communication based on the medical information. 18. The application service provider system of claim 16, further comprising the communication interface configured to: transfer a request for more detailed medical information to the user communication device, wherein the user communication device prompts the user for consent to transfer the more detailed medical information to the application service provider; and after the user provides consent, receive the more detailed medical information from the user communication device. 19. 
The application service provider system of claim 16, further comprising: the processing system configured to generate a user profile; and the communication interface configured to transfer a user profile to the user communication device, wherein the user communication device processes the medical telemetry with the user profile to determine whether the at least one abnormality indicates that the user is experiencing the health issue. 20. The application service provider system of claim 19, further comprising: a user interface configured to receive feedback regarding the health issue during the user communication; the processing system configured to adjust the user profile based on the feedback; and the communication interface configured to transfer the adjusted user profile to the user communication device.
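Claims 12 and 16 have the service provider prioritize and route the user communication based on the health notification. A sketch of that step using a priority queue; the severity levels and tie-breaking scheme are invented for illustration, not specified by the claims.

```python
import heapq

SEVERITY = {"cardiac": 0, "respiratory": 1, "general": 2}  # lower = more urgent

class CallRouter:
    """Queues incoming user communications by health-notification severity,
    breaking ties in arrival order."""
    def __init__(self):
        self._queue = []
        self._counter = 0

    def enqueue(self, caller_id, health_issue):
        priority = SEVERITY.get(health_issue, 2)
        heapq.heappush(self._queue, (priority, self._counter, caller_id))
        self._counter += 1

    def next_call(self):
        return heapq.heappop(self._queue)[2]

router = CallRouter()
router.enqueue("alice", "general")
router.enqueue("bob", "cardiac")
print(router.next_call())  # bob
```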
Embodiments disclosed herein provide methods, systems, and computer readable storage media for facilitating enhanced communication with an application service provider based on medical telemetry collected by a user device. In a particular embodiment, a method provides collecting medical telemetry of a user of the user communication device and processing the medical telemetry to identify abnormalities therein. Upon identifying at least one abnormality in the medical telemetry, the method provides determining whether the at least one abnormality indicates that the user is experiencing a health issue. After determining that the at least one abnormality indicates that the user is experiencing the health issue, the method provides transferring a health notification indicating the health issue to the application service provider.
2,600
10,124
10,124
15,387,734
2,694
A method for controlling vehicle systems in a vehicle includes providing a steering wheel having a plurality of sensors configured to sense contact on the steering wheel. The steering wheel has a left zone and a right zone. The method includes determining a left contact value based on one or more signals received from at least one of the plurality of sensors. The left contact value indicates contact with the steering wheel within the left zone. The method includes determining a right contact value based on the one or more signals received from the at least one of the plurality of sensors. The right contact value indicates contact with the steering wheel within the right zone. The method includes determining a driver state index based on the left contact value and the right contact value and modifying control of the vehicle systems based on the driver state index.
1. A computer-implemented method for controlling vehicle systems in a vehicle, comprising: providing a steering wheel having a plurality of sensors configured to sense contact on the steering wheel, the steering wheel having a left zone and a right zone; determining a left contact value based on one or more signals received from at least one of the plurality of sensors, wherein the left contact value indicates contact with the steering wheel within the left zone; determining a right contact value based on the one or more signals received from the at least one of the plurality of sensors, wherein the right contact value indicates contact with the steering wheel within the right zone; determining a driver state index based on the left contact value and the right contact value; and modifying control of the vehicle systems based on the driver state index. 2. The computer-implemented method of claim 1, wherein the left zone and the right zone are defined by a vertical planar line perpendicular to a center point of the steering wheel, wherein the left zone is further defined by a predetermined angle between the center point of the steering wheel and the vertical planar line within the left zone at 120 degrees and the right zone is further defined by a predetermined angle between the center point of the steering wheel and the vertical planar line within the right zone at 120 degrees. 3. 
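Claim 2 defines the left and right zones by a vertical planar line through the wheel's center point with a 120-degree span per side. A sketch that classifies a rim contact angle into a zone; the angle convention (degrees clockwise from the top of the wheel) and the exact placement of the 120-degree spans are assumptions for illustration.

```python
def classify_zone(angle_deg):
    """Assign a steering-wheel rim contact angle to the left or right zone.

    Angles are measured clockwise from the top of the wheel (0-360).
    The right zone is taken as (0, 120] and the left zone as [240, 360),
    giving each side the 120-degree span of claim 2; contacts near the
    bottom of the rim fall in neither zone."""
    a = angle_deg % 360
    if 0 < a <= 120:
        return "right"
    if 240 <= a < 360:
        return "left"
    return None

print(classify_zone(90))   # right  (3 o'clock position)
print(classify_zone(270))  # left   (9 o'clock position)
```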
The computer-implemented method of claim 1, including comparing the left contact value to a left contact threshold, wherein the left contact threshold is determined based on a left contact surface area of the steering wheel within the left zone, wherein the left contact surface area maximizes contact of a left hand with the steering wheel within the left zone, and comparing the right contact value to a right contact threshold, wherein the right contact threshold is determined based on a right contact surface area of the steering wheel within the right zone, wherein the right contact surface area maximizes contact of a right hand with the steering wheel within the right zone. 4. The computer-implemented method of claim 3, wherein determining the driver state index includes determining the driver state index based on comparing the left contact value to the left contact threshold and comparing the right contact value to the right contact threshold. 5. The computer-implemented method of claim 1, wherein the left contact value is a measurement of pressure of the contact with the steering wheel within the left zone, and the right contact value is a measurement of pressure of the contact with the steering wheel within the right zone. 6. The computer-implemented method of claim 1, wherein the left contact value indicates contact with the steering wheel within the left zone and a measurement of pressure of the contact with the steering wheel within the left zone, and the right contact value indicates contact with the steering wheel within the right zone and a measurement of pressure of the contact with the steering wheel within the right zone. 7. The computer-implemented method of claim 1, wherein the driver state index is a value on a continuum of values correlating with a measurement of a state of a driver. 8. The computer-implemented method of claim 1, wherein the driver state index is a measurement of perceived risk during driving. 9. 
The computer-implemented method of claim 1, including determining a vehicular state based on vehicle data received from the vehicle systems. 10. The computer-implemented method of claim 9, wherein modifying the control of the vehicle systems includes modifying the control of the vehicle systems based on the driver state index and the vehicular state. 11. A system for controlling vehicle systems in a vehicle, comprising: a steering wheel having a plurality of sensors configured to sense contact on the steering wheel, the steering wheel having a left zone and a right zone; and a processor, wherein the processor receives one or more signals from at least one of the plurality of sensors and determines a left contact value based on the one or more signals, the left contact value indicating contact with the steering wheel within the left zone, and the processor determines a right contact value based on the one or more signals, the right contact value indicating contact with the steering wheel within the right zone, wherein the processor determines a driver state index based on the left contact value and the right contact value, and the processor controls the vehicle systems based on the driver state index. 12. The system of claim 11, wherein the processor receives vehicle data from vehicle sensors of the vehicle and upon determining a non-driving passenger is present in the vehicle based on the vehicle data, the processor controls the vehicle systems based on the driver state index. 13. The system of claim 11, wherein the processor determines the driver state index based on comparing the left contact value to a left contact threshold, and comparing the right contact value to a right contact threshold. 14. The system of claim 11, wherein the left contact value is a measurement of pressure of the contact with the steering wheel within the left zone, and the right contact value is a measurement of pressure of the contact with the steering wheel within the right zone. 15. 
The system of claim 11, including the processor determining a vehicular state based on vehicle data from the vehicle systems, and wherein the processor controls the vehicle systems based on the driver state index and the vehicular state. 16. A non-transitory computer readable medium comprising instructions that when executed by a processor perform a method for controlling vehicle systems in a vehicle, comprising: providing a steering wheel having a plurality of sensors configured to sense contact on the steering wheel, the steering wheel having a left zone and a right zone; determining a left contact value based on one or more signals received from at least one of the plurality of sensors, wherein the left contact value indicates contact with the steering wheel within the left zone; determining a right contact value based on the one or more signals received from the at least one of the plurality of sensors, wherein the right contact value indicates contact with the steering wheel within the right zone; determining a driver state index based on the left contact value and the right contact value; and modifying control of the vehicle systems based on the driver state index. 17. The non-transitory computer readable medium of claim 16, including comparing the left contact value to a left contact threshold, comparing the right contact value to a right contact threshold, and determining the driver state index based on said comparing. 18. The non-transitory computer readable medium of claim 16, wherein the left contact value is a measurement of pressure of the contact with the steering wheel within the left zone, and the right contact value is a measurement of pressure of the contact with the steering wheel within the right zone. 19. The non-transitory computer readable medium of claim 16, including determining a vehicular state based on vehicle data, wherein the vehicular state is a hazard. 20. 
The non-transitory computer readable medium of claim 19, including determining a risk level of the hazard based on at least one of the driver state index, the left contact value, and the right contact value, wherein modifying the control of the vehicle systems is based on the risk level.
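Claims 3, 4, and 7 describe combining the two zone readings, each compared against its threshold, into a driver state index on a continuum. A minimal sketch; the threshold values and the averaging scheme are illustrative choices, since the claims only require a value correlating with the driver's state.

```python
def driver_state_index(left_contact, right_contact,
                       left_threshold=0.6, right_threshold=0.6):
    """Map the left/right contact (or pressure) values to an index on [0, 1].

    Each zone reading is normalized against its threshold (claim 3) and the
    two are averaged; 1.0 corresponds to both hands firmly on the wheel.
    Thresholds of 0.6 are hypothetical."""
    left_ok = min(left_contact / left_threshold, 1.0)
    right_ok = min(right_contact / right_threshold, 1.0)
    return (left_ok + right_ok) / 2.0

print(driver_state_index(0.6, 0.6))  # 1.0 -> two-handed grip at threshold
print(driver_state_index(0.0, 0.6))  # 0.5 -> one-handed grip
```

A vehicle system could then scale its intervention (claims 19-20) with this index, e.g. raising a hazard's risk level as the index falls.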
2,600
10,125
10,125
14,034,919
2,613
The creation of digital postcards allows users to combine location information with images for display in a postcard interface. An image selection is received from a user. Location data associated with the image and describing a location at which the image was captured is accessed. One or more candidate locations located within a threshold distance of the location are identified, and a selection of a candidate location is received from the user. Location information associated with the selected candidate location is accessed, and the image is modified to include one or more portions of the accessed location information. The modified image is stored as a digital postcard in a memory for subsequent access and display.
1. A method comprising: receiving a selection of an image from a user; accessing location data associated with the image, the location data describing a location at which the image was captured; identifying one or more candidate locations located within a threshold distance of the location described by the location data; receiving a selection of a candidate location from the user; accessing location information associated with the selected location; modifying the image to create a digital postcard, wherein modifying the image comprises combining one or more portions of the location information including textual content describing the selected candidate location within the image such that the one or more portions of the location information are displayed when the digital postcard is displayed; and storing the digital postcard in a non-transitory computer-readable storage medium. 2. The method of claim 1, wherein the image is captured by the user with a client device. 3. The method of claim 2, wherein the location data is captured by a GPS receiver within the client device. 4. The method of claim 1, wherein the image is selected from among a plurality of images stored at an image repository website. 5. The method of claim 1, wherein the location data is included within metadata of the image. 6. The method of claim 1, wherein identifying one or more candidate locations comprises: querying a location database with the location data; and receiving one or more candidate locations from the location database. 7. The method of claim 1, further comprising: displaying the one or more candidate locations to the user; wherein receiving a selection of a candidate location comprises receiving a selection of a displayed candidate location. 8. The method of claim 1, further comprising associating the selected candidate location with the digital postcard. 9. 
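The "candidate locations located within a threshold distance" step of claim 1 amounts to a geospatial range query against the image's GPS coordinates. A sketch using the haversine great-circle distance; the place list and 200 m threshold are stand-ins for the patent's location database and configured threshold.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def candidate_locations(photo_lat, photo_lon, places, threshold_m=200.0):
    """Return names of places within threshold_m of the photo's coordinates.
    `places` is an assumed list of (name, lat, lon) tuples standing in for
    the location database the claims query."""
    return [name for name, lat, lon in places
            if haversine_m(photo_lat, photo_lon, lat, lon) <= threshold_m]

places = [("Ferry Building", 37.7955, -122.3937),
          ("Golden Gate Bridge", 37.8199, -122.4783)]
print(candidate_locations(37.7956, -122.3935, places))  # ['Ferry Building']
```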
The method of claim 1, wherein accessing location information comprises querying a location database with the selected candidate location. 10. The method of claim 1, wherein the accessed location information comprises one or more of: a location name, a city in which the location is located, an address of the location, a location category, and a time or date of image capture. 11. The method of claim 1, wherein modifying the image comprises overlaying text associated with the location information over the image. 12. The method of claim 1, wherein modifying the image further comprises performing one or more image processing operations on the image. 13. The method of claim 1, further comprising receiving a location rating from the user. 14. The method of claim 13, wherein modifying the image further comprises including the location rating within the image such that the location rating is displayed when the digital postcard is displayed. 15. The method of claim 1, further comprising receiving a location review from the user. 16. The method of claim 13, wherein modifying the image further comprises including one or more portions of the location review within the image such that the one or more portions of the location review are displayed when the digital postcard is displayed. 17. The method of claim 1, further comprising: receiving a selection of a guide from a user from a plurality of guides, each guide associated with one or more digital postcards; and associating the digital postcard with the selected guide. 18. The method of claim 1, further comprising: uploading the digital postcard to a digital postcard database, the digital postcard database comprising a plurality of digital postcards and configured to display one or more digital postcards to requesting users. 19. The method of claim 18, wherein the digital postcard database comprises a searchable website configured to allow requesting users to search for digital postcards. 20. 
The method of claim 1, further comprising: receiving a selection of a target user from the user; and sharing the digital postcard with the target user. 21. A system comprising: an input configured to receive a selection of an image from a user and to receive a selection of a candidate location from the user; a location interface configured to: access location data associated with the image, the location data describing a location at which the image was captured; identify one or more candidate locations located within a threshold distance of the location described by the location data; present the identified candidate locations to the user; and access location information associated with the selected candidate location; and a digital postcard engine configured to: modify the image to create a digital postcard, wherein modifying the image comprises combining one or more portions of the accessed location information including textual content describing the selected candidate location within the image such that the one or more portions of the location information are displayed when the digital postcard is displayed; and store the digital postcard in a non-transitory computer-readable storage medium. 22. 
A non-transitory computer-readable storage medium comprising computer-executable instructions, the instructions including instructions for: receiving a selection of an image from a user; accessing location data associated with the image, the location data describing a location at which the image was captured; identifying one or more candidate locations located within a threshold distance of the location described by the location data; receiving a selection of a candidate location from the user; accessing location information associated with the selected location; modifying the image to create a digital postcard, wherein modifying the image comprises combining one or more portions of the location information including textual content describing the selected candidate location within the image such that the one or more portions of the location information are displayed when the digital postcard is displayed; and storing the digital postcard in a second non-transitory computer-readable storage medium.
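The claimed digital-postcard method (find candidate locations within a threshold distance of the capture point, then combine textual location information with the image) can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the haversine distance, the place records, and the dictionary "overlay" representation are all assumptions made for the example.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def candidate_locations(capture, places, threshold_km=1.0):
    # Identify known places within the threshold distance of the capture point.
    lat, lon = capture
    return [p for p in places
            if haversine_km(lat, lon, p["lat"], p["lon"]) <= threshold_km]

def make_postcard(image, place):
    # "Modify" the image by attaching textual location-information overlays;
    # a real system would render this text onto the image pixels.
    return {"image": image,
            "overlays": [place["name"], place["city"]],
            "location": place["name"]}

# Hypothetical place database and capture coordinates for illustration.
places = [
    {"name": "Ferry Building", "city": "San Francisco", "lat": 37.7955, "lon": -122.3937},
    {"name": "Golden Gate Bridge", "city": "San Francisco", "lat": 37.8199, "lon": -122.4783},
]
cands = candidate_locations((37.7950, -122.3940), places, threshold_km=1.0)
postcard = make_postcard("IMG_0042.jpg", cands[0])
```

Only the nearby Ferry Building survives the 1 km threshold here; the user would then pick from `cands` before the overlay step, matching the claim's "receiving a selection of a candidate location from the user".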
2,600
10,126
10,126
15,051,952
2,616
A method and system for directing image rendering are implemented in a computer system including a plurality of processors. A graphics processing unit (GPU) control application program interface (API) determines one or more processors in the system on which to execute one or more commands. A signal is transmitted to each of the one or more processors indicating which of the one or more commands are to be executed by that processor. The one or more processors execute their respective command. A request is transmitted to each of the one or more processors to transfer information to one another once processing is complete, and an image is rendered based upon the processed information by at least one processor and the received transferred information from at least another processor.
1. A method for directing image rendering, implemented in a computer system including a plurality of processors, comprising: determining, by a graphics processing unit (GPU) control application program interface (API), one or more processors in the system on which to execute one or more commands; transmitting a signal to each of the one or more processors indicating which of the one or more commands are to be executed by that processor; executing, by the one or more processors, their respective command; transmitting a request to each of the one or more processors to transfer information to one another once processing is complete; and rendering an image based upon the processed information by at least one processor and the received transferred information from at least another processor. 2. The method of claim 1 wherein the plurality of processors include a plurality of GPUs. 3. The method of claim 1 wherein the determining includes determining that a first command is to be processed by a first processor and a second command is to be processed by a second processor. 4. The method of claim 3 wherein the first processor renders the image based upon the processed information and receives the transferred information from the second processor. 5. The method of claim 4, further comprising instructing the first processor to delay executing the first command until receiving the transferred information from the second processor. 6. The method of claim 5 wherein the instructing the first processor to delay executing the first command until receiving the transferred information from the second processor includes transmitting a GPU sync command. 7. The method of claim 1 wherein the signal indicating which of the one or more commands are to be executed by that processor is a GPU mask command. 8. 
The method of claim 1 wherein the transmitting the request to each of the one or more processors to transfer information to one another includes sending a GPU transfer command to the processor being directed to transfer its information. 9. A system, comprising: a first processor in communication with an application entity; a second processor in communication with the application entity; and a display in communication with the first processor; wherein the first processor is configured to receive a first command from the application entity indicating that the first processor is to execute the first command, wherein the second processor is configured to receive a second command from the application entity indicating that the second processor is to execute the second command and a third command from the application entity that the second processor is to transfer information to the first processor upon completion of execution of the second command, and wherein the first processor is further configured to render an image to the display based upon the processed command by the first processor and the received transferred information from the second processor. 10. The system of claim 9 wherein the application entity includes a graphics processing unit (GPU) control application program interface (API) that determines the command to execute on the first processor and the command to execute on the second processor. 11. The system of claim 10, wherein the application entity transmits a signal to the first processor instructing the first processor to delay executing the first command until receiving the transferred information from the second processor. 12. The system of claim 11 wherein the instructing the first processor to delay executing the first command until receiving the transferred information from the second processor includes transmitting a GPU sync command. 13. The system of claim 10 wherein the first command and the second command are GPU mask commands. 14. 
The system of claim 9 wherein the third command includes sending a GPU transfer command to the processor being directed to transfer its information. 15. A non-transitory computer readable storage medium, having instructions recorded thereon that, when executed by a computing device, cause the computing device to perform operations comprising: determining, by a graphics processing unit (GPU) control application program interface (API), one or more processors in the system on which to execute one or more commands; transmitting a signal to each of the one or more processors indicating which of the one or more commands are to be executed by that processor; executing, by the one or more processors, their respective command; transmitting a request to each of the one or more processors to transfer information to one another once processing is complete; and rendering an image based upon the processed information by at least one processor and the received transferred information from at least another processor. 16. The non-transitory computer readable storage medium of claim 15 wherein the determining includes determining that a first command is to be processed by a first processor and a second command is to be processed by a second processor. 17. The non-transitory computer readable storage medium of claim 16 wherein the first processor renders the image based upon the processed information and receives the transferred information from the second processor. 18. The non-transitory computer readable storage medium of claim 17, further comprising instructing the first processor to delay executing the first command until receiving the transferred information from the second processor. 19. The non-transitory computer readable storage medium of claim 18 wherein the instructing the first processor to delay executing the first command until receiving the transferred information from the second processor includes transmitting a GPU sync command. 20. 
The non-transitory computer readable storage medium of claim 15 wherein the signal indicating which of the one or more commands are to be executed by that processor is a GPU mask command.
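The mask/execute/transfer/render flow in these claims can be simulated with plain objects. This sketch is illustrative only: `SimGPU`, the string "results", and the convention that GPU 0 composes the final frame are assumptions for the example, not the patent's actual commands.

```python
class SimGPU:
    # Toy stand-in for a GPU: executes one command and holds its result.
    def __init__(self, gpu_id):
        self.gpu_id = gpu_id
        self.result = None
        self.inbox = []   # results transferred in from other GPUs

    def execute(self, command):
        self.result = f"{command}@gpu{self.gpu_id}"

    def transfer_to(self, other):
        other.inbox.append(self.result)

def render(gpus, assignments):
    # assignments maps GPU index -> command (the "GPU mask" step: each
    # processor is told which command it is to execute).
    for idx, cmd in assignments.items():
        gpus[idx].execute(cmd)
    # Transfer step: secondary GPUs send their results to GPU 0 once
    # processing is complete (the "GPU transfer" request).
    for idx in assignments:
        if idx != 0:
            gpus[idx].transfer_to(gpus[0])
    # GPU 0 renders the frame from its own result plus the transfers.
    return [gpus[0].result] + gpus[0].inbox

gpus = [SimGPU(0), SimGPU(1)]
frame = render(gpus, {0: "draw_geometry", 1: "draw_shadows"})
```

A real driver would also need the claimed sync command, i.e. GPU 0 must delay compositing until the transfer from GPU 1 arrives; here that ordering is trivially enforced by running the loops sequentially.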
2,600
10,127
10,127
15,291,702
2,687
A receiving device may assign a remote-control device of the receiving device to a special mode by storing an association of an identifier unique to the remote-control device with the particular special mode. The receiving device may receive a command from the remote-control device and determine that a special mode has been assigned to the remote-control device based on the stored association of the identifier unique to the remote-control device with the particular special mode. The receiving device then interprets the command received from the remote-control device according to how commands are to be processed in the special mode. Assigning the remote-control device to the special mode may cause the receiving device to execute a different command or process than it would normally perform when receiving such a command from a remote-control device that is not assigned to the special mode.
1. A method for providing remote-control special modes, the method comprising: receiving, by a receiving device, a command from a remote-control device of the receiving device to control a function of the receiving device; in response to receiving the command from the remote-control device, determining, by the receiving device, whether the remote-control device is assigned to a special mode; if it is determined by the receiving device that the remote-control device is assigned to a special mode, then: interpreting, by the receiving device, the command received from the remote-control device according to the special mode; and executing, by the receiving device, the command interpreted according to the special mode; and if it is determined by the receiving device that the remote-control device is not assigned to a special mode, then executing the command as received by the receiving device from the remote-control device. 2. The method of claim 1 wherein the determining, by the receiving device, whether the remote-control device is assigned to a special mode includes: reading, by the receiving device, an identifier of the remote-control device received with the command received from the remote-control device; determining, by the receiving device, whether the received identifier of the remote-control device is associated with a special mode; if the received identifier of the remote-control device is determined by the receiving device to be associated with a special mode, then determining, by the receiving device, that the remote-control device is assigned to the special mode; and if the received identifier of the remote-control device is determined by the receiving device to not be associated with a special mode, then determining, by the receiving device, that the remote-control device is not assigned to a special mode. 3. 
The method of claim 2 wherein the determining, by the receiving device, whether the received identifier of the remote-control device is associated with a special mode includes determining whether the received identifier of the remote-control device is associated with one of a plurality of special modes recognized by the receiving device. 4. The method of claim 1 wherein the special mode is a music channel mode in which commands from the remote-control device to change to a next channel on the receiving device are interpreted by the receiving device to change channels within a list of music service channels, even if a current channel of the receiving device is not a music service channel and the next channel is not a music service channel. 5. The method of claim 1 wherein the special mode is a limited use mode in which particular commands from the remote-control device are ignored by the receiving device. 6. The method of claim 1 wherein the determining, by the receiving device, whether the remote-control device is assigned to a special mode includes, in response to receiving the command from the remote-control device, determining, by the receiving device, that the remote-control device is currently assigned to a special music mode, and wherein executing, by the receiving device, the command interpreted according to the special mode includes: the receiving device taking the receiving device out of a standby mode if the receiving device is in the standby mode; and if the receiving device determines the command from the remote-control device is a command to change to a next channel, the receiving device changing a current channel to a music service channel within a list of music service channels even if a current channel of the receiving device is not a music service channel and the next channel is not a music service channel. 7. 
The method of claim 6 wherein the receiving device changing the channel to a music service channel in a list of music service channels includes: determining, by the receiving device, whether the remote-control device is currently associated with a list of favorite music service channels; and if it is determined, by the receiving device, that the remote-control device is currently associated with a list of favorite music service channels, then changing to a music service channel within the list of favorite music service channels currently associated with the remote-control device. 8. The method of claim 6 wherein executing, by the receiving device, the command interpreted according to the special mode further includes: the receiving device inserting, into audio output of the music service channel to which the receiving device is changing, an audible indication of a channel identifier of the music service channel to which the receiving device is changing. 9. The method of claim 6 wherein executing, by the receiving device, the command interpreted according to the special mode further includes: the receiving device outputting an audible indication of a status of the receiving device in conjunction with changing to the music service channel within the list of music service channels. 10. 
The method of claim 1 further comprising: receiving, by the receiving device, another command from the remote-control device of the receiving device to control a function of the receiving device; in response to receiving the other command from the remote-control device, determining, by the receiving device, that the remote-control device is currently assigned to a special music mode; in response to the determining, by the receiving device, that the remote-control device is currently assigned to the special music mode: if the receiving device determines the command from the remote-control device is a command to change to a next channel, the receiving device determining whether a current status of the receiving device does not allow changing to another channel; and if the receiving device determines the current status of the receiving device does not allow changing to another channel, the receiving device outputting an audio signal indicating that the current status of the receiving device does not allow changing to another channel. 11. The method of claim 1 further comprising: receiving, by the receiving device, another command from the remote-control device of the receiving device to control a function of the receiving device; in response to receiving the other command from the remote-control device, determining, by the receiving device, that the remote-control device is currently assigned to a special limited use mode; in response to the determining, by the receiving device, that the remote-control device is currently assigned to the special limited use mode: the receiving device determining whether the received other command is in a list of commands that are limited according to the special limited use mode; and if the receiving device determines the received other command is in a list of commands that are limited according to the special limited use mode, the receiving device determining not to execute the received other command. 12. 
The method of claim 11 wherein the list of commands that are limited according to the special limited use mode are commands to do one or more of: change settings of the receiving device and activate a menu on a display device in operable communication with the receiving device. 13. The method of claim 11 wherein the list of commands that are limited according to the special limited use mode are commands to change to channels other than channels on a list of allowed channels. 14. The method of claim 11 wherein the list of commands that are limited according to the special limited use mode are commands to change to a restricted list of channels. 15. The method of claim 11 wherein the list of commands that are limited according to the special limited use mode are commands to perform one or more operations including at least one of: recording content, playing recorded content, performing menu operations, performing guide operations, changing system settings, changing receiving device settings, changing user preferences, changing menu settings, changing guide settings, changing favorites lists, playing particular programs, playing particular content, switching to particular channels, changing source input, changing remote-control device modes, changing parental control settings, changing user credentials, changing television settings, changing auxiliary device settings, changing volume, turning on or off muting, changing closed captioning settings, changing subtitle settings, changing video settings, changing audio settings, changing display mode settings, changing content recording settings, changing future program recording settings, changing alerts, changing home automation settings and changing home security settings. 16. 
The method of claim 11, further comprising: if it is determined by the receiving device, in response to receiving the other command, that the remote-control device is currently assigned to a special limited use mode, then the receiving device determining not to execute particular commands received from the remote-control device until a command is received from a different remote-control device that is determined by the receiving device to currently not be in the special limited use mode. 17. The method of claim 11, further comprising: if it is determined by the receiving device, in response to receiving the other command, that the remote-control device is currently assigned to a special limited use mode, then the receiving device disabling one or more functions of the receiving device until a command is received from a different remote-control device that is determined by the receiving device to currently not be in the special limited use mode. 18. The method of claim 1 wherein the interpreting, by the receiving device, the command received from the remote-control device according to the special mode includes: determining, by the receiving device, whether the command received from the remote-control device is in a list of commands associated with the special mode; if it is determined, by the receiving device, that the command received from the remote-control device is in a list of commands associated with the special mode then, instead of executing the received command, the receiving device using the command received from the remote-control device to find a stored special mode process to be performed that is mapped to the command received from the remote-control device; and using, by the receiving device, the stored special mode process as the command interpreted according to the special mode. 19. The method of claim 18 wherein the special mode process to be performed that is mapped to the command received from the remote-control device is a sequence of commands. 
20. The method of claim 18 wherein the special mode process to be performed that is mapped to the command received from the remote-control device is to ignore the command received from the remote-control device. 21. A system for providing remote-control special modes, comprising: at least one controller of a receiving device; and a memory coupled to the at least one controller of the receiving device, the memory having computer-executable instructions stored thereon that, when executed by the at least one controller of the receiving device, cause the at least one controller of the receiving device to: be able to receive a command to have a remote-control device of the receiving device be in a special mode; in response to the received command to have a remote-control device of the receiving device be in a special mode, record on the receiving device an association of an identifier of the remote-control device with the special mode; and interpret commands received by the receiving device from the remote-control device based on the association of the identifier of the remote-control device with the special mode. 22. The system of claim 21, wherein the computer-executable instructions, when executed by the at least one controller of the receiving device, further cause the at least one controller of the receiving device to execute commands received by the receiving device from the remote-control device based on the association of the identifier of the remote-control device with the special mode. 23. The system of claim 21, wherein the computer-executable instructions, when executed by the at least one controller of the receiving device, further cause the at least one controller of the receiving device to determine to not execute particular restricted commands received by the receiving device from the remote-control device based on the association of the identifier of the remote-control device with the special mode. 24. 
A non-transitory computer-readable storage medium having computer executable instructions thereon that, when executed by at least one computer processor, cause the at least one computer processor to: cause a receiving device to assign a remote-control device of the receiving device to a special mode; interpret commands received by the receiving device from the remote-control device assigned to the special mode based on an association stored on the receiving device of an identifier of the remote-control device with the special mode; and respond to a command received by the receiving device from the remote-control device assigned to the special mode differently than the same command received by the receiving device from a different remote-control device that is not assigned by the receiving device to the special mode. 25. The non-transitory computer-readable storage medium of claim 24 wherein the computer executable instructions, when executed, cause the at least one computer processor to respond to a command received by the receiving device from the remote-control device assigned to the special mode differently than the same command received by the receiving device from a different remote-control device that is not assigned by the receiving device to the special mode by causing the at least one computer processor to ignore the command received by the receiving device from the remote-control device assigned to the special mode and to not ignore the same command received by the receiving device from the different remote-control device that is not assigned by the receiving device to the special mode. 26. 
The non-transitory computer-readable storage medium of claim 24 wherein the command received by the receiving device from the remote-control device assigned to the special mode is a command to change to a next channel and the computer executable instructions, when executed, cause the at least one computer processor to respond to the command received by the receiving device from the remote-control device assigned to the special mode differently than the same command received by the receiving device from a different remote-control device that is not assigned by the receiving device to the special mode by causing the at least one computer processor to: in response to the command received by the receiving device from the remote-control device assigned to the special mode to change to a next channel: take the receiving device out of a standby mode if the receiving device is in the standby mode; and change a current channel to a music service channel within a list of music service channels even if a current channel of the receiving device is not a music service channel and the next channel is not a music service channel; and in response to a same command to change to a next channel received by the receiving device from the different remote-control device that is not assigned by the receiving device to the special mode, change to the next channel. 27. 
The non-transitory computer-readable storage medium of claim 24 wherein the computer executable instructions thereon, when executed, further cause the at least one computer processor to: be able to receive a command from the remote-control device to assign the remote-control device of the receiving device to the special mode; and in response to the command received from the remote-control device to assign the remote-control device of the receiving device to the special mode, cause the receiving device to assign the remote-control device of the receiving device to the special mode by storing the association on the receiving device of the identifier of the remote-control device with the special mode.
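The core mechanism running through these claims — store an association of a remote-control identifier with a special mode, then branch on that association when a command arrives — can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation; the class name, method names, and string return values are all assumptions made for demonstration.

```python
# Minimal sketch of the claimed dispatch: a receiving device records an
# association of a remote's unique identifier with a special mode, then
# interprets each incoming command according to that association.
# All names here are illustrative assumptions.

class ReceivingDevice:
    def __init__(self):
        # remote-control identifier -> assigned special mode (if any)
        self._mode_by_remote = {}

    def assign_special_mode(self, remote_id, mode):
        """Record on the device an association of the remote's ID with a mode."""
        self._mode_by_remote[remote_id] = mode

    def clear_special_mode(self, remote_id):
        """Remove the association, returning the remote to normal handling."""
        self._mode_by_remote.pop(remote_id, None)

    def handle_command(self, remote_id, command):
        """Interpret the command based on the remote's assigned mode, if any."""
        mode = self._mode_by_remote.get(remote_id)
        if mode is None:
            return f"execute:{command}"   # no special mode: execute as received
        return f"{mode}:{command}"        # mode-specific interpretation
```

Note how the same command produces different behavior depending only on which remote sent it, which is the distinction the independent claims turn on.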
A receiving device may assign a remote-control device of the receiving device to a special mode by storing an association of an identifier unique to the remote-control device with the particular special mode. The receiving device may receive a command from the remote-control device and determine that there has been a special mode assigned to the remote-control device based on the stored association of the identifier unique to the remote-control device with the particular special mode. The receiving device will then interpret the command received from the remote-control device according to how commands are to be processed in the special mode. The receiving device having the remote-control device assigned to the special mode may cause the receiving device to execute a different command or process than it would have normally performed when receiving such a command from a remote-control device that is not assigned to the special mode.

1. A method for providing remote-control special modes, the method comprising: receiving, by a receiving device, a command from a remote-control device of the receiving device to control a function of the receiving device; in response to receiving the command from the remote-control device, determining, by the receiving device, whether the remote-control device is assigned to a special mode; if it is determined by the receiving device that the remote-control device is assigned to a special mode, then: interpreting, by the receiving device, the command received from the remote-control device according to the special mode; and executing, by the receiving device, the command interpreted according to the special mode; and if it is determined by the receiving device that the remote-control device is not assigned to a special mode, then executing the command as received by the receiving device from the remote-control device. 2. 
The method of claim 1 wherein the determining, by the receiving device, whether the remote-control device is assigned to a special mode includes: reading, by the receiving device, an identifier of the remote-control device received with the command received from the remote-control device; determining, by the receiving device, whether the received identifier of the remote-control device is associated with a special mode; if the received identifier of the remote-control device is determined by the receiving device to be associated with a special mode, then determining, by the receiving device, that the remote-control device is assigned to the special mode; and if the received identifier of the remote-control device is determined by the receiving device to not be associated with a special mode, then determining, by the receiving device, that the remote-control device is not assigned to a special mode. 3. The method of claim 2 wherein the determining, by the receiving device, whether the received identifier of the remote-control device is associated with a special mode includes determining whether the received identifier of the remote-control device is associated with one of a plurality of special modes recognized by the receiving device. 4. The method of claim 1 wherein the special mode is a music channel mode in which commands from the remote-control device to change to a next channel on the receiving device are interpreted by the receiving device to change channels within a list of music service channels, even if a current channel of the receiving device is not a music service channel and the next channel is not a music service channel. 5. The method of claim 1 wherein the special mode is a limited use mode in which particular commands from the remote-control device are ignored by the receiving device. 6. 
The method of claim 1 wherein the determining, by the receiving device, whether the remote-control device is assigned to a special mode includes, in response to receiving the command from the remote-control device, determining, by the receiving device, that the remote-control device is currently assigned to a special music mode, and wherein executing, by the receiving device, the command interpreted according to the special mode includes: the receiving device taking the receiving device out of a standby mode if the receiving device is in the standby mode; and if the receiving device determines the command from the remote-control device is a command to change to a next channel, the receiving device changing a current channel to a music service channel within a list of music service channels even if a current channel of the receiving device is not a music service channel and the next channel is not a music service channel. 7. The method of claim 6 wherein the receiving device changing the channel to a music service channel in a list of music service channels includes: determining, by the receiving device, whether the remote-control device is currently associated with a list of favorite music service channels; and if it is determined, by the receiving device, that the remote-control device is currently associated with a list of favorite music service channels, then changing to a music service channel within the list of favorite music service channels currently associated with the remote-control device. 8. The method of claim 6 wherein executing, by the receiving device, the command interpreted according to the special mode further includes: the receiving device inserting, into audio output of the music service channel to which the receiving device is changing, an audible indication of a channel identifier of the music service channel to which the receiving device is changing. 9. 
The method of claim 6 wherein executing, by the receiving device, the command interpreted according to the special mode further includes: the receiving device outputting an audible indication of a status of the receiving device in conjunction with changing to the music service channel within the list of music service channels. 10. The method of claim 1 further comprising: receiving, by the receiving device, another command from the remote-control device of the receiving device to control a function of the receiving device; in response to receiving the other command from the remote-control device, determining, by the receiving device, that the remote-control device is currently assigned to a special music mode; in response to the determining, by the receiving device, that the remote-control device is currently assigned to the special music mode: if the receiving device determines the command from the remote-control device is a command to change to a next channel, the receiving device determining whether a current status of the receiving device does not allow changing to another channel; and if the receiving device determines the current status of the receiving device does not allow changing to another channel, the receiving device outputting an audio signal indicating that the current status of the receiving device does not allow changing to another channel. 11. 
The method of claim 1 further comprising: receiving, by the receiving device, another command from the remote-control device of the receiving device to control a function of the receiving device; in response to receiving the other command from the remote-control device, determining, by the receiving device, that the remote-control device is currently assigned to a special limited use mode; in response to the determining, by the receiving device, that the remote-control device is currently assigned to the special limited use mode: the receiving device determining whether the received other command is in a list of commands that are limited according to the special limited use mode; and if the receiving device determines the received other command is in a list of commands that are limited according to the special limited use mode, the receiving device determining not to execute the received other command. 12. The method of claim 11 wherein the list of commands that are limited according to the special limited use mode are commands to do one or more of: change settings of the receiving device and activate a menu on a display device in operable communication with the receiving device. 13. The method of claim 11 wherein the list of commands that are limited according to the special limited use mode are commands to change to channels other than channels on a list of allowed channels. 14. The method of claim 11 wherein the list of commands that are limited according to the special limited use mode are commands to change to a restricted list of channels. 15. 
The method of claim 11 wherein the list of commands that are limited according to the special limited use mode are commands to perform one or more operations including at least one of: recording content, playing recorded content, performing menu operations, performing guide operations, changing system settings, changing receiving device settings, changing user preferences, changing menu settings, changing guide settings, changing favorites lists, playing particular programs, playing particular content, switching to particular channels, changing source input, changing remote-control device modes, changing parental control settings, changing user credentials, changing television settings, changing auxiliary device settings, changing volume, turning on or off muting, changing closed captioning settings, changing subtitle settings, changing video settings, changing audio settings, changing display mode settings, changing content recording settings, changing future program recording settings, changing alerts, changing home automation settings and changing home security settings. 16. The method of claim 11, further comprising: if it is determined by the receiving device, in response to receiving the other command, that the remote-control device is currently assigned to a special limited use mode, then the receiving device determining not to execute particular commands received from the remote-control device until a command is received from a different remote-control device that is determined by the receiving device to currently not be in the special limited use mode. 17. 
The method of claim 11, further comprising: if it is determined by the receiving device, in response to receiving the other command, that the remote-control device is currently assigned to a special limited use mode, then the receiving device disabling one or more functions of the receiving device until a command is received from a different remote-control device that is determined by the receiving device to currently not be in the special limited use mode. 18. The method of claim 1 wherein the interpreting, by the receiving device, the command received from the remote-control device according to the special mode includes: determining, by the receiving device, whether the command received from the remote-control device is in a list of commands associated with the special mode; if it is determined, by the receiving device, that the command received from the remote-control device is in a list of commands associated with the special mode then, instead of executing the received command, the receiving device using the command received from the remote-control device to find a stored special mode process to be performed that is mapped to the command received from the remote-control device; and using, by the receiving device, the stored special mode process as the command interpreted according to the special mode. 19. The method of claim 18 wherein the special mode process to be performed that is mapped to the command received from the remote-control device is a sequence of commands. 20. The method of claim 18 wherein the special mode process to be performed that is mapped to the command received from the remote-control device is to ignore the command received from the remote-control device. 21. 
A system for providing remote-control special modes, comprising: at least one controller of a receiving device; and a memory coupled to the at least one controller of the receiving device, the memory having computer-executable instructions stored thereon that, when executed by the at least one controller of the receiving device, cause the at least one controller of the receiving device to: be able to receive a command to have a remote-control device of the receiving device be in a special mode; in response to the received command to have a remote-control device of the receiving device be in a special mode, record on the receiving device an association of an identifier of the remote-control device with the special mode; and interpret commands received by the receiving device from the remote-control device based on the association of the identifier of the remote-control device with the special mode. 22. The system of claim 21, wherein the computer-executable instructions, when executed by the at least one controller of the receiving device, further cause the at least one controller of the receiving device to execute commands received by the receiving device from the remote-control device based on the association of the identifier of the remote-control device with the special mode. 23. The system of claim 21, wherein the computer-executable instructions, when executed by the at least one controller of the receiving device, further cause the at least one controller of the receiving device to determine to not execute particular restricted commands received by the receiving device from the remote-control device based on the association of the identifier of the remote-control device with the special mode. 24. 
A non-transitory computer-readable storage medium having computer executable instructions thereon that, when executed by at least one computer processor, cause the at least one computer processor to: cause a receiving device to assign a remote-control device of the receiving device to a special mode; interpret commands received by the receiving device from the remote-control device assigned to the special mode based on an association stored on the receiving device of an identifier of the remote-control device with the special mode; and respond to a command received by the receiving device from the remote-control device assigned to the special mode differently than the same command received by the receiving device from a different remote-control device that is not assigned by the receiving device to the special mode. 25. The non-transitory computer-readable storage medium of claim 24 wherein the computer executable instructions, when executed, cause the at least one computer processor to respond to a command received by the receiving device from the remote-control device assigned to the special mode differently than the same command received by the receiving device from a different remote-control device that is not assigned by the receiving device to the special mode by causing the at least one computer processor to ignore the command received by the receiving device from the remote-control device assigned to the special mode and to not ignore the same command received by the receiving device from the different remote-control device that is not assigned by the receiving device to the special mode. 26. 
The non-transitory computer-readable storage medium of claim 24 wherein the command received by the receiving device from the remote-control device assigned to the special mode is a command to change to a next channel and the computer executable instructions, when executed, cause the at least one computer processor to respond to the command received by the receiving device from the remote-control device assigned to the special mode differently than the same command received by the receiving device from a different remote-control device that is not assigned by the receiving device to the special mode by causing the at least one computer processor to: in response to the command received by the receiving device from the remote-control device assigned to the special mode to change to a next channel: take the receiving device out of a standby mode if the receiving device is in the standby mode; and change a current channel to a music service channel within a list of music service channels even if a current channel of the receiving device is not a music service channel and the next channel is not a music service channel; and in response to a same command to change to a next channel received by the receiving device from the different remote-control device that is not assigned by the receiving device to the special mode, change to the next channel. 27. 
The non-transitory computer-readable storage medium of claim 24 wherein the computer executable instructions thereon, when executed, further cause the at least one computer processor to: be able to receive a command from the remote-control device to assign the remote-control device of the receiving device to the special mode; and in response to the command received from the remote-control device to assign the remote-control device of the receiving device to the special mode, cause the receiving device to assign the remote-control device of the receiving device to the special mode by storing the association on the receiving device of the identifier of the remote-control device with the special mode.
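Two of the more concrete mechanisms in the claim set can be sketched together: the music-mode "next channel" behavior (claims 6-7: change within a music-service list, preferring a favorites list associated with the remote) and the command-to-stored-process mapping (claims 18-20: a received command is looked up in a stored map whose value may be a sequence of commands, or "ignore"). This is a hedged sketch; the channel numbers, command names, and map contents are assumptions, not taken from the patent.

```python
# Hypothetical music-service channel list for illustration.
MUSIC_CHANNELS = [801, 802, 803, 804]

def next_music_channel(current, favorites=None):
    """Claims 6-7 sketch: tune within a music-service list, preferring
    the remote's associated favorites list when one exists. The current
    channel need not itself be a music channel."""
    channels = list(favorites) if favorites else MUSIC_CHANNELS
    if current not in channels:
        return channels[0]
    return channels[(channels.index(current) + 1) % len(channels)]

# Claims 18-20 sketch: a stored map from received command to the
# special-mode process actually performed; a mapped process may be a
# sequence of commands, or empty (meaning: ignore the command).
SPECIAL_MODE_PROCESSES = {
    "channel_up": ["wake_from_standby", "tune_next_music_channel"],
    "open_menu": [],                       # ignored in this mode
}

def interpret(command):
    """Return the sequence of commands actually performed for *command*."""
    return SPECIAL_MODE_PROCESSES.get(command, [command])
```

A command not found in the map falls through to ordinary execution, matching the claim language that only listed commands are reinterpreted.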
2,600
10,128
10,128
15,149,013
2,643
In some implementations, a computing device can be configured to automatically turn off notifications when generating a notification would cause a disturbance or be unwanted by a user. The device can be configured with quiet hours during which notifications that would otherwise be generated by the computing device can be suppressed. In some implementations, quiet hours can be configured as a time period with a start time and an end time. In some implementations, quiet hours can be derived from application data. For example, calendar data, alarm clock data, map data, etc. can be used to determine when quiet hours should be enforced. In some implementations, the device can be configured with exceptions to quiet hour notification suppression. In some implementations, the user can identify contacts to which the quiet hours notification suppression should not be applied.
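The quiet-hours behavior the abstract describes — suppress a notification when the current time falls inside a configured window, unless the sender is on an exception list — can be sketched as a single predicate. This is an illustrative sketch under stated assumptions; the function name and parameters are invented for demonstration, and the abstract's other triggers (calendar, alarm, and map data) are not modeled.

```python
from datetime import time

def suppress_notification(now, start, end, sender=None, exceptions=()):
    """Return True if a notification arriving at *now* should be suppressed.

    Handles quiet windows that cross midnight (e.g. 22:00-07:00), and
    exempts contacts on the exception list per the abstract.
    """
    if sender in exceptions:
        return False                       # exempt contacts always get through
    if start <= end:
        in_quiet_hours = start <= now <= end
    else:                                  # window wraps past midnight
        in_quiet_hours = now >= start or now <= end
    return in_quiet_hours
```

The wrap-around branch is the easy part to get wrong: a 22:00-07:00 window cannot be tested with a single range comparison.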
1. A method including: at an electronic device with a display and one or more input devices: while the electronic device is not in a restricted-notification mode, wherein the electronic device is configured to generate a notification in response to detecting a notification event, the notification including sound, light, movement or a combination thereof, and the restricted-notification mode restricts the generation of notifications at the electronic device in response to detected notification events: displaying, on the display, a graphical user interface that includes a first region and a second region, different from the first region, the first region including one or more visual indications of status of the electronic device; while displaying the graphical user interface, determining that one or more mode-transition criteria have been satisfied; and in response to determining that the one or more mode-transition criteria have been satisfied: transitioning the electronic device to the restricted-notification mode; and displaying, in the first region of the graphical user interface, a restricted-notification visual indication that indicates that the electronic device is in the restricted-notification mode, wherein the restricted-notification visual indication was not displayed in the first region of the graphical user interface when the electronic device was not in the restricted-notification mode. 2. The method of claim 1, wherein, while the electronic device is in the restricted-notification mode, the restricted-notification visual indication and the one or more visual indications of the status of the electronic device are displayed in the first region of the graphical user interface. 3. 
The method of claim 2, wherein, while the electronic device is in the restricted-notification mode, the restricted-notification visual indication and the one or more visual indications of the status of the electronic device are displayed concurrently in the first region of the graphical user interface. 4. The method of claim 1, wherein the one or more visual indications of the status of the electronic device include one or more of an indication of signal strength at the electronic device, an indication of a telecommunications carrier at the electronic device, a current time at the electronic device, and an indication of battery status at the electronic device. 5. The method of claim 1, wherein the one or more mode-transition criteria include a criterion that is satisfied when a user of the electronic device manually enables the restricted-notification mode on the electronic device. 6. The method of claim 1, wherein the one or more mode-transition criteria include a criterion that is satisfied when the electronic device determines, without user input, that the electronic device should transition to the restricted-notification mode. 7. The method of claim 6, wherein the determining by the electronic device, without user input, that the electronic device should transition to the restricted-notification mode includes determining that a current time at the electronic device is within a period of time associated with the restricted-notification mode. 8. The method of claim 1, wherein the restricted-notification visual indication includes one or more of text, a symbol, and an icon. 9. 
The method of claim 1, further including: while the electronic device is in the restricted-notification mode, determining that the electronic device should transition out of the restricted-notification mode; and in response to determining that the electronic device should transition out of the restricted-notification mode: transitioning the electronic device out of the restricted-notification mode; and ceasing display of the restricted-notification visual indication in the first region of the graphical user interface. 10. The method of claim 9, wherein determining that the electronic device should transition out of the restricted-notification mode includes determining that a user of the electronic device has started using the electronic device. 11. The method of claim 1, wherein the one or more mode-transition criteria include a criterion that is satisfied when a user of the electronic device manually enables a restricted-notification toggle on the electronic device, the restricted-notification toggle corresponding to a plurality of restricted-notification settings at the electronic device. 12. The method of claim 1, wherein the one or more mode-transition criteria include a criterion that is satisfied when a user of the electronic device provides a voice command to the electronic device requesting that the restricted-notification mode be enabled on the electronic device. 13. The method of claim 12, wherein the voice command includes a specified time period for which the restricted-notification mode should be enabled on the electronic device. 14. 
A non-transitory computer-readable medium including one or more sequences of instructions which, when executed by one or more processors of an electronic device having a display and one or more input devices, cause the one or more processors to perform a method including: while the electronic device is not in a restricted-notification mode, wherein the electronic device is configured to generate a notification in response to detecting a notification event, the notification including sound, light, movement or a combination thereof, and the restricted-notification mode restricts the generation of notifications at the electronic device in response to detected notification events: displaying, on the display, a graphical user interface that includes a first region and a second region, different from the first region, the first region including one or more visual indications of status of the electronic device; while displaying the graphical user interface, determining that one or more mode-transition criteria have been satisfied; and in response to determining that the one or more mode-transition criteria have been satisfied: transitioning the electronic device to the restricted-notification mode; and displaying, in the first region of the graphical user interface, a restricted-notification visual indication that indicates that the electronic device is in the restricted-notification mode, wherein the restricted-notification visual indication was not displayed in the first region of the graphical user interface when the electronic device was not in the restricted-notification mode. 15. 
An electronic device including: a display; one or more input devices; one or more processors; and a non-transitory computer-readable medium including one or more sequences of instructions which, when executed by the one or more processors, cause the one or more processors to perform a method including: while the electronic device is not in a restricted-notification mode, wherein the electronic device is configured to generate a notification in response to detecting a notification event, the notification including sound, light, movement or a combination thereof, and the restricted-notification mode restricts the generation of notifications at the electronic device in response to detected notification events: displaying, on the display, a graphical user interface that includes a first region and a second region, different from the first region, the first region including one or more visual indications of status of the electronic device; while displaying the graphical user interface, determining that one or more mode-transition criteria have been satisfied; and in response to determining that the one or more mode-transition criteria have been satisfied: transitioning the electronic device to the restricted-notification mode; and displaying, in the first region of the graphical user interface, a restricted-notification visual indication that indicates that the electronic device is in the restricted-notification mode, wherein the restricted-notification visual indication was not displayed in the first region of the graphical user interface when the electronic device was not in the restricted-notification mode.
In some implementations, a computing device can be configured to automatically turn off notifications when generating a notification would cause a disturbance or be unwanted by a user. The device can be configured with quiet hours during which notifications that would otherwise be generated by the computing device can be suppressed. In some implementations, quiet hours can be configured as a time period with a start time and an end time. In some implementations, quiet hours can be derived from application data. For example, calendar data, alarm clock data, map data, etc. can be used to determine when quiet hours should be enforced. In some implementations, the device can be configured with exceptions to quiet hour notification suppression. In some implementations, the user can identify contacts to which the quiet hours notification suppression should not be applied. 1. A method including: at an electronic device with a display and one or more input devices: while the electronic device is not in a restricted-notification mode, wherein the electronic device is configured to generate a notification in response to detecting a notification event, the notification including sound, light, movement or a combination thereof, and the restricted-notification mode restricts the generation of notifications at the electronic device in response to detected notification events: displaying, on the display, a graphical user interface that includes a first region and a second region, different from the first region, the first region including one or more visual indications of status of the electronic device; while displaying the graphical user interface, determining that one or more mode-transition criteria have been satisfied; and in response to determining that the one or more mode-transition criteria have been satisfied: transitioning the electronic device to the restricted-notification mode; and displaying, in the first region of the graphical user interface, a 
restricted-notification visual indication that indicates that the electronic device is in the restricted-notification mode, wherein the restricted-notification visual indication was not displayed in the first region of the graphical user interface when the electronic device was not in the restricted-notification mode. 2. The method of claim 1, wherein, while the electronic device is in the restricted-notification mode, the restricted-notification visual indication and the one or more visual indications of the status of the electronic device are displayed in the first region of the graphical user interface. 3. The method of claim 2, wherein, while the electronic device is in the restricted-notification mode, the restricted-notification visual indication and the one or more visual indications of the status of the electronic device are displayed concurrently in the first region of the graphical user interface. 4. The method of claim 1, wherein the one or more visual indications of the status of the electronic device include one or more of an indication of signal strength at the electronic device, an indication of a telecommunications carrier at the electronic device, a current time at the electronic device, and an indication of battery status at the electronic device. 5. The method of claim 1, wherein the one or more mode-transition criteria include a criterion that is satisfied when a user of the electronic device manually enables the restricted-notification mode on the electronic device. 6. The method of claim 1, wherein the one or more mode-transition criteria include a criterion that is satisfied when the electronic device determines, without user input, that the electronic device should transition to the restricted-notification mode. 7. 
The method of claim 6, wherein the determining by the electronic device, without user input, that the electronic device should transition to the restricted-notification mode includes determining that a current time at the electronic device is within a period of time associated with the restricted-notification mode. 8. The method of claim 1, wherein the restricted-notification visual indication includes one or more of text, a symbol, and an icon. 9. The method of claim 1, further including: while the electronic device is in the restricted-notification mode, determining that the electronic device should transition out of the restricted-notification mode; and in response to determining that the electronic device should transition out of the restricted-notification mode: transitioning the electronic device out of the restricted-notification mode; and ceasing display of the restricted-notification visual indication in the first region of the graphical user interface. 10. The method of claim 9, wherein determining that the electronic device should transition out of the restricted-notification mode includes determining that a user of the electronic device has started using the electronic device. 11. The method of claim 1, wherein the one or more mode-transition criteria include a criterion that is satisfied when a user of the electronic device manually enables a restricted-notification toggle on the electronic device, the restricted-notification toggle corresponding to a plurality of restricted-notification settings at the electronic device. 12. The method of claim 1, wherein the one or more mode-transition criteria include a criterion that is satisfied when a user of the electronic device provides a voice command to the electronic device requesting that the restricted-notification mode be enabled on the electronic device. 13. 
The method of claim 12, wherein the voice command includes a specified time period for which the restricted-notification mode should be enabled on the electronic device. 14. A non-transitory computer-readable medium including one or more sequences of instructions which, when executed by one or more processors of an electronic device having a display and one or more input devices, cause the one or more processors to perform a method including: while the electronic device is not in a restricted-notification mode, wherein the electronic device is configured to generate a notification in response to detecting a notification event, the notification including sound, light, movement or a combination thereof, and the restricted-notification mode restricts the generation of notifications at the electronic device in response to detected notification events: displaying, on the display, a graphical user interface that includes a first region and a second region, different from the first region, the first region including one or more visual indications of status of the electronic device; while displaying the graphical user interface, determining that one or more mode-transition criteria have been satisfied; and in response to determining that the one or more mode-transition criteria have been satisfied: transitioning the electronic device to the restricted-notification mode; and displaying, in the first region of the graphical user interface, a restricted-notification visual indication that indicates that the electronic device is in the restricted-notification mode, wherein the restricted-notification visual indication was not displayed in the first region of the graphical user interface when the electronic device was not in the restricted-notification mode. 15. 
An electronic device including: a display; one or more input devices; one or more processors; and a non-transitory computer-readable medium including one or more sequences of instructions which, when executed by the one or more processors, cause the one or more processors to perform a method including: while the electronic device is not in a restricted-notification mode, wherein the electronic device is configured to generate a notification in response to detecting a notification event, the notification including sound, light, movement or a combination thereof, and the restricted-notification mode restricts the generation of notifications at the electronic device in response to detected notification events: displaying, on the display, a graphical user interface that includes a first region and a second region, different from the first region, the first region including one or more visual indications of status of the electronic device; while displaying the graphical user interface, determining that one or more mode-transition criteria have been satisfied; and in response to determining that the one or more mode-transition criteria have been satisfied: transitioning the electronic device to the restricted-notification mode; and displaying, in the first region of the graphical user interface, a restricted-notification visual indication that indicates that the electronic device is in the restricted-notification mode, wherein the restricted-notification visual indication was not displayed in the first region of the graphical user interface when the electronic device was not in the restricted-notification mode.
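The quiet-hours behavior described in this record's abstract (a start/end time window, plus per-contact exceptions that are never suppressed) can be sketched roughly as follows. This is an illustrative Python sketch, not the patented implementation; all names and the midnight-spanning window handling are assumptions for the example.

```python
from datetime import datetime, time

def should_suppress(now: datetime, start: time, end: time,
                    sender: str, exceptions: set[str]) -> bool:
    # Hypothetical quiet-hours rule: suppress a notification when the
    # current time falls inside the configured window, unless the
    # sender is on the user's exception list.
    if sender in exceptions:          # exception contacts always notify
        return False
    t = now.time()
    if start <= end:                  # window within a single day
        return start <= t < end
    # window spans midnight, e.g. 22:00-07:00
    return t >= start or t < end
```

A window derived from calendar or alarm-clock data (as the abstract suggests) would simply supply the `start`/`end` values here.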
2,600
10,129
10,129
14,611,735
2,612
Embodiments of the present invention provide for improved timing control in 2-D image processing to maintain a constant rate of fetches and pixel outputs even when the processing operations transition to a new line or frame of pixels. A one-to-one relationship between incoming pixel rate and outgoing pixel rate is maintained without additional clock cycles or memory bandwidth as an improved timing control according to the present invention takes advantage of idle memory bandwidth by pre-fetching a new column of pixel data in a first pixel block of a next line or frame while a new column of an edge pixel block on a current line is duplicated or zeroed out. As the edge pixel block(s) on the current line are processed, the data in the first pixel block of the next line or frame become ready for computation without extra clock cycles or extra memory bandwidth.
1. A method used in scanning an image in raster mode, the method comprising: filling a vector in a pixel block with predetermined values, in response to a determination that the vector of the pixel block is located beyond an edge of the image; performing an image processing on the pixel block with the predetermined values; and fetching a vector of a next pixel block of the image, in response to the determination that the vector of the pixel block is located beyond the edge of the image. 2. The method of claim 1, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel on an adjacent line to the line on an opposing edge of the edge of the image. 3. The method of claim 1, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel two lines away from the line. 4. The method of claim 1, wherein the predetermined values are duplicated from other values in the pixel block. 5. The method of claim 1, wherein the predetermined values are zeroes. 6. The method of claim 1, wherein the image processing includes a convolution operation applied to the pixel block or a correlation operation applied to the pixel block. 7. The method of claim 1, wherein the fetching is performed in a same clock unit as the filling is performed, thereby maintaining a constant rate of pixel outputs. 8. 
A non-transitory, computer-readable medium encoded with instructions that, when executed by a processor, cause the processor to perform a method that scans an image in raster mode, the method comprising: filling a vector in a pixel block with predetermined values, in response to a determination that the vector of the pixel block is located beyond an edge of the image; performing an image processing on the pixel block with the predetermined values; and fetching a vector of a next pixel block of the image, in response to the determination that the vector of the pixel block is located beyond the edge of the image. 9. The medium of claim 8, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel on an adjacent line to the line on an opposing edge of the edge of the image. 10. The medium of claim 8, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel two lines away from the line. 11. The medium of claim 8, wherein the predetermined values are duplicated from other values in the pixel block. 12. The medium of claim 8, wherein the predetermined values are zeroes. 13. The medium of claim 8, wherein the image processing includes a convolution operation applied to the pixel block or a correlation operation applied to the pixel block. 14. The medium of claim 8, wherein the fetching is performed in a same clock unit as the filling is performed, thereby maintaining a constant rate of pixel outputs. 15. 
An apparatus that scans an image in raster mode, the apparatus comprising: a processor configured to fill a vector in a pixel block with predetermined values, in response to a determination that the vector of the pixel block is located beyond an edge of the image, wherein the processor performs an image processing on the pixel block with the predetermined values, and the processor is further configured to fetch a vector of a next pixel block of the image, in response to the determination that the vector of the pixel block is located beyond the edge of the image. 16. The apparatus of claim 15, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel on an adjacent line to the line on an opposing edge of the edge of the image. 17. The apparatus of claim 15, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel two lines away from the line. 18. The apparatus of claim 15, wherein the predetermined values are duplicated from other values in the pixel block. 19. The apparatus of claim 15, wherein the image processing includes a convolution operation applied to the pixel block or a correlation operation applied to the pixel block. 20. The apparatus of claim 15, wherein the processor is configured to fetch the vector of the next pixel block in a same clock unit as the processor fills the vector in the pixel block, thereby maintaining a constant rate of pixel outputs.
Embodiments of the present invention provide for improved timing control in 2-D image processing to maintain a constant rate of fetches and pixel outputs even when the processing operations transition to a new line or frame of pixels. A one-to-one relationship between incoming pixel rate and outgoing pixel rate is maintained without additional clock cycles or memory bandwidth as an improved timing control according to the present invention takes advantage of idle memory bandwidth by pre-fetching a new column of pixel data in a first pixel block of a next line or frame while a new column of an edge pixel block on a current line is duplicated or zeroed out. As the edge pixel block(s) on the current line are processed, the data in the first pixel block of the next line or frame become ready for computation without extra clock cycles or extra memory bandwidth.1. A method used in scanning an image in raster mode, the method comprising: filling a vector in a pixel block with predetermined values, in response to a determination that the vector of the pixel block is located beyond an edge of the image; performing an image processing on the pixel block with the predetermined values; and fetching a vector of a next pixel block of the image, in response to the determination that the vector of the pixel block is located beyond the edge of the image. 2. The method of claim 1, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel on an adjacent line to the line on an opposing edge of the edge of the image. 3. The method of claim 1, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel two lines away from the line. 4. The method of claim 1, wherein the predetermined values are duplicated from other values in the pixel block. 5. The method of claim 1, wherein the predetermined values are zeroes. 6. 
The method of claim 1, wherein the image processing includes a convolution operation applied to the pixel block or a correlation operation applied to the pixel block. 7. The method of claim 1, wherein the fetching is performed in a same clock unit as the filling is performed, thereby maintaining a constant rate of pixel outputs. 8. A non-transitory, computer-readable medium encoded with instructions that, when executed by a processor, cause the processor to perform a method that scans an image in raster mode, the method comprising: filling a vector in a pixel block with predetermined values, in response to a determination that the vector of the pixel block is located beyond an edge of the image; performing an image processing on the pixel block with the predetermined values; and fetching a vector of a next pixel block of the image, in response to the determination that the vector of the pixel block is located beyond the edge of the image. 9. The medium of claim 8, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel on an adjacent line to the line on an opposing edge of the edge of the image. 10. The medium of claim 8, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel two lines away from the line. 11. The medium of claim 8, wherein the predetermined values are duplicated from other values in the pixel block. 12. The medium of claim 8, wherein the predetermined values are zeroes. 13. The medium of claim 8, wherein the image processing includes a convolution operation applied to the pixel block or a correlation operation applied to the pixel block. 14. The medium of claim 8, wherein the fetching is performed in a same clock unit as the filling is performed, thereby maintaining a constant rate of pixel outputs. 15. 
An apparatus that scans an image in raster mode, the apparatus comprising: a processor configured to fill a vector in a pixel block with predetermined values, in response to a determination that the vector of the pixel block is located beyond an edge of the image, wherein the processor performs an image processing on the pixel block with the predetermined values, and the processor is further configured to fetch a vector of a next pixel block of the image, in response to the determination that the vector of the pixel block is located beyond the edge of the image. 16. The apparatus of claim 15, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel on an adjacent line to the line on an opposing edge of the edge of the image. 17. The apparatus of claim 15, wherein the pixel block is centered around a pixel on a line, and the next pixel block is centered around a pixel two lines away from the line. 18. The apparatus of claim 15, wherein the predetermined values are duplicated from other values in the pixel block. 19. The apparatus of claim 15, wherein the image processing includes a convolution operation applied to the pixel block or a correlation operation applied to the pixel block. 20. The apparatus of claim 15, wherein the processor is configured to fetch the vector of the next pixel block in a same clock unit as the processor fills the vector in the pixel block, thereby maintaining a constant rate of pixel outputs.
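Claims 4 and 5 of this record describe two fill policies for block positions that fall outside the image: duplicating nearby in-image values or zeroing them. A minimal software analogue of that edge-fill step (not the patented hardware pipeline, which overlaps the fill with a prefetch of the next block's vector in the same clock unit) might look like this; the function and parameter names are illustrative.

```python
def get_block(img, row, col, k=3, mode="zero"):
    # Extract a k-by-k pixel block centered at (row, col). Positions
    # beyond the image edge are filled with predetermined values:
    # zeros ("zero" mode) or the nearest in-image pixel ("duplicate").
    h, w = len(img), len(img[0])
    r = k // 2
    block = [[0] * k for _ in range(k)]
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            y, x = row + dy, col + dx
            if 0 <= y < h and 0 <= x < w:
                block[dy + r][dx + r] = img[y][x]
            elif mode == "duplicate":
                # clamp coordinates to the nearest pixel inside the image
                yc = min(max(y, 0), h - 1)
                xc = min(max(x, 0), w - 1)
                block[dy + r][dx + r] = img[yc][xc]
    return block
```

A convolution or correlation kernel (claim 6) would then be applied to the returned block regardless of which fill policy produced it.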
2,600
10,130
10,130
15,155,278
2,648
A method and system for improving the safety of drivers of vehicles while either sending or receiving written communications to third parties includes a steering wheel mounted keyboard such that the opposing thumbs of a driver can send and receive text messages, e-mails, etc., with the display of such text messages being projected via a HUD display onto the windshield of the vehicle. The driver's hands are thus always on the steering wheel in approximately the preferred 10 o'clock and 2 o'clock positions and the driver's view is directly on the road through the windshield directly in front of the driver while any text message is being composed, thus reducing the many accidents that occur due to drivers being distracted when viewing their Smartphone messages and taking their eyes off the road while doing so. Avoidance of gesture-based controls or oral/voice controls is also preferred as such gestures require a driver's hands to leave the steering wheel and thus jeopardize the safety of the driver.
1. A communications device for use with a vehicle to facilitate safe texting while driving, comprising: a HUD projection system adapted for association with a vehicle to project text onto the windshield interior of a vehicle directly in front of a driver of the vehicle; a first hand unit attached to said steering wheel adjacent the left hand of the user, said first hand unit comprising a first alpha-numeric keyboard suitable for texting with the left thumb; a second hand unit attached to said steering wheel adjacent the right hand of said user, said second hand unit comprising a second alpha-numeric keyboard suitable for texting with the right thumb wherein said first alpha-numeric keyboard includes some but not all of the letters of the alphabet and said second alpha-numeric keyboard includes at least those letters of the alphabet not on said first alpha-numeric keyboard, wherein said first keyboard and said second keyboard form a complete qwerty keyboard suitable for texting; a hand detecting sensor associated with the first hand unit and the second hand unit, said hand detecting sensor adapted to preclude text communications from being entered on either of said first hand unit or said second hand unit if a driver's hands are not both contacting a steering wheel to which said communications device is attached; wherein when a driver's left and right hands are in contact with the steering wheel in approximately 10 o'clock and 2 o'clock positions about the steering wheel, the driver is able to input text by using his/her respective thumbs, with the image of such texts being projected onto the HUD system such that text can be read by the driver while said driver's view is maintained through the front windshield of the vehicle. 2. The communications device of claim 1 wherein one of said first hand unit and second hand unit includes an alert that comprises a vibratory motor. 3. 
The communications device of claim 1 wherein said first hand unit and said second hand unit are reversibly associated with a steering wheel of a vehicle. 4. The communication device of claim 1 wherein the communications device is devoid of a screen that is connected to the steering wheel. 5. The communications device of claim 1 wherein said screen, first keyboard and second keyboard are in wireless communication with a cell phone in said vehicle. 6. The device of claim 1, wherein said device further comprises a portable heads-up display device comprising a projector engine disposed in a housing; a curved screen coupled to the housing so that light from the projector engine is incident on an inner surface of said curved screen, said housing adapted to be tilted upwardly from the direction of incident light projected from the projector engine. 7. The device of claim 6, further comprising a combiner coupled to the housing that is adapted to be tilted upwardly from the direction of reflected light from the screen, said combiner having a concave surface that has a dichroic coating to reflect light of selected wavelengths. 8. The device of claim 6, wherein the hand detecting sensor is coupled to the projector engine. 9. The device of claim 6, wherein the projector engine projects images on the screen that are pre-distorted to compensate for distortion in the image appearing at the combiner. 10. The device of claim 6, further comprising a means for sensing ambient light conditions at the heads-up display device, said means for sensing adapted to adjust the brightness of projected light responsive to ambient light conditions. 11. 
A communications device for use with a vehicle to facilitate safe texting while driving, comprising a housing; a projector engine disposed in the housing; a screen coupled to the housing so that light from the projector engine is incident on a surface of the screen; a semi-transparent combiner coupled to the housing to receive light reflected from the screen; a first hand unit attached to said steering wheel adjacent the left hand of the user, said first hand unit comprising a first alpha-numeric keyboard suitable for texting with the left thumb; a second hand unit attached to said steering wheel adjacent the right hand of said user, said second hand unit comprising a second alpha-numeric keyboard suitable for texting with the right thumb wherein said first alpha-numeric keyboard includes some but not all of the letters of the alphabet and said second alpha-numeric keyboard includes at least those letters of the alphabet not on said first alpha-numeric keyboard, wherein said first keyboard and said second keyboard form a complete qwerty keyboard suitable for texting; a hand detecting sensor associated with the first hand unit and the second hand unit, said hand detecting sensor adapted to preclude text communications from being entered on either of said first hand unit or said second hand unit if a driver's hands are not both contacting a steering wheel to which said communications device is attached; wherein when a driver's left and right hands are in contact with the steering wheel in approximately 10 o'clock and 2 o'clock positions about the steering wheel, the driver is able to input text by using his/her respective thumbs, with the image of such texts being projected onto the screen such that text can be read by the driver while said driver's view is maintained through the front windshield of the vehicle, and a means for sensing ambient light conditions that is adapted to adjust the brightness of projected light responsive to ambient light conditions. 12. 
The device of claim 11, further comprising a forward-facing camera for capturing images of a road upon which the vehicle is traveling. 13. A communications device for use with a vehicle to facilitate safe texting while driving, comprising a housing; a projector engine disposed in the housing; a screen coupled to the housing so that light from the projector engine is incident on a surface of the screen; a semi-transparent combiner coupled to the housing to receive light reflected from the screen; a first hand unit attached to said steering wheel adjacent the left hand of the user, said first hand unit comprising a first alpha-numeric keyboard suitable for texting with the left thumb; a second hand unit attached to said steering wheel adjacent the right hand of said user, said second hand unit comprising a second alpha-numeric keyboard suitable for texting with the right thumb wherein said first alpha-numeric keyboard includes some but not all of the letters of the alphabet and said second alpha-numeric keyboard includes at least those letters of the alphabet not on said first alpha-numeric keyboard, wherein said first keyboard and said second keyboard form a complete qwerty keyboard suitable for texting; a hand detecting sensor associated with the first hand unit and the second hand unit, said hand detecting sensor adapted to preclude text communications from being entered on either of said first hand unit or said second hand unit if a driver's hands are not both contacting a steering wheel to which said communications device is attached; wherein when a driver's left and right hands are in contact with the steering wheel in approximately 10 o'clock and 2 o'clock positions about the steering wheel, the driver is able to input text by using his/her respective thumbs, with the image of such texts being projected onto the screen such that text can be read by the driver while said driver's view is maintained through the front windshield of the vehicle; said first 
keyboard and second keyboard adapted to be in wireless communication with a smart phone of the driver of said vehicle. 14. The device of claim 13, wherein said device further comprises a portable heads-up display device comprising a projector engine disposed in a housing; a curved screen coupled to the housing so that light from the projector engine is incident on an inner surface of said curved screen, said housing adapted to be tilted upwardly from the direction of incident light projected from the projector engine. 15. The device of claim 13, further comprising a combiner coupled to the housing that is adapted to be tilted upwardly from the direction of reflected light from the screen, said combiner having a concave surface that has a dichroic coating to reflect light of selected wavelengths. 16. The device of claim 13, wherein the hand detecting sensor is coupled to the projector engine. 17. The device of claim 13, wherein the projector engine projects images on the screen that are pre-distorted to compensate for distortion in the image appearing at the combiner. 18. The device of claim 13, further comprising a means for sensing ambient light conditions at the heads-up display device, said means for sensing adapted to adjust the brightness of projected light responsive to ambient light conditions. 19. The communications device of claim 13, wherein said first hand unit and said second hand unit are reversibly associated with a steering wheel of a vehicle. 20. The communication device of claim 13, wherein the communications device is devoid of a screen that is connected to the steering wheel.
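The gating behavior recited in claim 1 (keystrokes accepted only while both hands contact the wheel, with the alphabet split across two thumb keyboards that together form a complete QWERTY layout) can be sketched roughly as below. The particular left/right key split and all class and method names are hypothetical; the claims require only that the two halves be complementary.

```python
class SplitKeyboard:
    # Illustrative split of the alphabet across the two hand units;
    # together the halves cover a complete QWERTY letter set.
    LEFT_KEYS = set("qwertasdfgzxcvb")
    RIGHT_KEYS = set("yuiophjklnm")

    def __init__(self):
        self.left_hand_on = False   # hand-detecting sensor, left unit
        self.right_hand_on = False  # hand-detecting sensor, right unit
        self.buffer = []            # text shown on the HUD projection

    def press(self, key: str) -> bool:
        # Text entry is precluded unless BOTH hands contact the wheel.
        if not (self.left_hand_on and self.right_hand_on):
            return False
        if key in self.LEFT_KEYS | self.RIGHT_KEYS:
            self.buffer.append(key)
            return True
        return False
```

In this sketch the accepted keystrokes accumulate in `buffer`, standing in for the text the HUD system would project onto the windshield.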
A method and system for improving the safety of drivers of vehicles while either sending or receiving written communications to third parties includes a steering wheel mounted keyboard such that the opposing thumbs of a driver can send and receive text messages, e-mails, etc., with the display of such text messages being projected via a HUD display onto the windshield of the vehicle. The driver's hands are thus always on the steering wheel in approximately the preferred 10 o'clock and 2 o'clock positions and the driver's view is directly on the road through the windshield directly in front of the driver while any text message is being composed, thus reducing the many accidents that occur due to drivers being distracted when viewing their smartphone messages and taking their eyes off the road while doing so. Avoidance of gesture based controls or oral/voice controls is also preferred as such gestures require a driver's hands to leave the steering wheel and thus jeopardize the safety of the driver.1. 
A communications device for use with a vehicle to facilitate safe texting while driving, comprising: a HUD projection system adapted for association with a vehicle to project text onto the windshield interior of a vehicle directly in front of a driver of the vehicle; a first hand unit attached to said steering wheel adjacent the left hand of the user, said first hand unit comprising a first alpha-numeric keyboard suitable for texting with the left thumb; a second hand unit attached to said steering wheel adjacent the right hand of said user, said second hand unit comprising a second alpha-numeric keyboard suitable for texting with the right thumb wherein said first alpha-numeric keyboard includes some but not all of the letters of the alphabet and said second alpha-numeric keyboard includes at least those letters of the alphabet not on said first alpha-numeric keyboard, wherein said first keyboard and said second keyboard form a complete qwerty keyboard suitable for texting; a hand detecting sensor associated with the first hand unit and the second hand unit, said hand detecting sensor adapted to preclude text communications from being entered on either of said first hand unit or said second hand unit if a driver's hands are not both contacting a steering wheel to which said communications device is attached; wherein when a driver's left and right hands are in contact with the steering wheel in approximately 10 o'clock and 2 o'clock positions about the steering wheel, the driver is able to input text by using his/her respective thumbs, with the image of such texts being projected onto the HUD system such that text can be read by the driver while said driver's view is maintained through the front windshield of the vehicle. 2. The communications device of claim 1 wherein one of said first hand unit and second hand unit includes an alert that comprises a vibratory motor. 3. 
The communications device of claim 1 wherein said first hand unit and said second hand unit are reversibly associated with a steering wheel of a vehicle. 4. The communications device of claim 1 wherein the communications device is devoid of a screen that is connected to the steering wheel. 5. The communications device of claim 1 wherein said screen, first keyboard and second keyboard are in wireless communication with a cell phone in said vehicle. 6. The device of claim 1, wherein said device further comprises a portable heads-up display device comprising a projector engine disposed in a housing; a curved screen coupled to the housing so that light from the projector engine is incident on an inner surface of said curved screen, said housing adapted to be tilted upwardly from the direction of incident light projected from the projector engine. 7. The device of claim 6, further comprising a combiner coupled to the housing that is adapted to be tilted upwardly from the direction of reflected light from the screen, said combiner having a concave surface that has a dichroic coating to reflect light of selected wavelengths. 8. The device of claim 6, wherein the hand detecting sensor is coupled to the projector engine. 9. The device of claim 6, wherein the projector engine projects images on the screen that are pre-distorted to compensate for distortion in the image appearing at the combiner. 10. The device of claim 6, further comprising a means for sensing ambient light conditions at the heads-up display device, said means for sensing adapted to adjust the brightness of projected light responsive to ambient light conditions. 11. 
A communications device for use with a vehicle to facilitate safe texting while driving, comprising a housing; a projector engine disposed in the housing; a screen coupled to the housing so that light from the projector engine is incident on a surface of the screen; a semi-transparent combiner coupled to the housing to receive light reflected from the screen; a first hand unit attached to said steering wheel adjacent the left hand of the user, said first hand unit comprising a first alpha-numeric keyboard suitable for texting with the left thumb; a second hand unit attached to said steering wheel adjacent the right hand of said user, said second hand unit comprising a second alpha-numeric keyboard suitable for texting with the right thumb wherein said first alpha-numeric keyboard includes some but not all of the letters of the alphabet and said second alpha-numeric keyboard includes at least those letters of the alphabet not on said first alpha-numeric keyboard, wherein said first keyboard and said second keyboard form a complete qwerty keyboard suitable for texting; a hand detecting sensor associated with the first hand unit and the second hand unit, said hand detecting sensor adapted to preclude text communications from being entered on either of said first hand unit or said second hand unit if a driver's hands are not both contacting a steering wheel to which said communications device is attached; wherein when a driver's left and right hands are in contact with the steering wheel in approximately 10 o'clock and 2 o'clock positions about the steering wheel, the driver is able to input text by using his/her respective thumbs, with the image of such texts being projected onto the screen such that text can be read by the driver while said driver's view is maintained through the front windshield of the vehicle, and a means for sensing ambient light conditions that is adapted to adjust the brightness of projected light responsive to ambient light conditions. 12. 
The device of claim 11, further comprising a forward-facing camera for capturing images of a road upon which the vehicle is traveling. 13. A communications device for use with a vehicle to facilitate safe texting while driving, comprising a housing; a projector engine disposed in the housing; a screen coupled to the housing so that light from the projector engine is incident on a surface of the screen; a semi-transparent combiner coupled to the housing to receive light reflected from the screen; a first hand unit attached to said steering wheel adjacent the left hand of the user, said first hand unit comprising a first alpha-numeric keyboard suitable for texting with the left thumb; a second hand unit attached to said steering wheel adjacent the right hand of said user, said second hand unit comprising a second alpha-numeric keyboard suitable for texting with the right thumb wherein said first alpha-numeric keyboard includes some but not all of the letters of the alphabet and said second alpha-numeric keyboard includes at least those letters of the alphabet not on said first alpha-numeric keyboard, wherein said first keyboard and said second keyboard form a complete qwerty keyboard suitable for texting; a hand detecting sensor associated with the first hand unit and the second hand unit, said hand detecting sensor adapted to preclude text communications from being entered on either of said first hand unit or said second hand unit if a driver's hands are not both contacting a steering wheel to which said communications device is attached; wherein when a driver's left and right hands are in contact with the steering wheel in approximately 10 o'clock and 2 o'clock positions about the steering wheel, the driver is able to input text by using his/her respective thumbs, with the image of such texts being projected onto the screen such that text can be read by the driver while said driver's view is maintained through the front windshield of the vehicle; said first 
keyboard and second keyboard adapted to be in wireless communication with a smart phone of the driver of said vehicle. 14. The device of claim 13, wherein said device further comprises a portable heads-up display device comprising a projector engine disposed in a housing; a curved screen coupled to the housing so that light from the projector engine is incident on an inner surface of said curved screen, said housing adapted to be tilted upwardly from the direction of incident light projected from the projector engine. 15. The device of claim 13, further comprising a combiner coupled to the housing that is adapted to be tilted upwardly from the direction of reflected light from the screen, said combiner having a concave surface that has a dichroic coating to reflect light of selected wavelengths. 16. The device of claim 6, wherein the hand detecting sensor is coupled to the projector engine. 17. The device of claim 13, wherein the projector engine projects images on the screen that are pre-distorted to compensate for distortion in the image appearing at the combiner. 18. The device of claim 13, further comprising a means for sensing ambient light conditions at the heads-up display device, said means for sensing adapted to adjust the brightness of projected light responsive to ambient light conditions. 19. The communications device of claim 13, wherein said first hand unit and said second hand unit are reversibly associated with a steering wheel of a vehicle. 20. The communications device of claim 13, wherein the communications device is devoid of a screen that is connected to the steering wheel.
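Claims 10, 11, and 18 above recite a means for sensing ambient light that adjusts the brightness of projected light. A hedged sketch of one such adjustment, where every numeric parameter (nits range, lux full scale) is an illustrative assumption rather than a value from the claims:

```python
def projector_brightness(ambient_lux,
                         min_nits=40.0,
                         max_nits=400.0,
                         full_scale_lux=10000.0):
    """Scale projector output with ambient light, clamped to the engine's range.

    All numeric parameters are illustrative assumptions: brighter surroundings
    push the HUD toward max_nits so text stays readable against the windshield.
    """
    fraction = min(max(ambient_lux / full_scale_lux, 0.0), 1.0)
    return min_nits + fraction * (max_nits - min_nits)
```

A linear ramp with clamping is the simplest choice; a real HUD might use a nonlinear curve matched to perceived brightness, which the claims do not specify.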
2,600
10,131
10,131
15,141,388
2,696
A high dynamic range solid state image sensor and camera system are disclosed. In one aspect, the solid state image sensor includes a first wafer including an array of pixels, each of the pixels comprising a photosensor, and a second wafer including an array of readout circuits. Each of the readout circuits is configured to output a readout signal indicative of an amount of light received by a corresponding one of the pixels and each of the readout circuits includes a counter. Each of the counters is configured to increment in response to the corresponding photosensor receiving an amount of light that is greater than a photosensor threshold. Each of the readout circuits is configured to generate the readout signal based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel.
1. A solid state image sensor, comprising: a first wafer comprising an array of pixels, each of the pixels comprising a photosensor; and a second wafer comprising an array of readout circuits, each of the readout circuits being configured to output a readout signal indicative of an amount of light received by a corresponding one of the pixels, each of the readout circuits comprising a counter, wherein each of the counters is configured to increment in response to the corresponding photosensor receiving an amount of light that is greater than a photosensor threshold, and wherein each of the readout circuits is configured to generate the readout signal based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel. 2. The solid state image sensor of claim 1, wherein each of the pixels is configured to be reset in response to receiving an amount of light that is greater than the photosensor threshold. 3. The solid state image sensor of claim 1, wherein each of the counters and each of the pixels is configured to be reset at the end of an exposure period, each of the readout circuits being configured to generate, at the end of the exposure period, the readout signal as a digital value having: i) most significant bits based on the value stored in the corresponding counter, and ii) least significant bits based on the remainder stored in the corresponding pixel. 4. The solid state image sensor of claim 3, wherein the most significant bits and the least significant bits are respectively read from the pixel and the readout circuit simultaneously. 5. The solid state image sensor of claim 1, wherein each of the photosensors is a backside illumination (BSI) sensor. 6. The solid state image sensor of claim 1, wherein each of the pixels is configured to be reset a plurality of times within an exposure period. 7. 
The solid state image sensor of claim 1, wherein the readout signal is configured to be read from each of the readout circuits once for each of a plurality of exposure periods. 8. The solid state image sensor of claim 1, wherein the readout circuits correspond one-to-one with the pixels. 9. The solid state image sensor of claim 1, wherein the first and second wafers are stacked and fine hybrid bonded with a pitch of less than 1 μm. 10. A method, operable by a solid state image sensor comprising a first wafer comprising an array of pixels, each of the pixels comprising a photosensor, and a second wafer comprising an array of readout circuits, each of the readout circuits comprising a counter, the method comprising: incrementing one of the counters in response to a corresponding one of the photosensors receiving an amount of light that is greater than a photosensor threshold; and generating, via the readout circuit, a readout signal indicative of an amount of light received by the corresponding photosensor within an exposure period, the readout signal being based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel. 11. The method of claim 10, further comprising resetting the corresponding pixel in response to the corresponding photosensor receiving an amount of light that is greater than the photosensor threshold. 12. The method of claim 10, further comprising: resetting each of the counters and each of the pixels at the end of an exposure period; and generating, via the readout circuit and at the end of the exposure period, the readout signal as a digital value having: i) most significant bits based on the value stored in the corresponding counter, and ii) least significant bits based on the remainder stored in the corresponding pixel. 13. 
The method of claim 12, further comprising respectively reading the most significant bits and the least significant bits from the corresponding pixel and the readout circuit simultaneously. 14. The method of claim 10, wherein each of the photosensors is a backside illumination (BSI) sensor. 15. The method of claim 10, further comprising resetting the corresponding pixel a plurality of times within an exposure period. 16. The method of claim 10, further comprising reading out the readout signal from the readout circuit once for each of a plurality of exposure periods. 17. The method of claim 10, wherein the readout circuits correspond one-to-one with the pixels. 18. The method of claim 10, wherein the first and second wafers are stacked and fine hybrid bonded with a pitch of less than 1 μm. 19. An apparatus, comprising: means for incrementing one of a plurality of counters in response to a corresponding one of a plurality of photosensors receiving an amount of light that is greater than a photosensor threshold, the pixels being formed on a first wafer of an image sensor; and means for generating a readout signal indicative of an amount of light received by a corresponding one of the photosensors within an exposure period, the readout signal being based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel, the means for generating being located on a second wafer of the image sensor. 20. The apparatus of claim 19, wherein the means for generating comprises means for generating, via one of a plurality of readout circuits, the readout signal, and wherein each of the readout circuits comprises a corresponding one of the counters. 21. The apparatus of claim 19, further comprising means for resetting the corresponding pixel in response to the corresponding photosensor receiving an amount of light that is greater than the photosensor threshold. 22. 
The apparatus of claim 19, further comprising: means for resetting each of the counters and each of the pixels at the end of an exposure period; and means for generating, via the readout circuit and at the end of the exposure period, the readout signal as a digital value having: i) most significant bits based on the value stored in the corresponding counter, and ii) least significant bits based on the remainder stored in the corresponding pixel. 23. The apparatus of claim 21, further comprising means for respectively reading the most significant bits and the least significant bits from the corresponding pixel and the readout circuit simultaneously. 24. The apparatus of claim 19, further comprising means for resetting the corresponding pixel a plurality of times within an exposure period. 25. A non-transitory computer readable storage medium having stored thereon instructions that, when executed, cause a processor circuit of a device to: increment one of a plurality of counters in response to a corresponding one of a plurality of photosensors receiving an amount of light that is greater than a photosensor threshold, the pixels being formed on a first wafer of an image sensor; and generate, via one of a plurality of readout circuits, a readout signal indicative of an amount of light received by a corresponding one of the photosensors within an exposure period, the readout signal being based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel, the readout circuits being formed on a second wafer of the image sensor and each of the readout circuits comprising a corresponding one of the counters. 26. The non-transitory computer readable storage medium of claim 25, further having stored thereon instructions that, when executed, cause the processor circuit to reset the corresponding pixel in response to the corresponding photosensor receiving an amount of light that is greater than the photosensor threshold. 27. 
The non-transitory computer readable storage medium of claim 25, further having stored thereon instructions that, when executed, cause the processor circuit to: reset each of the counters and each of the pixels at the end of an exposure period; and generate, via the readout circuit and at the end of the exposure period, the readout signal as a digital value having: i) most significant bits based on the value stored in the corresponding counter, and ii) least significant bits based on the remainder stored in the corresponding pixel. 28. The non-transitory computer readable storage medium of claim 27, further having stored thereon instructions that, when executed, cause the processor circuit to respectively read the most significant bits and the least significant bits from the corresponding pixel and the readout circuit simultaneously. 29. The non-transitory computer readable storage medium of claim 25, wherein each of the photosensors is a backside illumination (BSI) sensor. 30. The non-transitory computer readable storage medium of claim 25, further having stored thereon instructions that, when executed, cause the processor circuit to reset the corresponding pixel a plurality of times within an exposure period.
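The counter-plus-remainder scheme claimed above can be sketched as follows: each time the integrated light in a pixel crosses the photosensor threshold, the pixel is reset and the per-pixel counter in the readout circuit increments; at the end of the exposure period the readout signal packs the counter value into the most significant bits and the sub-threshold remainder into the least significant bits. The threshold value and bit widths here are assumptions, not figures from the claims:

```python
PHOTOSENSOR_THRESHOLD = 255   # assumed full scale of the in-pixel remainder
REMAINDER_BITS = 8            # assumed width of the least significant bits

def expose_and_read(flux_per_step, steps):
    """Simulate one exposure period for a single pixel/readout-circuit pair."""
    counter = 0
    remainder = 0
    for _ in range(steps):
        remainder += flux_per_step
        if remainder > PHOTOSENSOR_THRESHOLD:
            counter += 1      # readout-circuit counter increments
            remainder = 0     # pixel is reset on each threshold crossing
    # Digital readout: MSBs from the counter, LSBs from the remainder.
    return (counter << REMAINDER_BITS) | remainder
```

Because bright pixels simply increment the counter more often instead of saturating, the composed value extends the dynamic range well beyond what the in-pixel remainder alone could represent.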
A high dynamic range solid state image sensor and camera system are disclosed. In one aspect, the solid state image sensor includes a first wafer including an array of pixels, each of the pixels comprising a photosensor, and a second wafer including an array of readout circuits. Each of the readout circuits is configured to output a readout signal indicative of an amount of light received by a corresponding one of the pixels and each of the readout circuits includes a counter. Each of the counters is configured to increment in response to the corresponding photosensor receiving an amount of light that is greater than a photosensor threshold. Each of the readout circuits is configured to generate the readout signal based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel.1. A solid state image sensor, comprising: a first wafer comprising an array of pixels, each of the pixels comprising a photosensor; and a second wafer comprising an array of readout circuits, each of the readout circuits being configured to output a readout signal indicative of an amount of light received by a corresponding one of the pixels, each of the readout circuits comprising a counter, wherein each of the counters is configured to increment in response to the corresponding photosensor receiving an amount of light that is greater than a photosensor threshold, and wherein each of the readout circuits is configured to generate the readout signal based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel. 2. The solid state image sensor of claim 1, wherein each of the pixels is configured to be reset in response to receiving an amount of light that is greater than the photosensor threshold. 3. 
The solid state image sensor of claim 1, wherein each of the counters and each of the pixels is configured to be reset at the end of an exposure period, each of the readout circuits being configured to generate, at the end of the exposure period, the readout signal as a digital value having: i) most significant bits based on the value stored in the corresponding counter, and ii) least significant bits based on the remainder stored in the corresponding pixel. 4. The solid state image sensor of claim 3, wherein the most significant bits and the least significant bits are respectively read from the pixel and the readout circuit simultaneously. 5. The solid state image sensor of claim 1, wherein each of the photosensors is a backside illumination (BSI) sensor. 6. The solid state image sensor of claim 1, wherein each of the pixels is configured to be reset a plurality of times within an exposure period. 7. The solid state image sensor of claim 1, wherein the readout signal is configured to be read from each of the readout circuits once for each of a plurality of exposure periods. 8. The solid state image sensor of claim 1, wherein the readout circuits correspond one-to-one with the pixels. 9. The solid state image sensor of claim 1, wherein the first and second wafers are stacked and fine hybrid bonded with a pitch of less than 1 μm. 10. 
A method, operable by a solid state image sensor comprising a first wafer comprising an array of pixels, each of the pixels comprising a photosensor, and a second wafer comprising an array of readout circuits, each of the readout circuits comprising a counter, the method comprising: incrementing one of the counters in response to a corresponding one of the photosensors receiving an amount of light that is greater than a photosensor threshold; and generating, via the readout circuit, a readout signal indicative of an amount of light received by the corresponding photosensor within an exposure period, the readout signal being based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel. 11. The method of claim 10, further comprising resetting the corresponding pixel in response to the corresponding photosensor receiving an amount of light that is greater than the photosensor threshold. 12. The method of claim 10, further comprising: resetting each of the counters and each of the pixels at the end of an exposure period; and generating, via the readout circuit and at the end of the exposure period, the readout signal as a digital value having: i) most significant bits based on the value stored in the corresponding counter, and ii) least significant bits based on the remainder stored in the corresponding pixel. 13. The method of claim 12, further comprising respectively reading the most significant bits and the least significant bits from the corresponding pixel and the readout circuit simultaneously. 14. The method of claim 10, wherein each of the photosensors is a backside illumination (BSI) sensor. 15. The method of claim 10, further comprising resetting the corresponding pixel a plurality of times within an exposure period. 16. The method of claim 10, further comprising reading out the readout signal from the readout circuit once for each of a plurality of exposure periods. 17. 
The method of claim 10, wherein the readout circuits correspond one-to-one with the pixels. 18. The method of claim 10, wherein the first and second wafers are stacked and fine hybrid bonded with a pitch of less than 1 μm. 19. An apparatus, comprising: means for incrementing one of a plurality of counters in response to a corresponding one of a plurality of photosensors receiving an amount of light that is greater than a photosensor threshold, the pixels being formed on a first wafer of an image sensor; and means for generating a readout signal indicative of an amount of light received by a corresponding one of the photosensors within an exposure period, the readout signal being based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel, the means for generating being located on a second wafer of the image sensor. 20. The apparatus of claim 19, wherein the means for generating comprises means for generating, via one of a plurality of readout circuits, the readout signal, and wherein each of the readout circuits comprises a corresponding one of the counters. 21. The apparatus of claim 19, further comprising means for resetting the corresponding pixel in response to the corresponding photosensor receiving an amount of light that is greater than the photosensor threshold. 22. The apparatus of claim 19, further comprising: means for resetting each of the counters and each of the pixels at the end of an exposure period; and means for generating, via the readout circuit and at the end of the exposure period, the readout signal as a digital value having: i) most significant bits based on the value stored in the corresponding counter, and ii) least significant bits based on the remainder stored in the corresponding pixel. 23. The apparatus of claim 21, further comprising means for respectively reading the most significant bits and the least significant bits from the corresponding pixel and the readout circuit simultaneously. 24. 
The apparatus of claim 19, further comprising means for resetting the corresponding pixel a plurality of times within an exposure period. 25. A non-transitory computer readable storage medium having stored thereon instructions that, when executed, cause a processor circuit of a device to: increment one of a plurality of counters in response to a corresponding one of a plurality of photosensors receiving an amount of light that is greater than a photosensor threshold, the pixels being formed on a first wafer of an image sensor; and generate, via one of a plurality of readout circuits, a readout signal indicative of an amount of light received by a corresponding one of the photosensors within an exposure period, the readout signal being based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel, the readout circuits being formed on a second wafer of the image sensor and each of the readout circuits comprising a corresponding one of the counters. 26. The non-transitory computer readable storage medium of claim 25, further having stored thereon instructions that, when executed, cause the processor circuit to reset the corresponding pixel in response to the corresponding photosensor receiving an amount of light that is greater than the photosensor threshold. 27. The non-transitory computer readable storage medium of claim 25, further having stored thereon instructions that, when executed, cause the processor circuit to: reset each of the counters and each of the pixels at the end of an exposure period; and generate, via the readout circuit and at the end of the exposure period, the readout signal as a digital value having: i) most significant bits based on the value stored in the corresponding counter, and ii) least significant bits based on the remainder stored in the corresponding pixel. 28. 
The non-transitory computer readable storage medium of claim 27, further having stored thereon instructions that, when executed, cause the processor circuit to respectively read the most significant bits and the least significant bits from the corresponding pixel and the readout circuit simultaneously. 29. The non-transitory computer readable storage medium of claim 25, wherein each of the photosensors is a backside illumination (BSI) sensor. 30. The non-transitory computer readable storage medium of claim 25, further having stored thereon instructions that, when executed, cause the processor circuit to reset the corresponding pixel a plurality of times within an exposure period.
2,600
10,132
10,132
15,283,060
2,619
Invoking a function of a mixed reality interaction enabled object is provided. In response to determining that an input was received selecting the mixed reality interaction enabled object to perform an action, an interface is received showing a set of available application programming interfaces and functions corresponding to the mixed reality interaction enabled object. A selection of one of the set of available application programming interfaces and functions corresponding to the mixed reality interaction enabled object is received via the interface. The action corresponding to the selection of the one of the set of available application programming interfaces and functions is invoked on the mixed reality interaction enabled object.
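The flow in the abstract above is: discover the object's available application programming interfaces and functions, present them in an interface, then invoke the user's selection on the object. A minimal sketch, where the object representation and the function names (`turn_on`, `turn_off`) are hypothetical:

```python
def invoke_selected(obj, selection):
    """Invoke one of the functions an MR interaction enabled object advertises.

    `obj` is assumed to be a dict whose "functions" entry maps advertised
    function names to callables; a real system would obtain this set via the
    object's application programming interfaces.
    """
    functions = obj["functions"]
    if selection not in functions:
        raise ValueError(f"{selection!r} is not offered by this object")
    return functions[selection]()

# Hypothetical MR-enabled object advertising two functions.
lamp = {"functions": {"turn_on": lambda: "on", "turn_off": lambda: "off"}}
```

The lookup-then-call step mirrors the abstract: only functions the object itself advertised can be invoked, so the interface shown to the user doubles as the authorization surface.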
1-20. (canceled) 21. A computer-implemented method of invoking a function of a mixed reality interaction enabled object on a computing system, the computer-implemented method comprising: determining by the computing system whether there is one or more mixed reality interaction enabled objects at a location; in response to determining that there is one or more mixed reality interaction enabled objects at the location, enabling a user to select at least one of the one or more mixed reality interaction enabled objects with which to interact; selecting, by the user, at least one of the one or more mixed reality interaction enabled objects with which to interact; obtaining, in response to the user selecting the at least one of the one or more mixed reality interaction enabled objects with which to interact, from the selected at least one of the one or more mixed reality interaction enabled objects, one or more application programming interfaces, the one or more application programming interfaces providing one or more functions that may be invoked by the user; displaying on a display system associated with the computing system the one or more application programming interfaces to the user; and invoking, upon the user selecting at least one of the one or more provided functions, the at least one selected function. 22. The computer-implemented method of claim 21, wherein enabling a user to select at least one of the one or more mixed reality interaction enabled objects with which to interact includes enabling the user to use an imaging device associated with the computing system with which to focus on the one or more mixed reality interaction enabled objects. 23. 
The computer-implemented method of claim 21, wherein obtaining from the selected at least one of the one or more mixed reality interaction enabled objects one or more application programming interfaces further includes obtaining an identifier and an access authorization from the selected at least one of the one or more mixed reality interaction enabled objects, the identifier for uniquely identifying the selected at least one or more mixed reality interaction enabled objects and the access authorization for determining whether the user is authorized to interact with the selected at least one or more mixed reality interaction enabled objects. 24. The computer-implemented method of claim 21, further comprising: obtaining, in response to the user selecting two or more mixed reality interaction enabled objects with which to interact, one or more application programming interfaces from each one of the two or more selected mixed reality interaction enabled objects; determining, using the obtained one or more application programming interfaces from each one of the two or more selected mixed reality interaction enabled objects, whether one of the two or more selected mixed reality interaction enabled objects has a function in common with at least another one of the two or more selected mixed reality interaction enabled objects; displaying one application programming interface to the user displaying the common function; and invoking, upon the user selecting the common function, the common function on the two or more selected mixed reality interaction enabled objects having the common function. 25. The computer-implemented method of claim 21, wherein the user transmits to users remote to the location the determined one or more mixed reality interaction enabled objects such that the remote users may interact with the determined one or more mixed reality interaction enabled objects. 26. 
The computer-implemented method of claim 21, wherein determining whether there is one or more mixed reality interaction enabled objects at the location includes the determined one or more mixed reality interaction enabled objects providing a cue to the user, the cue informing the user that the determined one or more mixed reality interaction enabled objects are mixed reality objects. 27. The computer-implemented method of claim 26, wherein the cue includes visual marks that can be read by the computing system. 28. The computer-implemented method of claim 27, wherein the cue includes one of radio frequency identification tags, bar codes, and quick response codes. 29. The computer-implemented method of claim 21, further comprising: allowing the user to add an information field to the determined one or more mixed reality interaction enabled objects in order to add information. 30. The computer-implemented method of claim 29, wherein the information includes a tag, the tag for assigning ownership of the determined one or more mixed reality interaction enabled objects to a person, or for providing an expiration date in cases where the determined one or more mixed reality interaction enabled objects are food objects. 31. The computer-implemented method of claim 21, wherein the one or more application programming interfaces include an application programming interface of a third party, the application programming interface of the third party allowing interactions with the third party corresponding to the determined one or more mixed reality interaction enabled objects. 32. 
A computing system for invoking a function of a mixed reality interaction enabled object, the computing system comprising: at least one storage device for storing program code; and at least one processor for processing the program code to: determine whether there is one or more mixed reality interaction enabled objects at a location; in response to determining that there is one or more mixed reality interaction enabled objects at the location, enable a user to select at least one of the one or more mixed reality interaction enabled objects with which to interact; in response to the user selecting at least one of the one or more mixed reality interaction enabled objects with which to interact, obtain from the selected at least one of the one or more mixed reality interaction enabled objects, one or more application programming interfaces, the one or more application programming interfaces providing one or more functions that may be invoked by the user; display on a display system associated with the computing system the one or more application programming interfaces to the user; and invoke, upon the user selecting at least one of the one or more provided functions, the at least one selected function. 33. 
The computing system of claim 32, wherein the program code is further processed to: obtain, in response to the user selecting two or more mixed reality interaction enabled objects with which to interact, one or more application programming interfaces from each one of the two or more selected mixed reality interaction enabled objects; determine, using the obtained one or more application programming interfaces from each one of the two or more selected mixed reality interaction enabled objects, whether one of the two or more selected mixed reality interaction enabled objects has a function in common with at least another one of the two or more selected mixed reality interaction enabled objects; display one application programming interface to the user displaying the common function; and invoke, upon the user selecting the common function, the common function on the two or more selected mixed reality interaction enabled objects having the common function. 34. The computing system of claim 32, wherein the user transmits to users remote to the location the determined one or more mixed reality interaction enabled objects such that the remote users may interact with the determined one or more mixed reality interaction enabled objects. 35. The computing system of claim 32, wherein the program code is further processed to: allow the user to add an information field to the determined one or more mixed reality interaction enabled objects in order to add information, the information including assigning ownership of the determined one or more mixed reality interaction enabled objects to a person, or providing an expiration date in cases where the determined one or more mixed reality interaction enabled objects are food objects. 36. 
A computer program product for invoking a function of a mixed reality interaction enabled object on a computing system, the computer program product comprising: a computer readable storage medium having computer readable program code embodied therewith for execution on the computing system, the computer readable program code comprising computer readable program code configured to: determine whether there is one or more mixed reality interaction enabled objects at a location; in response to determining that there is one or more mixed reality interaction enabled objects at the location, enable a user to select at least one of the one or more mixed reality interaction enabled objects with which to interact; in response to the user selecting at least one of the one or more mixed reality interaction enabled objects with which to interact, obtain from the selected at least one of the one or more mixed reality interaction enabled objects, one or more application programming interfaces, the one or more application programming interfaces providing one or more functions that may be invoked by the user; display on a display system associated with the computing system the one or more application programming interfaces to the user; and invoke, upon the user selecting at least one of the one or more provided functions, the at least one selected function. 37. 
The computer program product of claim 36, wherein obtaining from the selected at least one of the one or more mixed reality interaction enabled objects one or more application programming interfaces further includes obtaining an identifier and an access authorization from the selected at least one of the one or more mixed reality interaction enabled objects, the identifier for uniquely identifying the selected at least one or more mixed reality interaction enabled objects and the access authorization for determining whether the user is authorized to interact with the selected at least one or more mixed reality interaction enabled objects. 38. The computer program product of claim 36, further comprising computer readable program code configured to: obtain, in response to the user selecting two or more mixed reality interaction enabled objects with which to interact, one or more application programming interfaces from each one of the two or more selected mixed reality interaction enabled objects; determine, using the obtained one or more application programming interfaces from each one of the two or more selected mixed reality interaction enabled objects, whether one of the two or more selected mixed reality interaction enabled objects has a function in common with at least another one of the two or more selected mixed reality interaction enabled objects; display one application programming interface to the user displaying the common function; and invoke, upon the user selecting the common function, the common function on the two or more selected mixed reality interaction enabled objects having the common function. 39. The computer program product of claim 36, wherein the user transmits to users remote to the location the determined one or more mixed reality interaction enabled objects such that the remote users may interact with the determined one or more mixed reality interaction enabled objects. 40. 
The computer program product of claim 36, further comprising computer readable program code configured to: allow the user to add an information field to the determined one or more mixed reality interaction enabled objects in order to add information.
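The method of claim 21 above walks through a discover-select-obtain-invoke flow: find mixed reality interaction enabled objects at a location, obtain their application programming interfaces, and invoke a user-selected function. A minimal Python sketch of that flow follows; every class, function, and object name here is an illustrative assumption, not an API from the application or any real library.

```python
# Hypothetical sketch of the claimed flow: discover MR-enabled objects at a
# location, obtain each object's advertised APIs, and invoke a selected
# function. All names are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class MRObject:
    """A mixed reality interaction enabled object exposing named functions."""
    identifier: str
    functions: Dict[str, Callable[[], str]] = field(default_factory=dict)

    def get_apis(self) -> List[str]:
        # The object advertises the functions a user may invoke (claim 21).
        return sorted(self.functions)

    def invoke(self, name: str) -> str:
        return self.functions[name]()


def discover_objects(by_place: Dict[str, List[MRObject]], place: str) -> List[MRObject]:
    """Determine whether there are MR-enabled objects at the location."""
    return by_place.get(place, [])


def invoke_selected(obj: MRObject, selected: str) -> str:
    apis = obj.get_apis()        # obtain the APIs from the selected object
    if selected not in apis:     # the "display and select" step reduced to a check
        raise ValueError(f"{selected!r} not offered by {obj.identifier}")
    return obj.invoke(selected)


lamp = MRObject("lamp-01", {"turn_on": lambda: "on", "turn_off": lambda: "off"})
room = {"kitchen": [lamp]}
found = discover_objects(room, "kitchen")
print(invoke_selected(found[0], "turn_on"))  # -> on
```

In practice the discovery step would rely on the cues of claims 26-28 (RFID tags, bar codes, quick response codes) rather than a lookup table, and the access-authorization check of claim 23 would gate `invoke_selected`.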
Invoking a function of a mixed reality interaction enabled object is provided. In response to determining that an input was received selecting the mixed reality interaction enabled object to perform an action, an interface is received showing a set of available application programming interfaces and functions corresponding to the mixed reality interaction enabled object. A selection of one of the set of available application programming interfaces and functions corresponding to the mixed reality interaction enabled object is received via the interface. The action corresponding to the selection of the one of the set of available application programming interfaces and functions is invoked on the mixed reality interaction enabled object.
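The multi-object claims above (24, 33, and 38) intersect the application programming interfaces obtained from each selected object to find a common function and then invoke it on all of them. A short Python sketch of that intersection, with all names assumed for illustration:

```python
# Illustrative sketch (names assumed, not from the application) of finding a
# function common to two or more selected mixed reality objects and invoking
# it on each object that offers it.

from typing import Callable, Dict, List


def common_functions(api_sets: List[List[str]]) -> List[str]:
    """Intersect the API name lists obtained from each selected object."""
    common = set(api_sets[0])
    for apis in api_sets[1:]:
        common &= set(apis)
    return sorted(common)


def invoke_on_all(objects: List[Dict[str, Callable[[], str]]], name: str) -> List[str]:
    """Invoke the common function on every selected object."""
    return [obj[name]() for obj in objects]


tv = {"power_off": lambda: "tv off", "mute": lambda: "tv muted"}
lamp = {"power_off": lambda: "lamp off"}
shared = common_functions([list(tv), list(lamp)])
print(shared)                                   # -> ['power_off']
print(invoke_on_all([tv, lamp], "power_off"))   # -> ['tv off', 'lamp off']
```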
2,600
10,133
10,133
14,320,942
2,641
A message-processing resource receives location information indicating a current location of a mobile device in a network. A user of the mobile device subscribes to use of the network. The message-processing resource maps an identity of the user to a subscriber account and candidate messages associated with the user. The message-processing resource utilizes the location information to identify a message pertinent to the user and the current location of the mobile device. The message-processing resource selects the message from the candidate messages and provides notification of the selected message to the user of the mobile device.
1. A method comprising: receiving location information indicating a current location of a mobile device in a network, a user of the mobile device subscribing to use of the network; mapping an identity of the user to a subscriber account and candidate messages associated with the user; utilizing the location information to identify a message pertinent to the user and the current location of the mobile device, the message selected from the candidate messages; and providing notification of the selected message to the user of the mobile device. 2. The method as in claim 1, wherein utilizing the location information to identify the message pertinent to the user and the current location of the mobile device includes: accessing a repository to identify the candidate messages associated with the subscriber account; producing a filter based at least in part on the location information; and applying the filter to the candidate messages to select the message pertinent to the user at the current location of the mobile device. 3. The method as in claim 1, wherein mapping the identity of the user to the subscriber account includes: receiving a unique network address assigned to the mobile device; and identifying the subscriber account using the unique network address. 4. The method as in claim 1, wherein the subscriber account is a fee-based subscriber account managed by a network service provider, the network service provider providing the user access to the network at multiple access points. 5. 
The method as in claim 4, wherein receiving location information indicating the current location of the mobile device operated by the user includes detecting that the mobile device operated by the user has established a connection with a particular access point in the network provided by the network service provider, the particular access point disposed at a known geographical location in the network; and wherein providing notification of the selected message to the user of the mobile device includes: communicating the selected message over the network provided by the network service provider to the mobile device. 6. The method as in claim 1 further comprising: detecting a time of day that the user is located at the current location; and in addition to utilizing the location information to identify the message pertinent to the user and the current location of the mobile device, utilizing the detected time of day as an additional parameter to identify the message pertinent to the user. 7. The method as in claim 1 further comprising: on a prior occasion of the user operating the mobile device at the current location as specified by the location information, monitoring habits of the user operating the mobile device at the current location; and utilizing habits of the user operating the mobile device on the prior occasion as a basis to identify the message pertinent to the user of the current location of the mobile device. 8. The method as in claim 1 further comprising: wherein utilizing the location information to identify a message pertinent to the user in the current location of the mobile device further comprises: producing the message to include a promotion of goods available from a retail entity located in a vicinity of the current location. 9. 
The method as in claim 1, wherein the current location is a subscriber domain in which the user resides; wherein mapping the identity of the user to a subscriber account further includes analyzing a history of content previously delivered to the user in the subscriber domain; and the method further comprising: based on the history of the previously delivered content, producing the message to indicate availability of newly available content for retrieval and viewing by the user in the subscriber domain. 10. The method as in claim 1, wherein the current location is a remote location with respect to a subscriber domain in which the user domiciles, the subscriber domain having access to the network via a shared cable network provided by a respective network service provider managing the network, the subscriber domain including customer premises equipment to retrieve content over the shared cable network; and wherein utilizing the location information to identify the message pertinent to the user at the current location includes: detecting that the user is located in a vicinity of a distributor that makes the customer premises equipment available to subscribers of the shared cable network, the message indicating the location of the distributor. 11. The method as in claim 1 further comprising: detecting movement of the mobile device into a geographical region in which the user domiciles; and in response to detecting movement of the mobile device into the geographical region, producing the message to include a notification relevant to the geographical region. 12. The method as in claim 1, wherein receiving the location information includes: detecting that the mobile device operated by the user has established a connection with a particular access point in the network. 13. 
A method comprising: storing a set of candidate messages, the candidate messages allocated for selective distribution to a user of a mobile device; receiving location information indicating a current location of the mobile device in a network to which the user subscribes; producing a filter based at least in part on the location information and an identity of the user; and applying the filter to the set of candidate messages to select a message in which to forward over the network to the mobile device. 14. The method as in claim 13 further comprising: producing tag data for each respective candidate message in the set, the tag data indicating circumstances in which to selectively forward the respective candidate message to the mobile device operated by the user; and wherein applying the filter includes matching the location information to the tag data to select the message to forward to the mobile device. 15. The method as in claim 14, wherein producing the tag data includes: generating first tag data to indicate geographical information; and assigning the first tag data to the set of candidate messages, the first tag data indicating a respective geographical location to which a respective message in the set is eligible for delivery to the mobile device. 16. The method as in claim 15, wherein producing the tag data includes: generating second tag data to indicate time information; and assigning the second tag data to the set of candidate messages, the second tag data indicating a time in which a respective candidate message in the set is eligible for delivery to the mobile device. 17. The method as in claim 13 further comprising: initiating delivery of the selected message over the network to the mobile device. 18. 
A computer system comprising: computer processor hardware; and a hardware storage resource coupled to the computer processor hardware, the hardware storage resource storing instructions that, when executed by the computer processor hardware, causes the computer processor hardware to perform operations of: receiving location information indicating a current location of a mobile device in a network, a user of the mobile device subscribing to use of the network; mapping an identity of the user to a subscriber account and candidate messages associated with the user; utilizing the location information to identify a message pertinent to the user and the current location of the mobile device, the message selected from the candidate messages; and providing notification of the selected message to the user of the mobile device. 19. The computer system as in claim 18, wherein utilizing the location information to identify the message pertinent to the user and the current location of the mobile device includes: accessing a repository to identify the candidate messages associated with the subscriber account; producing a filter based at least in part on the location information; and applying the filter to the candidate messages to select the message pertinent to the user at the current location of the mobile device. 20. The computer system as in claim 18, wherein mapping the identity of the user to the subscriber account includes: receiving a unique network address assigned to the mobile device; and identifying the subscriber account using the unique network address. 21. The computer system as in claim 18, wherein the subscriber account is a fee-based subscriber account managed by a network service provider, the network service provider providing the user access to the network at multiple access points. 22. 
The computer system as in claim 21, wherein receiving location information indicating the current location of the mobile device operated by the user includes detecting that the mobile device operated by the user has established a connection with a particular access point in the network provided by the network service provider, the particular access point disposed at a known geographical location in the network; and wherein providing notification of the selected message to the user of the mobile device includes: communicating the selected message over the network provided by the network service provider to the mobile device. 23. The computer system as in claim 18, wherein the computer processor hardware further performs operations of: detecting a time of day that the user is located at the current location; and in addition to utilizing the location information to identify the message pertinent to the user and the current location of the mobile device, utilizing the detected time of day as an additional parameter to identify the message pertinent to the user. 24. The computer system as in claim 18, wherein the computer processor hardware further performs operations of: on a prior occasion of the user operating the mobile device at the current location as specified by the location information, monitoring habits of the user operating the mobile device at the current location; and utilizing habits of the user operating the mobile device on the prior occasion as a basis to identify the message pertinent to the user of the current location of the mobile device. 25. 
The computer system as in claim 18, wherein the computer processor hardware further performs operations of: receiving input specifying a retail entity with which the user has conducted business on a prior occasion of operating the mobile device at the current location; and wherein utilizing the location information to identify a message pertinent to the user in the current location of the mobile device further comprises: producing the message to include a promotion of goods available from the retail entity. 26. The computer system as in claim 18, wherein the current location is a subscriber domain in which the user resides; wherein mapping the identity of the user to a subscriber account further includes analyzing a history of content previously delivered to the user in the subscriber domain; and the method further comprising: based on the history of the previously delivered content, producing the message to indicate availability of newly available content for retrieval and viewing by the user in the subscriber domain. 27. The computer system as in claim 18, wherein the current location is a remote location with respect to a subscriber domain in which the user domiciles, the subscriber domain having access to the network via a shared cable network provided by a respective network service provider managing the network, the subscriber domain including customer premises equipment to retrieve content over the shared cable network; and wherein utilizing the location information to identify the message pertinent to the user at the current location includes: detecting that the user is located in a vicinity of a distributor that makes the customer premises equipment available to subscribers of the shared cable network, the message indicating the location of the distributor. 28. 
The computer system as in claim 18, wherein the computer processor hardware further performs operations of: detecting movement of the mobile device into a geographical region in which the user domiciles; and in response to detecting movement of the mobile device into the geographical region, producing the message to include a notification relevant to the geographical region. 29. The computer system as in claim 18, wherein receiving the location information includes: detecting that the mobile device operated by the user has established a connection with a particular access point in the network. 30. Computer-readable hardware storage having instructions stored thereon, the instructions, when carried out by computer processor hardware, causes the computer processor hardware to perform operations of: receiving location information indicating a current location of a mobile device in a network to which a user of the mobile device subscribes; producing a filter based at least in part on the location information and an identity of the user; and applying the filter to a set of candidate messages to select a message in which to forward over the network to the mobile device, the candidate messages allocated for selective distribution to the user of the mobile device.
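The claims above describe a pipeline: receive a mobile device's location, map the user's identity to a subscriber account, then select a message from the account's candidate messages by applying a location-based filter. A minimal sketch of that flow, assuming dictionary-backed account and message stores (all names, fields, and data shapes here are illustrative, not from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class CandidateMessage:
    text: str
    # Hypothetical tag data: locations at which this message is eligible
    eligible_locations: set = field(default_factory=set)

def select_message(location, user_id, accounts, candidates):
    """Map user identity to a subscriber account, then filter that
    account's candidate messages by the device's current location."""
    account = accounts.get(user_id)        # identity -> subscriber account
    if account is None:
        return None
    msgs = candidates.get(account, [])     # candidate messages for the account
    # Apply the location filter: keep messages tagged for this location
    matches = [m for m in msgs if location in m.eligible_locations]
    return matches[0] if matches else None
```

In this sketch the "filter" is simply set membership on a location tag; the patent leaves the filter construction abstract, so any scoring or ranking scheme could stand in its place.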
A message-processing resource receives location information indicating a current location of a mobile device in a network. A user of the mobile device subscribes to use of the network. The message-processing resource maps an identity of the user to a subscriber account and candidate messages associated with the user. The message-processing resource utilizes the location information to identify a message pertinent to the user and the current location of the mobile device. The message-processing resource selects the message from the candidate messages and provides notification of the selected message to the user of the mobile device.1. A method comprising: receiving location information indicating a current location of a mobile device in a network, a user of the mobile device subscribing to use of the network; mapping an identity of the user to a subscriber account and candidate messages associated with the user; utilizing the location information to identify a message pertinent to the user and the current location of the mobile device, the message selected from the candidate messages; and providing notification of the selected message to the user of the mobile device. 2. The method as in claim 1, wherein utilizing the location information to identify the message pertinent to the user and the current location of the mobile device includes: accessing a repository to identify the candidate messages associated with the subscriber account; producing a filter based at least in part on the location information; and applying the filter to the candidate messages to select the message pertinent to the user at the current location of the mobile device. 3. The method as in claim 1, wherein mapping the identity of the user to the subscriber account includes: receiving a unique network address assigned to the mobile device; and identifying the subscriber account using the unique network address. 4. 
The method as in claim 1, wherein the subscriber account is a fee-based subscriber account managed by a network service provider, the network service provider providing the user access to the network at multiple access points. 5. The method as in claim 4, wherein receiving location information indicating the current location of the mobile device operated by the user includes detecting that the mobile device operated by the user has established a connection with a particular access point in the network provided by the network service provider, the particular access point disposed at a known geographical location in the network; and wherein providing notification of the selected message to the user of the mobile device includes: communicating the selected message over the network provided by the network service provider to the mobile device. 6. The method as in claim 1 further comprising: detecting a time of day that the user is located at the current location; and in addition to utilizing the location information to identify the message pertinent to the user and the current location of the mobile device, utilizing the detected time of day as an additional parameter to identify the message pertinent to the user. 7. The method as in claim 1 further comprising: on a prior occasion of the user operating the mobile device at the current location as specified by the location information, monitoring habits of the user operating the mobile device at the current location; and utilizing habits of the user operating the mobile device on the prior occasion as a basis to identify the message pertinent to the user of the current location of the mobile device. 8. 
The method as in claim 1 further comprising: wherein utilizing the location information to identify a message pertinent to the user in the current location of the mobile device further comprises: producing the message to include a promotion of goods available from a retail entity located in a vicinity of the current location. 9. The method as in claim 1, wherein the current location is a subscriber domain in which the user resides; wherein mapping the identity of the user to a subscriber account further includes analyzing a history of content previously delivered to the user in the subscriber domain; and the method further comprising: based on the history of the previously delivered content, producing the message to indicate availability of newly available content for retrieval and viewing by the user in the subscriber domain. 10. The method as in claim 1, wherein the current location is a remote location with respect to a subscriber domain in which the user domiciles, the subscriber domain having access to the network via a shared cable network provided by a respective network service provider managing the network, the subscriber domain including customer premises equipment to retrieve content over the shared cable network; and wherein utilizing the location information to identify the message pertinent to the user at the current location includes: detecting that the user is located in a vicinity of a distributor that makes the customer premises equipment available to subscribers of the shared cable network, the message indicating the location of the distributor. 11. The method as in claim 1 further comprising: detecting movement of the mobile device into a geographical region in which the user domiciles; and in response to detecting movement of the mobile device into the geographical region, producing the message to include a notification relevant to the geographical region. 12. 
The method as in claim 1, wherein receiving the location information includes: detecting that the mobile device operated by the user has established a connection with a particular access point in the network. 13. A method comprising: storing a set of candidate messages, the candidate messages allocated for selective distribution to a user of a mobile device; receiving location information indicating a current location of the mobile device in a network to which the user subscribes; producing a filter based at least in part on the location information and an identity of the user; and applying the filter to the set of candidate messages to select a message in which to forward over the network to the mobile device. 14. The method as in claim 13 further comprising: producing tag data for each respective candidate message in the set, the tag data indicating circumstances in which to selectively forward the respective candidate message to the mobile device operated by the user; and wherein applying the filter includes matching the location information to the tag data to select the message to forward to the mobile device. 15. The method as in claim 14, wherein producing the tag data includes: generating first tag data to indicate geographical information; and assigning the first tag data to the set of candidate messages, the first tag data indicating a respective geographical location to which a respective message in the set is eligible for delivery to the mobile device. 16. The method as in claim 15, wherein producing the tag data includes: generating second tag data to indicate time information; and assigning the second tag data to the set of candidate messages, the second tag data indicating a time in which a respective candidate message in the set is eligible for delivery to the mobile device. 17. The method as in claim 13 further comprising: initiating delivery of the selected message over the network to the mobile device. 18.
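Claims 14 through 16 refine the filter into tag matching: each candidate message carries first tag data (an eligible geographical location) and second tag data (an eligible delivery time). A small sketch of that matching step, with hypothetical tag keys and a half-open hour window standing in for the unspecified tag encoding:

```python
def eligible(msg_tags, location, hour):
    """Match the device's location and time of day against one
    message's tag data (geo tag plus an [lo, hi) hour window)."""
    geo_ok = location in msg_tags.get("geo", set())
    time_lo, time_hi = msg_tags.get("time", (0, 24))
    return geo_ok and time_lo <= hour < time_hi

def apply_filter(candidates, location, hour):
    """candidates: list of (message, tag_data) pairs; keep messages
    whose tag data matches the current location and hour."""
    return [msg for msg, tags in candidates if eligible(tags, location, hour)]
```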
A computer system comprising: computer processor hardware; and a hardware storage resource coupled to the computer processor hardware, the hardware storage resource storing instructions that, when executed by the computer processor hardware, causes the computer processor hardware to perform operations of: receiving location information indicating a current location of a mobile device in a network, a user of the mobile device subscribing to use of the network; mapping an identity of the user to a subscriber account and candidate messages associated with the user; utilizing the location information to identify a message pertinent to the user and the current location of the mobile device, the message selected from the candidate messages; and providing notification of the selected message to the user of the mobile device. 19. The computer system as in claim 18, wherein utilizing the location information to identify the message pertinent to the user and the current location of the mobile device includes: accessing a repository to identify the candidate messages associated with the subscriber account; producing a filter based at least in part on the location information; and applying the filter to the candidate messages to select the message pertinent to the user at the current location of the mobile device. 20. The computer system as in claim 18, wherein mapping the identity of the user to the subscriber account includes: receiving a unique network address assigned to the mobile device; and identifying the subscriber account using the unique network address. 21. The computer system as in claim 18, wherein the subscriber account is a fee-based subscriber account managed by a network service provider, the network service provider providing the user access to the network at multiple access points.
2,600
10,134
10,134
15,610,824
2,647
To control content distribution to plural mobile entities, a device for managing the distribution of the content may determine mobile entities which participate in sharing the same content and which are located in the same region. The mobile entities respectively have a first interface for communication with a mobile communication network and a second interface for forming an ad-hoc network with another mobile entity. A message indicating at least one peer address of a peer mobile entity may be selectively transmitted to a mobile entity. The peer address allows the mobile entity to retrieve at least a piece of the content over the second interface from another mobile entity.
1. A method of managing distribution of content to a mobile entity, the mobile entity having a first interface for communicating with a mobile communication network and a second interface for forming an ad-hoc network with another mobile entity, the method comprising: receiving, over the mobile communication network, a request from the mobile entity, the request including a content identifier identifying the content; determining, based on the received request, a region in which the mobile entity is located; retrieving, based on the content identifier, information on other mobile entities which participate in sharing the content; identifying a subset of the other mobile entities based on the determined region; transmitting, via the mobile communication network, a message to the mobile entity, the message including at least one of a peer address of a peer mobile entity included in the identified subset or a source address, the peer address allowing the mobile entity to retrieve at least a piece of the content over the second interface from the peer mobile entity. 2. A method of retrieving content by a mobile entity which has a first interface for communicating with a mobile communication network and a second interface for forming an ad-hoc network with another mobile entity, the method comprising: transmitting, over the first interface, a request including a content identifier identifying the content; processing a message received over the first interface in response to the request, the message including at least one of a source address or a peer address; selectively, in response to the message including the peer address, retrieving at least a piece of the content over the second interface from another mobile entity using the peer address.
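Claim 1 above describes a server-side decision: look up the participants sharing the requested content, narrow them to the requester's region, and reply with either a peer address (so the piece can be fetched over the ad-hoc interface) or a source address as fallback. A minimal sketch under those assumptions (the registry layout and reply keys are illustrative):

```python
def handle_request(content_id, requester_region, participants, source_address):
    """participants: content_id -> list of (peer_address, region) for
    mobile entities that participate in sharing that content."""
    peers = participants.get(content_id, [])
    # Identify the subset of participants located in the requester's region
    nearby = [addr for addr, region in peers if region == requester_region]
    if nearby:
        # Peer address lets the requester fetch over the ad-hoc interface
        return {"peer_address": nearby[0]}
    # No regional peer: fall back to retrieving from the origin source
    return {"source_address": source_address}
```

The region match is the key offload decision: only when a same-region peer exists does traffic move off the mobile communication network onto the ad-hoc link.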
2,600
10,135
10,135
15,306,248
2,649
Systems and methods for controlling uplink (UL) power allocation in a user equipment (UE) operating in a communication network are provided. The method includes: selecting between at least a first UL power allocation technique and a second power allocation technique for use in the UE; and using the selected power allocation technique in the UE to transmit uplink data by allocating transmit power between at least two carriers on which the uplink data is transmitted.
1. A method for controlling uplink (UL) power allocation in a user equipment (UE) operating in a communication network, the method comprising: selecting between at least a first UL power allocation technique and a second power allocation technique for use in the UE; and using the selected power allocation technique in the UE to transmit uplink data by allocating transmit power between at least two carriers on which the uplink data is transmitted. 2. The method of claim 1, wherein the first UL power allocation technique is a parallel split power allocation which is determined based on dedicated physical control channel (DPCCH) quality and a serving grant for each carrier. 3. The method of claim 1, wherein the second UL power allocation technique is a power sensitive power allocation scheme in which power is allocated sequentially from best to worst carrier up to a serving grant for each carrier. 4. The method of claim 1, wherein a data allocation scheme is selected based at least in part on the selected UL power allocation technique. 5. The method of claim 1, wherein the choice of the UL power allocation technique to select is received from the communication network. 6. The method of claim 1, wherein the selection of the UL power allocation technique occurs at the UE. 7. The method of claim 1, wherein the selection of the UL power allocation technique is based at least in part on at least one of a network configured parameter; a function of a difference in dedicated physical control channel (DPCCH) power level between two carriers; a function of absolute DPCCH power level for one or both carriers; a function of a difference in a serving grant between the two carriers; a function of absolute serving grants for the two carriers; a function of a power status at the UE; and a function of measured downlink (DL) quality for DL carriers corresponding to UL carriers. 8.
The method of claim 1, wherein when the UE is operating in dual band dual cell high speed uplink packet access (DB-DC-HSUPA) mode and a second carrier is active, the second UL power allocation technique is selected when a serving grant is below a predetermined threshold, otherwise the first UL power allocation technique is selected. 9. The method of claim 8, wherein the predetermined threshold is determined by the network and communicated to the UE via at least one radio resource control (RRC) message. 10. The method of claim 1, wherein when the UE is operating in dual band dual cell high speed uplink packet access (DB-DC-HSUPA) mode and a second carrier is active, the second UL power allocation technique is selected when a serving grant is below a predetermined threshold and the UE is power limited, otherwise the first UL power allocation technique is selected. 11. The method of claim 10, wherein the predetermined threshold is determined by the network and communicated to the UE via at least one radio resource control (RRC) message. 12. A user equipment (UE) in a communication network in which uplink (UL) power allocation is controlled, the UE comprising: a processor configured to select between at least a first UL power allocation technique and a second UL power allocation technique for use in the UE; and the processor configured to use the selected UL power allocation technique in the UE to transmit uplink data by allocating transmit power between at least two carriers on which the uplink data is transmitted. 13. The UE of claim 12, wherein the first UL power allocation technique is a parallel split power allocation which is determined based on dedicated physical control channel (DPCCH) quality and a serving grant for each carrier. 14. The UE of claim 12, wherein the second UL power allocation technique is a power sensitive power allocation scheme in which power is allocated sequentially from best to worst carrier up to a serving grant for each carrier. 15. 
The UE of claim 12, wherein a data allocation scheme is selected based at least in part on the selected UL power allocation technique. 16. The UE of claim 12, wherein the choice of the UL power allocation technique to select is received from the communication network. 17. The UE of claim 12, wherein the selection of the UL power allocation technique occurs at the UE. 18. The UE of claim 12, wherein the selection of the UL power allocation technique is based at least in part on at least one of a network configured parameter; a function of a difference in dedicated physical control channel (DPCCH) power level between two carriers; a function of absolute DPCCH power level for one or both carriers; a function of a difference in a serving grant between the two carriers; a function of absolute serving grants for the two carriers; a function of a power status at the UE; and a function of measured downlink (DL) quality for DL carriers corresponding to UL carriers. 19. The UE of claim 12, wherein when the UE is operating in dual band dual cell high speed uplink packet access (DB-DC-HSUPA) mode and a second carrier is active, the second UL power allocation technique is selected when a serving grant is below a predetermined threshold, otherwise the first UL power allocation technique is selected. 20. The UE of claim 19, wherein the predetermined threshold is determined by the network and communicated to the UE via at least one radio resource control (RRC) message. 21. The UE of claim 12, wherein when the UE is operating in dual band dual cell high speed uplink packet access (DB-DC-HSUPA) mode and a second carrier is active, the second UL power allocation technique is selected when a serving grant is below a predetermined threshold and the UE is power limited, otherwise the first UL power allocation technique is selected. 22.
The UE of claim 21, wherein the predetermined threshold is determined by the network and communicated to the UE via at least one radio resource control (RRC) message. 23-26. (canceled)
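The selection rule in claims 8, 10, 19, and 21, plus the "best to worst carrier up to a serving grant" scheme of claim 3, can be sketched as follows. This is a simplified model, not the claimed implementation: carrier quality, grants, and power are abstract numbers, and the threshold comparison stands in for the RRC-configured parameter:

```python
def choose_technique(db_dc_hsupa, second_carrier_active, serving_grant,
                     threshold, power_limited, require_power_limited=False):
    """Select technique 2 (power-sensitive sequential allocation) when in
    DB-DC-HSUPA mode with a second active carrier and the serving grant is
    below the network-configured threshold; optionally also require the UE
    to be power limited (claims 10/21). Otherwise select technique 1."""
    if db_dc_hsupa and second_carrier_active and serving_grant < threshold:
        if not require_power_limited or power_limited:
            return 2
    return 1

def sequential_allocation(total_power, carriers):
    """Technique 2 sketch: allocate power from best to worst carrier,
    each up to its serving grant. carriers: (name, quality, grant)."""
    alloc, remaining = {}, total_power
    for name, quality, grant in sorted(carriers, key=lambda c: -c[1]):
        alloc[name] = min(grant, remaining)
        remaining -= alloc[name]
    return alloc
```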
2,600
10,136
10,136
15,368,065
2,627
A change in position of electronic smart glasses is detected by a sensor device as long as the electronic smart glasses are arranged in a detection area of the sensor device. A control device drives a display device of the electronic smart glasses in dependence on the detected change in position. An adjusting device changes alignment of the sensor device in correspondence with the detected change in position of the electronic smart glasses in such a manner that the detection area of the sensor device follows the change in position of the electronic smart glasses.
1. A display system, comprising: electronic smart glasses having a display device; a sensor device configured to detect a change in position of the electronic smart glasses as long as the electronic smart glasses are arranged in a detection area of the sensor device; a control device configured to drive the display device of the electronic smart glasses in dependence on the change in position detected by the sensor device; and an adjusting device configured to change an alignment of the sensor device in correspondence with the change in position of the electronic smart glasses and thereby cause the detection area of the sensor device to follow the change in position of the electronic smart glasses. 2. A display system according to claim 1, wherein the sensor device is supported rotatably at least around one axis of rotation, and wherein the adjusting device is configured to tilt the sensor device around the axis of rotation. 3. A display system according to claim 2, wherein the sensor device is supported to be translatorally immobile. 4. A display system according to claim 3, further comprising at least one light-emitting diode disposed at the electronic smart glasses, and wherein the sensor device comprises an infrared sensor configured to detect the at least one light-emitting diode and to determine the position of the electronic smart glasses based on detection of the at least one light-emitting diode. 5. A display system according to claim 4, wherein the sensor device has a single sensor detecting the position of the electronic smart glasses. 6. A display system according to claim 5, wherein the electronic smart glasses are augmented-reality glasses. 7. A display system according to claim 5, wherein the electronic smart glasses are virtual-reality glasses. 8. A display system according to claim 1, wherein the sensor device is supported to be translatorally immobile. 9. 
A display system according to claim 1, further comprising at least one light-emitting diode disposed at the electronic smart glasses, and wherein the sensor device comprises an infrared sensor configured to detect the at least one light-emitting diode and to determine the position of the electronic smart glasses based on detection of the at least one light-emitting diode. 10. A display system according to claim 1, wherein the sensor device has a single sensor detecting the position of the electronic smart glasses. 11. A display system according to claim 1, wherein the electronic smart glasses are augmented-reality glasses. 12. A display system according to claim 1, wherein the electronic smart glasses are virtual-reality glasses. 13. A method for operating a display system, comprising: detecting, by a sensor device, a change in position of electronic smart glasses arranged in a detection area of the sensor device; driving, by a control device, a display device of the electronic smart glasses in dependence on the change in position detected by the sensor device; and aligning the sensor device by an adjusting device in correspondence with the change in position of the electronic smart glasses detected by the sensor device, so that the detection area of the sensor device follows the change in position of the electronic smart glasses. 14. A method for operating a display system according to claim 13, wherein the sensor device is supported rotatably at least around one axis of rotation, and wherein said aligning of the sensor device includes tilting the sensor device by the adjusting device around the axis of rotation. 15. A method for operating a display system according to claim 13, wherein said detecting of the change in position of the electronic smart glasses includes detecting, by an infrared sensor, at least one light-emitting diode disposed at the electronic smart glasses.
A change in position of electronic smart glasses is detected by a sensor device as long as the electronic smart glasses are arranged in a detection area of the sensor device. A control device drives a display device of the electronic smart glasses in dependence on the detected change in position. An adjusting device changes alignment of the sensor device in correspondence with the detected change in position of the electronic smart glasses in such a manner that the detection area of the sensor device follows the change in position of the electronic smart glasses.1. A display system, comprising: electronic smart glasses having a display device; a sensor device configured to detect a change in position of the electronic smart glasses as long as the electronic smart glasses are arranged in a detection area of the sensor device; a control device configured to drive the display device of the electronic smart glasses in dependence on the change in position detected by the sensor device; and an adjusting device configured to change an alignment of the sensor device in correspondence with the change in position of the electronic smart glasses and thereby cause the detection area of the sensor device to follow the change in position of the electronic smart glasses. 2. A display system according to claim 1, wherein the sensor device is supported rotatably at least around one axis of rotation, and wherein the adjusting device is configured to tilt the sensor device around the axis of rotation. 3. A display system according to claim 2, wherein the sensor device is supported to be translatorally immobile. 4. A display system according to claim 3, further comprising at least one light-emitting diode disposed at the electronic smart glasses, and wherein the sensor device comprises an infrared sensor configured to detect the at least one light-emitting diode and to determine the position of the electronic smart glasses based on detection of the at least one light-emitting diode. 
5. A display system according to claim 4, wherein the sensor device has a single sensor detecting the position of the electronic smart glasses. 6. A display system according to claim 5, wherein the electronic smart glasses are augmented-reality glasses. 7. A display system according to claim 5, wherein the electronic smart glasses are virtual-reality glasses. 8. A display system according to claim 1, wherein the sensor device is supported to be translatorally immobile. 9. A display system according to claim 1, further comprising at least one light-emitting diode disposed at the electronic smart glasses, and wherein the sensor device comprises an infrared sensor configured to detect the at least one light-emitting diode and to determine the position of the electronic smart glasses based on detection of the at least one light-emitting diode. 10. A display system according to claim 1, wherein the sensor device has a single sensor detecting the position of the electronic smart glasses. 11. A display system according to claim 1, wherein the electronic smart glasses are augmented-reality glasses. 12. A display system according to claim 1, wherein the electronic smart glasses are virtual-reality glasses. 13. A method for operating a display system, comprising: detecting, by a sensor device, a change in position of electronic smart glasses arranged in a detection area of the sensor device; driving, by a control device, a display device of the electronic smart glasses in dependence on the change in position detected by the sensor device; and aligning the sensor device by an adjusting device in correspondence with the change in position of the electronic smart glasses detected by the sensor device, so that the detection area of the sensor device follows the change in position of the electronic smart glasses. 14. 
A method for operating a display system according to claim 13, wherein the sensor device is supported rotatably at least around one axis of rotation, and wherein said aligning of the sensor device includes tilting the sensor device by the adjusting device around the axis of rotation. 15. A method for operating a display system according to claim 13, wherein said detecting of the change in position of the electronic smart glasses includes detecting, by an infrared sensor, at least one light-emitting diode disposed at the electronic smart glasses.
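One way the adjusting device of claim 1 could keep the rotatably supported sensor pointed at the glasses is to recompute pan and tilt angles from each detected position. The geometry, coordinate convention, and names below are illustrative assumptions only; the claims do not specify them.

```python
import math

# Hypothetical sketch: given the glasses' last detected position relative
# to the sensor, compute the pan/tilt that re-centres the detection area
# on the glasses, so the detection area follows the change in position.

def sensor_angles(glasses_pos):
    """glasses_pos: (x, y, z) of the smart glasses in the sensor's frame
    (z forward, x right, y up). Returns (pan, tilt) in radians."""
    x, y, z = glasses_pos
    pan = math.atan2(x, z)                  # rotation about the vertical axis
    tilt = math.atan2(y, math.hypot(x, z))  # elevation toward the glasses
    return pan, tilt
```

Glasses straight ahead give zero angles; glasses displaced sideways yield a pure pan command for the adjusting device.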
2,600
10,137
10,137
14,585,272
2,694
Systems and methods of security are provided, including a sensor to detect a location of at least one user, and generate detection data according to the detected location of the at least one user, a processor communicatively coupled to the sensor to receive the detection data, to determine whether the at least one user is occupying a building according to the detection data, and to store allowance data that sets one or more preferences for the at least one user, and an alarm device, communicatively coupled to at least the processor, that is armed or disarmed by the processor according to the allowance data and the determination as to whether the at least one user is occupying the building.
1. A security system comprising: a sensor to detect a location of at least one user, and generate detection data according to the detected location of the at least one user; a processor communicatively coupled to the sensor to receive the detection data, to determine whether the at least one user is occupying a building according to the detection data, and to store allowance data that sets one or more preferences for the at least one user; and an alarm device, communicatively coupled to at least the processor, that is armed or disarmed by the processor according to the allowance data and the determination as to whether the at least one user is occupying the building. 2. The system of claim 1, wherein the processor has a preset timer that is activated to count down when the at least one user has exited the building. 3. The system of claim 2, further comprising: a display that is coupled to at least one of the group consisting of the processor and the alarm device to display the countdown of the timer. 4. The system of claim 1, wherein the sensor is a motion sensor that is disabled by the processor when the at least one user is occupying the building. 5. The system of claim 1, wherein the sensor is one from the group consisting of: a door sensor and a window sensor, and wherein the processor determines whether a respective door or window is opened from inside the building. 6. The system of claim 5, wherein the processor controls the alarm device to refrain from activation when the door or window is opened from inside the building. 7. The system of claim 1, wherein the processor sets one or more items selected from a group consisting of: doors and windows for entry or exit after arming of the alarm device. 8. The system of claim 7, wherein the one or more doors are those that open from inside the building to outside the building. 9. The system of claim 1, wherein a failsafe period is set by the processor to arm the alarm device when the failsafe period is expired. 10. 
The system of claim 9, wherein the processor reduces the failsafe period when at least one of the sensor and the processor determines that the at least one user is outside the building. 11. The system of claim 1, wherein the processor configures the allowance data to allow the at least one user to exit through one or more preset doors without activating the alarm device. 12. The system of claim 1, wherein the processor configures the allowance data to allow motion within the building without activating the alarm device. 13. The system of claim 1, wherein the processor configures the allowance data so as to allow window opening without activating the alarm device and to provide a notification to the user. 14. The system of claim 1, wherein the processor configures the allowance data so that the sensor detects security events from a perimeter of the building. 15. The system of claim 1, wherein the processor configures the allowance data so that when the at least one user is occupying the building, one or more other users are permitted to enter the building or be within a preset distance of the building without activating the alarm device. 16. The system of claim 15, wherein the one or more other users are users that are registered with the processor. 17. The system of claim 1, wherein the processor configures the allowance data so as to allow the at least one user to exit the building without activating the alarm device. 18. 
A method comprising: detecting, by a sensor, a location of at least one user, and generating detection data according to the detected location of the at least one user; receiving, by a processor communicatively coupled to the sensor, the detection data, determining whether the at least one user is occupying a building according to the detection data, and storing allowance data that sets one or more preferences for the at least one user; and arming an alarm device that is communicatively coupled to the processor according to the allowance data and the determination as to whether the at least one user is occupying the building. 19. The method of claim 18, further comprising: activating a preset timer to count down when the at least one user has exited the building. 20. The method of claim 19, further comprising: displaying, on a display coupled to the alarm device, the countdown of the timer. 21. The method of claim 18, further comprising: determining, with the processor, whether a respective door or window is opened from inside the building according to motion data received from the sensor. 22. The method of claim 21, further comprising: controlling, by the processor, the alarm device to refrain from activation when the door or window is opened from inside the building. 23. The method of claim 18, further comprising: configuring the allowance data, by the processor, so as to allow the at least one user to exit the building without activating the alarm device. 24. The method of claim 18, further comprising: setting, by the processor, a failsafe period to arm the alarm device when the failsafe period is expired. 25. The method of claim 24, further comprising: reducing, by the processor, the failsafe period when the sensor or the processor determines that the at least one user is outside the building. 26. 
The method of claim 18, further comprising: configuring, by the processor, the allowance data to allow the at least one user to exit through one or more preset doors without activating the alarm device. 27. The method of claim 18, further comprising: configuring, by the processor, the allowance data to allow motion within the building without activating the alarm device. 28. The method of claim 18, further comprising: configuring, by the processor, the allowance data so as to allow window opening without activating the alarm device and providing a notification to the user. 29. The method of claim 18, further comprising: configuring, with the processor, the allowance data so that the sensor detects security events from a perimeter of the building. 30. The method of claim 18, further comprising: configuring, with the processor, the allowance data so that when the at least one user is occupying the building, permitting one or more other users to enter the building or be within a preset distance of the building without activating the alarm device. 31. The method of claim 30, wherein the one or more other users are users that are registered with the processor.
Systems and methods of security are provided, including a sensor to detect a location of at least one user, and generate detection data according to the detected location of the at least one user, a processor communicatively coupled to the sensor to receive the detection data, to determine whether the at least one user is occupying a building according to the detection data, and to store allowance data that sets one or more preferences for the at least one user, and an alarm device, communicatively coupled to at least the processor, that is armed or disarmed by the processor according to the allowance data and the determination as to whether the at least one user is occupying the building.1. A security system comprising: a sensor to detect a location of at least one user, and generate detection data according to the detected location of the at least one user; a processor communicatively coupled to the sensor to receive the detection data, to determine whether the at least one user is occupying a building according to the detection data, and to store allowance data that sets one or more preferences for the at least one user; and an alarm device, communicatively coupled to at least the processor, that is armed or disarmed by the processor according to the allowance data and the determination as to whether the at least one user is occupying the building. 2. The system of claim 1, wherein the processor has a preset timer that is activated to count down when the at least one user has exited the building. 3. The system of claim 2, further comprising: a display that is coupled to at least one of the group consisting of the processor and the alarm device to display the countdown of the timer. 4. The system of claim 1, wherein the sensor is a motion sensor that is disabled by the processor when the at least one user is occupying the building. 5. 
The system of claim 1, wherein the sensor is one from the group consisting of: a door sensor and a window sensor, and wherein the processor determines whether a respective door or window is opened from inside the building. 6. The system of claim 5, wherein the processor controls the alarm device to refrain from activation when the door or window is opened from inside the building. 7. The system of claim 1, wherein the processor sets one or more items selected from a group consisting of: doors and windows for entry or exit after arming of the alarm device. 8. The system of claim 7, wherein the one or more doors are those that open from inside the building to outside the building. 9. The system of claim 1, wherein a failsafe period is set by the processor to arm the alarm device when the failsafe period is expired. 10. The system of claim 9, wherein the processor reduces the failsafe period when at least one of the sensor and the processor determines that the at least one user is outside the building. 11. The system of claim 1, wherein the processor configures the allowance data to allow the at least one user to exit through one or more preset doors without activating the alarm device. 12. The system of claim 1, wherein the processor configures the allowance data to allow motion within the building without activating the alarm device. 13. The system of claim 1, wherein the processor configures the allowance data so as to allow window opening without activating the alarm device and to provide a notification to the user. 14. The system of claim 1, wherein the processor configures the allowance data so that the sensor detects security events from a perimeter of the building. 15. The system of claim 1, wherein the processor configures the allowance data so that when the at least one user is occupying the building, one or more other users are permitted to enter the building or be within a preset distance of the building without activating the alarm device. 16. 
The system of claim 15, wherein the one or more other users are users that are registered with the processor. 17. The system of claim 1, wherein the processor configures the allowance data so as to allow the at least one user to exit the building without activating the alarm device. 18. A method comprising: detecting, by a sensor, a location of at least one user, and generating detection data according to the detected location of the at least one user; receiving, by a processor communicatively coupled to the sensor, the detection data, determining whether the at least one user is occupying a building according to the detection data, and storing allowance data that sets one or more preferences for the at least one user; and arming an alarm device that is communicatively coupled to the processor according to the allowance data and the determination as to whether the at least one user is occupying the building. 19. The method of claim 18, further comprising: activating a preset timer to count down when the at least one user has exited the building. 20. The method of claim 19, further comprising: displaying, on a display coupled to the alarm device, the countdown of the timer. 21. The method of claim 18, further comprising: determining, with the processor, whether a respective door or window is opened from inside the building according to motion data received from the sensor. 22. The method of claim 21, further comprising: controlling, by the processor, the alarm device to refrain from activation when the door or window is opened from inside the building. 23. The method of claim 18, further comprising: configuring the allowance data, by the processor, so as to allow the at least one user to exit the building without activating the alarm device. 24. The method of claim 18, further comprising: setting, by the processor, a failsafe period to arm the alarm device when the failsafe period is expired. 25. 
The method of claim 24, further comprising: reducing, by the processor, the failsafe period when the sensor or the processor determines that the at least one user is outside the building. 26. The method of claim 18, further comprising: configuring, by the processor, the allowance data to allow the at least one user to exit through one or more preset doors without activating the alarm device. 27. The method of claim 18, further comprising: configuring, by the processor, the allowance data to allow motion within the building without activating the alarm device. 28. The method of claim 18, further comprising: configuring, by the processor, the allowance data so as to allow window opening without activating the alarm device and providing a notification to the user. 29. The method of claim 18, further comprising: configuring, with the processor, the allowance data so that the sensor detects security events from a perimeter of the building. 30. The method of claim 18, further comprising: configuring, with the processor, the allowance data so that when the at least one user is occupying the building, permitting one or more other users to enter the building or be within a preset distance of the building without activating the alarm device. 31. The method of claim 30, wherein the one or more other users are users that are registered with the processor.
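The failsafe behaviour of claims 24-25 (a failsafe period that arms the alarm once expired, and that is reduced when the user is determined to be outside) can be sketched as below. Function names, time units, and the reduction parameter are illustrative assumptions.

```python
# Hypothetical sketch of the failsafe arming logic in claims 24-25.

def failsafe_remaining(elapsed, failsafe_period, user_outside, reduction):
    """Seconds left before the alarm self-arms. The period is reduced
    by `reduction` once the user is determined to be outside (claim 25)."""
    period = failsafe_period - (reduction if user_outside else 0)
    return max(0, period - elapsed)

def should_arm(elapsed, failsafe_period, user_outside, reduction=0):
    """True when the (possibly reduced) failsafe period has expired."""
    return failsafe_remaining(elapsed, failsafe_period,
                              user_outside, reduction) == 0
```

For example, 30 s into a 60 s period the alarm stays disarmed, but if the user is detected outside and the period is reduced by 40 s, the remaining 20 s have already elapsed and the alarm arms.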
2,600
10,138
10,138
14,992,974
2,651
Systems and methods for detecting a tampering and identifying a location of a digital recording are provided. A frequency sequence and a phase angle sequence may be extracted from the digital recording. A portion of the frequency sequence may be matched to one of a plurality of reference frequency sequences, and a portion of the phase angle sequence may be matched to one of a plurality of reference phase angle sequences. Tampering of the digital recording may be detected when the frequency and phase sequences differ from the matched reference sequences. Moreover, a noise sequence may be extracted from the extracted frequency sequence. A location of the digital recording may be identified by matching the noise sequence to one of a plurality of noise sequences of the plurality of reference frequency sequences.
1. A method of detecting a tampering and identifying a location of a digital recording, comprising: extracting a frequency sequence and a phase angle sequence from the digital recording; matching a portion of the frequency sequence to one of a plurality of reference frequency sequences, and a portion of the phase angle sequence to one of a plurality of reference phase angle sequences; detecting the tampering of the digital recording when the frequency sequence differs from the matched reference frequency sequence and the phase angle sequence differs from the matched reference phase angle sequence; extracting a noise sequence from the frequency sequence; and identifying the location of the digital recording by finding a match between the noise sequence and one of a plurality of noise sequences of the plurality of reference frequency sequences. 2. The method of claim 1, wherein the extracting the frequency sequence and the phase angle sequence from the digital recording comprises using a short-time Fourier transform. 3. The method of claim 1, wherein the matching the portion of the frequency sequence to one of the plurality of reference frequency sequences comprises: computing a mean square error between the portion of the frequency sequence and each of the plurality of reference frequency sequences; and selecting one of the plurality of reference frequency sequences when a corresponding mean square error is less than a predetermined threshold. 4. The method of claim 1, wherein the matching the portion of the phase angle sequence to one of the plurality of reference phase angle sequences comprises: obtaining a starting time from the matching the portion of the frequency sequence to one of a plurality of reference frequency sequences; and selecting one of the plurality of reference phase angle sequences corresponding to the matched reference frequency sequence. 5. 
The method of claim 1, wherein the detecting the tampering of the digital recording comprises detecting a deletion of a portion of the digital recording. 6. The method of claim 5, wherein the deletion of a portion of the digital recording is detected when the frequency sequence and the phase angle sequence each includes one spike when compared to the matched reference frequency sequence and the matched reference phase angle sequence, respectively. 7. The method of claim 1, wherein the detecting the tampering of the digital recording comprises detecting a replacement of a portion of the digital recording. 8. The method of claim 7, wherein the replacement of a portion of the digital recording is detected when the frequency sequence and the phase angle sequence each includes two spikes when compared to the matched reference frequency sequence and the matched reference phase angle sequence, respectively. 9. The method of claim 1, wherein the extracting the noise sequence comprises: computing a median of the frequency sequence and the reference frequency sequences; and subtracting the median from the frequency sequence. 10. The method of claim 1, wherein the detecting the location of the digital recording comprises: performing a discrete Fourier transform on the noise sequence to generate a frequency spectrum; and inputting the frequency spectrum into a neural network to match a frequency spectrum of one of the reference frequency sequences. 11. 
A system, comprising: at least one electric network; a plurality of sensors to measure a reference frequency sequence and a reference phase angle sequence for each of a plurality of locations in the at least one electric network; and a computer system including at least one processor and at least one storage device storing the reference frequency sequences, the reference phase angle sequences, and instructions adapted to be executed by the at least one processor to perform operations comprising: extracting a frequency sequence and a phase angle sequence from a digital recording; matching a portion of the frequency sequence to one of the reference frequency sequences, and a portion of the phase angle sequence to one of the reference phase angle sequences; detecting a tampering of the digital recording when the frequency sequence differs from the matched reference frequency sequence and the phase angle sequence differs from the matched reference phase angle sequence; extracting a noise sequence from the frequency sequence; and identifying a location of the digital recording by finding a match between the noise sequence and one of a plurality of noise sequences of the plurality of reference frequency sequences. 12. The system of claim 11, wherein the extracting the frequency sequence and the phase angle sequence from the digital recording comprises using a short-time Fourier transform. 13. The system of claim 11, wherein the matching the portion of the frequency sequence to one of the plurality of reference frequency sequences comprises: computing a mean square error between the portion of the frequency sequence and each of the plurality of reference frequency sequences; and selecting one of the plurality of reference frequency sequences when a corresponding mean square error is less than a predetermined threshold. 14. 
The system of claim 11, wherein the matching the portion of the phase angle sequence to one of the plurality of reference phase angle sequences comprises: obtaining a starting time from the matching the portion of the frequency sequence to one of a plurality of reference frequency sequences; and selecting one of the plurality of reference phase angle sequences corresponding to the matched reference frequency sequence. 15. The system of claim 11, wherein the detecting the tampering of the digital recording comprises detecting a deletion of a portion of the digital recording. 16. The system of claim 11, wherein the detecting the tampering of the digital recording comprises detecting a replacement of a portion of the digital recording. 17. The system of claim 11, wherein the extracting the noise sequence comprises: computing a median of the frequency sequence and the reference frequency sequences; and subtracting the median from the frequency sequence. 18. The system of claim 11, wherein the detecting the location of the digital recording comprises: performing a discrete Fourier transform on the noise sequence to generate a frequency spectrum; and inputting the frequency spectrum into a neural network to match a frequency spectrum of one of the reference frequency sequences.
Systems and methods for detecting a tampering and identifying a location of a digital recording are provided. A frequency sequence and a phase angle sequence may be extracted from the digital recording. A portion of the frequency sequence may be matched to one of a plurality of reference frequency sequences, and a portion of the phase angle sequence may be matched to one of a plurality of reference phase angle sequences. Tampering of the digital recording may be detected when the frequency and phase sequences differ from the matched reference sequences. Moreover, a noise sequence may be extracted from the extracted frequency sequence. A location of the digital recording may be identified by matching the noise sequence to one of a plurality of noise sequences of the plurality of reference frequency sequences.1. A method of detecting a tampering and identifying a location of a digital recording, comprising: extracting a frequency sequence and a phase angle sequence from the digital recording; matching a portion of the frequency sequence to one of a plurality of reference frequency sequences, and a portion of the phase angle sequence to one of a plurality of reference phase angle sequences; detecting the tampering of the digital recording when the frequency sequence differs from the matched reference frequency sequence and the phase angle sequence differs from the matched reference phase angle sequence; extracting a noise sequence from the frequency sequence; and identifying the location of the digital recording by finding a match between the noise sequence and one of a plurality of noise sequences of the plurality of reference frequency sequences. 2. The method of claim 1, wherein the extracting the frequency sequence and the phase angle sequence from the digital recording comprises using a short-time Fourier transform. 3. 
The method of claim 1, wherein the matching the portion of the frequency sequence to one of the plurality of reference frequency sequences comprises: computing a mean square error between the portion of the frequency sequence and each of the plurality of reference frequency sequences; and selecting one of the plurality of reference frequency sequences when a corresponding mean square error is less than a predetermined threshold. 4. The method of claim 1, wherein the matching the portion of the phase angle sequence to one of the plurality of reference phase angle sequences comprises: obtaining a starting time from the matching the portion of the frequency sequence to one of a plurality of reference frequency sequences; and selecting one of the plurality of reference phase angle sequences corresponding to the matched reference frequency sequence. 5. The method of claim 1, wherein the detecting the tampering of the digital recording comprises detecting a deletion of a portion of the digital recording. 6. The method of claim 5, wherein the deletion of a portion of the digital recording is detected when the frequency sequence and the phase angle sequence each includes one spike when compared to the matched reference frequency sequence and the matched reference phase angle sequence, respectively. 7. The method of claim 1, wherein the detecting the tampering of the digital recording comprises detecting a replacement of a portion of the digital recording. 8. The method of claim 7, wherein the replacement of a portion of the digital recording is detected when the frequency sequence and the phase angle sequence each includes two spikes when compared to the matched reference frequency sequence and the matched reference phase angle sequence, respectively. 9. The method of claim 1, wherein the extracting the noise sequence comprises: computing a median of the frequency sequence and the reference frequency sequences; and subtracting the median from the frequency sequence. 10. 
The method of claim 1, wherein the detecting the location of the digital recording comprises: performing a discrete Fourier transform on the noise sequence to generate a frequency spectrum; and inputting the frequency spectrum into a neural network to match a frequency spectrum of one of the reference frequency sequences. 11. A system, comprising: at least one electric network; a plurality of sensors to measure a reference frequency sequence and a reference phase angle sequence for each of a plurality of locations in the at least one electric network; and a computer system including at least one processor and at least one storage device storing the reference frequency sequences, the reference phase angle sequences, and instructions adapted to be executed by the at least one processor to perform operations comprising: extracting a frequency sequence and a phase angle sequence from a digital recording; matching a portion of the frequency sequence to one of the reference frequency sequences, and a portion of the phase angle sequence to one of the reference phase angle sequences; detecting a tampering of the digital recording when the frequency sequence differs from the matched reference frequency sequence and the phase angle sequence differs from the matched reference phase angle sequence; extracting a noise sequence from the frequency sequence; and identifying a location of the digital recording by finding a match between the noise sequence and one of a plurality of noise sequences of the plurality of reference frequency sequences. 12. The system of claim 11, wherein the extracting the frequency sequence and the phase angle sequence from the digital recording comprises using a short-time Fourier transform. 13. 
The system of claim 11, wherein the matching the portion of the frequency sequence to one of the plurality of reference frequency sequences comprises: computing a mean square error between the portion of the frequency sequence and each of the plurality of reference frequency sequences; and selecting one of the plurality of reference frequency sequences when a corresponding mean square error is less than a predetermined threshold. 14. The system of claim 11, wherein the matching the portion of the phase angle sequence to one of the plurality of reference phase angle sequences comprises: obtaining a starting time from the matching the portion of the frequency sequence to one of a plurality of reference frequency sequences; and selecting one of the plurality of reference phase angle sequences corresponding to the matched reference frequency sequence. 15. The system of claim 11, wherein the detecting the tampering of the digital recording comprises detecting a deletion of a portion of the digital recording. 16. The system of claim 11, wherein the detecting the tampering of the digital recording comprises detecting a replacement of a portion of the digital recording. 17. The system of claim 11, wherein the extracting the noise sequence comprises: computing a median of the frequency sequence and the reference frequency sequences; and subtracting the median from the frequency sequence. 18. The system of claim 11, wherein the detecting the location of the digital recording comprises: performing a discrete Fourier transform on the noise sequence to generate a frequency spectrum; and inputting the frequency spectrum into a neural network to match a frequency spectrum of one of the reference frequency sequences.
2,600
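The matching and noise-extraction steps recited in the claims above (computing a mean square error against each reference frequency sequence and selecting one below a predetermined threshold; subtracting a median to leave a noise sequence) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the sliding-window search, function names, and data layout are all hypothetical.

```python
import statistics

def match_frequency_sequence(portion, references, threshold):
    """Slide `portion` over each reference sequence, compute the mean square
    error for every aligned window, and return (reference index, starting
    time offset, mse) for the best window whose MSE is below `threshold`,
    or None when no reference matches."""
    best = None
    for idx, ref in enumerate(references):
        for start in range(len(ref) - len(portion) + 1):
            window = ref[start:start + len(portion)]
            mse = sum((a - b) ** 2 for a, b in zip(portion, window)) / len(portion)
            if mse < threshold and (best is None or mse < best[2]):
                best = (idx, start, mse)
    return best

def extract_noise_sequence(freq_seq, references):
    """Compute the point-wise median of the recording's frequency sequence
    and the reference sequences, then subtract it from the frequency
    sequence, leaving the noise component used for location matching."""
    return [
        freq_seq[i] - statistics.median([freq_seq[i]] + [r[i] for r in references])
        for i in range(len(freq_seq))
    ]
```

The returned starting offset is what the phase-angle claims reuse: once a frequency window matches, the corresponding reference phase angle sequence is selected at the same starting time.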
10,139
10,139
13,985,292
2,646
A method and apparatus for reducing power consumption in a base station of a cordless handset supporting a subset of a plurality of carrier frequencies supported by the base station, the method comprising disconnecting power to a receiver of the base station for carrier frequencies that are not supported by the cordless handset, thereby reducing power consumption by the base station.
1. A method for reducing power consumption in a base station of a cordless handset, the method comprising: powering a receiver of the base station for a carrier frequency supported by the base station and the cordless handset; and disconnecting power to a receiver of the base station for a carrier frequency that is supported by the base station and is not supported by the cordless handset. 2. The method according to claim 1, wherein the disconnecting power is for a predefined time. 3. The method according to claim 2, wherein the predefined time is sufficient for allowing subsequent listening by the base station to the cordless handset through a carrier frequency supported by the cordless handset. 4. The method according to claim 1 comprising informing the cordless handset that the base station uses carrier frequencies which the cordless handset can support in addition to carrier frequencies that the cordless handset can not support. 5. The method according to claim 1, comprising setting parts of the base station to lower power for the carrier frequency that is supported by the base station and is not supported by the cordless handset. 6. An apparatus for reducing power consumption in a base station of a cordless handset, the apparatus comprising: a base station supporting a plurality of carrier frequencies and having a receiver configured to receive a transmission from the cordless handset supporting a subset of the plurality of carrier frequencies, wherein the base station further comprises a controller that is configured for turning off power to the receiver for carrier frequencies that are of the plurality of carrier frequencies and are not supported by the cordless handset and for connecting the receiver to the power for a carrier frequency that belongs to the subset of the plurality of carrier frequencies. 7. The apparatus according to claim 6, further adapted to turn off wireless communication of the base station. 8. 
The apparatus according to claim 6, that is arranged to inform the cordless handset that the base station uses carrier frequencies which the cordless handset can support in addition to carrier frequencies that the cordless handset can not support. 9. The apparatus according to claim 6 that is arranged to set parts of the base station to lower power for carrier frequencies that belong to the plurality of carrier frequencies and are not supported by the cordless handset. 10. The method according to claim 4 further comprising turning off wireless communications for the carrier frequencies that the cordless handset can not support thereby allowing the cordless handset to assume that the carrier frequencies that are not supported by the cordless handset are in use by the base station. 11. The method according to claim 1 wherein the powering and the disconnecting are preceded by: selecting a next carrier frequency to be used by the base station; determining whether the next carrier frequency is supported by both the base station and the handset; and executing one of the powering and the disconnecting in response to the determination. 12. The method according to claim 11 comprising periodically selecting the next carrier frequency out of a plurality of carrier frequencies supported by the base station. 13. The method according to claim 1 comprising powering the receiver of the base station for all carrier frequencies supported by the base station and the cordless handset; and disconnecting power to the receiver of the base station for all carrier frequencies that are supported by the base station and are not supported by the cordless handset. 14. The method according to claim 2 comprising informing the cordless handset that the base station uses carrier frequencies which the cordless handset can support in addition to carrier frequencies that the cordless handset does not support. 15. 
The apparatus according to claim 8 further arranged to turn off wireless communications for the carrier frequencies that are of the plurality of carrier frequencies and are not supported by the cordless handset thereby allowing the cordless handset to assume that the carrier frequencies that are not supported by the cordless handset are in use by the base station. 16. The apparatus according to claim 6 further arranged to select a next carrier frequency to be used by the base station; provide a determination whether the next carrier frequency is supported by both the base station and the handset; and wherein the controller is arranged to either turn off power to the receiver or connect the receiver to the power in response to the determination. 17. The apparatus according to claim 7 arranged to inform the cordless handset that the base station uses carrier frequencies which the cordless handset can support in addition to carrier frequencies that the cordless handset does not support.
A method and apparatus for reducing power consumption in a base station of a cordless handset supporting a subset of a plurality of carrier frequencies supported by the base station, the method comprising disconnecting power to a receiver of the base station for carrier frequencies that are not supported by the cordless handset, thereby reducing power consumption by the base station.1. A method for reducing power consumption in a base station of a cordless handset, the method comprising: powering a receiver of the base station for a carrier frequency supported by the base station and the cordless handset; and disconnecting power to a receiver of the base station for a carrier frequency that is supported by the base station and is not supported by the cordless handset. 2. The method according to claim 1, wherein the disconnecting power is for a predefined time. 3. The method according to claim 2, wherein the predefined time is sufficient for allowing subsequent listening by the base station to the cordless handset through a carrier frequency supported by the cordless handset. 4. The method according to claim 1 comprising informing the cordless handset that the base station uses carrier frequencies which the cordless handset can support in addition to carrier frequencies that the cordless handset can not support. 5. The method according to claim 1, comprising setting parts of the base station to lower power for the carrier frequency that is supported by the base station and is not supported by the cordless handset. 6. 
An apparatus for reducing power consumption in a base station of a cordless handset, the apparatus comprising: a base station supporting a plurality of carrier frequencies and having a receiver configured to receive a transmission from the cordless handset supporting a subset of the plurality of carrier frequencies, wherein the base station further comprises a controller that is configured for turning off power to the receiver for carrier frequencies that are of the plurality of carrier frequencies and are not supported by the cordless handset and for connecting the receiver to the power for a carrier frequency that belongs to the subset of the plurality of carrier frequencies. 7. The apparatus according to claim 6, further adapted to turn off wireless communication of the base station. 8. The apparatus according to claim 6, that is arranged to inform the cordless handset that the base station uses carrier frequencies which the cordless handset can support in addition to carrier frequencies that the cordless handset can not support. 9. The apparatus according to claim 6 that is arranged to set parts of the base station to lower power for carrier frequencies that belong to the plurality of carrier frequencies and are not supported by the cordless handset. 10. The method according to claim 4 further comprising turning off wireless communications for the carrier frequencies that the cordless handset can not support thereby allowing the cordless handset to assume that the carrier frequencies that are not supported by the cordless handset are in use by the base station. 11. The method according to claim 1 wherein the powering and the disconnecting are preceded by: selecting a next carrier frequency to be used by the base station; determining whether the next carrier frequency is supported by both the base station and the handset; and executing one of the powering and the disconnecting in response to the determination. 12. 
The method according to claim 11 comprising periodically selecting the next carrier frequency out of a plurality of carrier frequencies supported by the base station. 13. The method according to claim 1 comprising powering the receiver of the base station for all carrier frequencies supported by the base station and the cordless handset; and disconnecting power to the receiver of the base station for all carrier frequencies that are supported by the base station and are not supported by the cordless handset. 14. The method according to claim 2 comprising informing the cordless handset that the base station uses carrier frequencies which the cordless handset can support in addition to carrier frequencies that the cordless handset does not support. 15. The apparatus according to claim 8 further arranged to turn off wireless communications for the carrier frequencies that are of the plurality of carrier frequencies and are not supported by the cordless handset thereby allowing the cordless handset to assume that the carrier frequencies that are not supported by the cordless handset are in use by the base station. 16. The apparatus according to claim 6 further arranged to select a next carrier frequency to be used by the base station; provide a determination whether the next carrier frequency is supported by both the base station and the handset; and wherein the controller is arranged to either turn off power to the receiver or connect the receiver to the power in response to the determination. 17. The apparatus according to claim 7 arranged to inform the cordless handset that the base station uses carrier frequencies which the cordless handset can support in addition to carrier frequencies that the cordless handset does not support.
2,600
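The power-saving behavior claimed above reduces to a simple decision rule: power the receiver only for carrier frequencies supported by both the base station and the handset, and disconnect it for the rest. The sketch below illustrates that rule plus the claimed selection of a next carrier frequency; round-robin selection is an assumption standing in for the claim's "periodically selecting", and all names are hypothetical.

```python
def receiver_power_states(base_frequencies, handset_frequencies):
    """Map each carrier frequency supported by the base station to True
    (power the receiver) when the handset also supports it, or False
    (disconnect power, saving energy) when it does not."""
    handset = set(handset_frequencies)
    return {f: f in handset for f in base_frequencies}

def step(base_frequencies, handset_frequencies, current_index):
    """Select the next carrier frequency (round-robin, an assumption) and
    return (next index, frequency, receiver powered?) so the controller can
    execute either the powering or the disconnecting in response."""
    next_index = (current_index + 1) % len(base_frequencies)
    freq = base_frequencies[next_index]
    return next_index, freq, freq in set(handset_frequencies)
```

In the claims, the disconnection lasts a predefined time chosen so the base station can still listen to the handset afterwards on a supported carrier frequency; that timing is omitted here.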
10,140
10,140
15,156,304
2,648
A cover for a portable computing device includes a cover panel having a first portion of a solid silicone rubber sheet and a first stiffener panel that is fully encapsulated in the first portion of the solid silicone rubber sheet.
1. An article of manufacture, comprising: a seamless solid silicone rubber sheet; and a rigid member that is fully encapsulated in the seamless solid silicone rubber sheet. 2. The article of manufacture of claim 1, wherein the solid silicone rubber sheet comprises a thermally cured solid silicone rubber material. 3. The article of manufacture of claim 1, wherein the rigid member comprises a stiffener panel. 4. The article of manufacture of claim 1, further comprising a second rigid member that is fully encapsulated in the solid silicone rubber sheet and a flexible hinge region that is disposed between the rigid member and second rigid member and is formed from the seamless solid silicone rubber sheet. 5. A cover for a computing device, the cover comprising: a cover panel that includes: a first portion of a solid silicone rubber sheet, and a first stiffener panel that is fully encapsulated in the first portion of the solid silicone rubber sheet. 6. The cover of claim 5, wherein the solid silicone rubber sheet comprises a single seamless solid silicone rubber sheet. 7. The cover of claim 5, further comprising a second stiffener panel that is fully encapsulated in a second portion of the solid silicone rubber sheet. 8. The cover of claim 7, further comprising a first flexible hinge region that is disposed between the first stiffener panel and the second stiffener panel and is formed from the solid silicone rubber sheet. 9. The cover of claim 8, further comprising a hinge mechanism that is disposed in the first flexible hinge region. 10. The cover of claim 9, wherein the hinge mechanism is fully encapsulated in the solid silicone rubber sheet. 11. The cover of claim 7, further comprising a second cover panel that includes the second stiffener panel that is fully encapsulated in the second portion of the solid silicone rubber sheet. 12. 
The cover of claim 11, further comprising a first magnet that is fully encapsulated in the first portion of the solid silicone rubber sheet and a second magnet that is fully encapsulated in the second portion of the solid silicone rubber sheet, wherein the first magnet is positioned in the first solid silicone rubber sheet to align with the second magnet when the cover panel is folded along a first fold line. 13. The cover of claim 11, further comprising: a third stiffener panel that is fully encapsulated in a third portion of the solid silicone rubber sheet; and a second flexible hinge region that is disposed between the second stiffener panel and the third stiffener panel and is formed from the solid silicone rubber sheet. 14. The cover of claim 7, further comprising a first magnet that is fully encapsulated in the first portion of the solid silicone rubber sheet and a second magnet that is fully encapsulated in a different portion of the solid silicone rubber sheet, wherein the first magnet is positioned in the first solid silicone rubber sheet to align with the second magnet when the cover panel is folded in the first flexible hinge region. 15. The cover of claim 5, further comprising a first magnet that is fully encapsulated in the first portion of the solid silicone rubber sheet. 16. The cover of claim 15, wherein the first magnet is positioned in the first solid silicone rubber sheet to align with a coupling component disposed in the computing device to hold the cover in one of an open position or a closed position. 17. The cover of claim 16, wherein the coupling component comprises one of a ferro-magnetic material or a permanent magnet. 18. The cover of claim 5, wherein the solid silicone rubber sheet comprises a thermally cured solid silicone rubber material. 19. The cover of claim 5, further comprising a spine member that is fully encapsulated in the first portion of the solid silicone rubber sheet. 20. 
The cover of claim 19, wherein the spine member is connected to the first panel with a flexible hinge region formed from the solid silicone rubber sheet.
A cover for a portable computing device includes a cover panel having a first portion of a solid silicone rubber sheet and a first stiffener panel that is fully encapsulated in the first portion of the solid silicone rubber sheet.1. An article of manufacture, comprising: a seamless solid silicone rubber sheet; and a rigid member that is fully encapsulated in the seamless solid silicone rubber sheet. 2. The article of manufacture of claim 1, wherein the solid silicone rubber sheet comprises a thermally cured solid silicone rubber material. 3. The article of manufacture of claim 1, wherein the rigid member comprises a stiffener panel. 4. The article of manufacture of claim 1, further comprising a second rigid member that is fully encapsulated in the solid silicone rubber sheet and a flexible hinge region that is disposed between the rigid member and second rigid member and is formed from the seamless solid silicone rubber sheet. 5. A cover for a computing device, the cover comprising: a cover panel that includes: a first portion of a solid silicone rubber sheet, and a first stiffener panel that is fully encapsulated in the first portion of the solid silicone rubber sheet. 6. The cover of claim 5, wherein the solid silicone rubber sheet comprises a single seamless solid silicone rubber sheet. 7. The cover of claim 5, further comprising a second stiffener panel that is fully encapsulated in a second portion of the solid silicone rubber sheet. 8. The cover of claim 7, further comprising a first flexible hinge region that is disposed between the first stiffener panel and the second stiffener panel and is formed from the solid silicone rubber sheet. 9. The cover of claim 8, further comprising a hinge mechanism that is disposed in the first flexible hinge region. 10. The cover of claim 9, wherein the hinge mechanism is fully encapsulated in the solid silicone rubber sheet. 11. 
The cover of claim 7, further comprising a second cover panel that includes the second stiffener panel that is fully encapsulated in the second portion of the solid silicone rubber sheet. 12. The cover of claim 11, further comprising a first magnet that is fully encapsulated in the first portion of the solid silicone rubber sheet and a second magnet that is fully encapsulated in the second portion of the solid silicone rubber sheet, wherein the first magnet is positioned in the first solid silicone rubber sheet to align with the second magnet when the cover panel is folded along a first fold line. 13. The cover of claim 11, further comprising: a third stiffener panel that is fully encapsulated in a third portion of the solid silicone rubber sheet; and a second flexible hinge region that is disposed between the second stiffener panel and the third stiffener panel and is formed from the solid silicone rubber sheet. 14. The cover of claim 7, further comprising a first magnet that is fully encapsulated in the first portion of the solid silicone rubber sheet and a second magnet that is fully encapsulated in a different portion of the solid silicone rubber sheet, wherein the first magnet is positioned in the first solid silicone rubber sheet to align with the second magnet when the cover panel is folded in the first flexible hinge region. 15. The cover of claim 5, further comprising a first magnet that is fully encapsulated in the first portion of the solid silicone rubber sheet. 16. The cover of claim 15, wherein the first magnet is positioned in the first solid silicone rubber sheet to align with a coupling component disposed in the computing device to hold the cover in one of an open position or a closed position. 17. The cover of claim 16, wherein the coupling component comprises one of a ferro-magnetic material or a permanent magnet. 18. The cover of claim 5, wherein the solid silicone rubber sheet comprises a thermally cured solid silicone rubber material. 19. 
The cover of claim 5, further comprising a spine member that is fully encapsulated in the first portion of the solid silicone rubber sheet. 20. The cover of claim 19, wherein the spine member is connected to the first panel with a flexible hinge region formed from the solid silicone rubber sheet.
2,600
10,141
10,141
12,605,651
2,625
Systems and methods for using static surface features on a touch-screen for tactile feedback are disclosed. For example, one disclosed system includes a processor configured to transmit a display signal, the display signal comprising a plurality of display elements; and a display configured to output a visual representation of the display signal, the display including: a touch-sensitive input device; and one or more static surface features covering at least a portion of the display.
1. A system comprising: a processor configured to transmit a display signal comprising a plurality of display elements; and a display configured to output a visual representation of the display signal, the display comprising: a touch-sensitive input device; and one or more static surface features covering at least a portion of the display. 2. The system of claim 1, wherein the display comprises a touch-screen display. 3. The system of claim 1, wherein the one or more static surface features comprise: a trough, a ridge, or a curvature. 4. The system of claim 1, wherein the one or more static surface features form: letters or numbers. 5. The system of claim 1, wherein the one or more static surface features form a grid. 6. The system of claim 1, wherein the one or more static surface features correspond to the visual representation of the display signal. 7. The system of claim 1, wherein the one or more static surface features are created by placing a skin over the surface of the display. 8. The system of claim 7, wherein the skin comprises a unique identifier. 9. The system of claim 8, further comprising a sensor capable of detecting the unique identifier. 10. The system of claim 1, wherein the one or more static surface features correspond to the image shown on the display. 11. A method comprising: receiving an indication that a skin comprising at least one static surface feature has been placed over the surface of a touch-screen display; receiving a signal corresponding to a unique identifier associated with the skin; transmitting a display signal to the touch-screen display; and outputting an image associated with the display signal. 12. The method of claim 11, wherein the signal corresponding to the unique identifier is transmitted by the touch-screen display. 13. The method of claim 11, wherein the signal corresponding to the unique identifier is transmitted by a sensor. 14. 
The method of claim 12, wherein the unique identifier is one or more of: a bar code, an RFID, or a magnetic identifier. 15. The method of claim 11, wherein the at least one static surface feature comprises: a trough, a ridge, or a curvature. 16. The method of claim 11, wherein the at least one static surface feature forms: a letter or a number. 17. The method of claim 11, wherein the image is associated with the at least one static surface feature. 18. The method of claim 11, further comprising receiving a signal associated with the at least one static surface feature from a data store. 19. The method of claim 18, wherein the data store is a remote data store accessed via a network interface. 20. A mobile device, comprising: a processor; a touch-sensitive display in communication with the processor and configured to receive a display signal from the processor, the touch-sensitive display further configured to transmit input signals to the processor, the touch-sensitive display comprising: at least one static surface feature covering at least a portion of the touch-sensitive display; and a data store comprising data associated with the at least one static surface feature.
Systems and methods for using static surface features on a touch-screen for tactile feedback are disclosed. For example, one disclosed system includes a processor configured to transmit a display signal, the display signal comprising a plurality of display elements; and a display configured to output a visual representation of the display signal, the display including: a touch-sensitive input device; and one or more static surface features covering at least a portion of the display.1. A system comprising: a processor configured to transmit a display signal comprising a plurality of display elements; and a display configured to output a visual representation of the display signal, the display comprising: a touch-sensitive input device; and one or more static surface features covering at least a portion of the display. 2. The system of claim 1, wherein the display comprises a touch-screen display. 3. The system of claim 1, wherein the one or more static surface features comprise: a trough, a ridge, or a curvature. 4. The system of claim 1, wherein the one or more static surface features form: letters or numbers. 5. The system of claim 1, wherein the one or more static surface features form a grid. 6. The system of claim 1, wherein the one or more static surface features correspond to the visual representation of the display signal. 7. The system of claim 1, wherein the one or more static surface features are created by placing a skin over the surface of the display. 8. The system of claim 7, wherein the skin comprises a unique identifier. 9. The system of claim 8, further comprising a sensor capable of detecting the unique identifier. 10. The system of claim 1, wherein the one or more static surface features correspond to the image shown on the display. 11. 
A method comprising: receiving an indication that a skin comprising at least one static surface feature has been placed over the surface of a touch-screen display; receiving a signal corresponding to a unique identifier associated with the skin; transmitting a display signal to the touch-screen display; and outputting an image associated with the display signal. 12. The method of claim 11, wherein the signal corresponding to the unique identifier is transmitted by the touch-screen display. 13. The method of claim 11, wherein the signal corresponding to the unique identifier is transmitted by a sensor. 14. The method of claim 12, wherein the unique identifier is one or more of: a bar code, an RFID, or a magnetic identifier. 15. The method of claim 11, wherein the at least one static surface feature comprises: a trough, a ridge, or a curvature. 16. The method of claim 11, wherein the at least one static surface feature forms: a letter or a number. 17. The method of claim 11, wherein the image is associated with the at least one static surface feature. 18. The method of claim 11, further comprising receiving a signal associated with the at least one static surface feature from a data store. 19. The method of claim 18, wherein the data store is a remote data store accessed via a network interface. 20. A mobile device, comprising: a processor; a touch-sensitive display in communication with the processor and configured to receive a display signal from the processor, the touch-sensitive display further configured to transmit input signals to the processor, the touch-sensitive display comprising: at least one static surface feature covering at least a portion of the touch-sensitive display; and a data store comprising data associated with the at least one static surface feature.
2,600
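The method claims above describe receiving a unique identifier for an attached skin (a bar code, RFID, or magnetic identifier) and outputting an image associated with the display signal for that skin. A minimal sketch of that lookup, where the registry contents, identifier strings, and image names are all hypothetical placeholders:

```python
# Hypothetical registry mapping skin identifiers to the display image whose
# layout lines up with the skin's static surface features (troughs, ridges,
# letters, numbers, or a grid, per the claims).
SKIN_LAYOUTS = {
    "qwerty-grid": "keyboard_image",
    "numpad-ridges": "numpad_image",
}

def handle_skin_attached(unique_identifier, default="plain_image"):
    """Return the image to output once a skin has been detected; fall back
    to a default layout when the identifier is not in the registry (the
    fallback behavior is an assumption, not claimed)."""
    return SKIN_LAYOUTS.get(unique_identifier, default)
```

In the claims the association data may also come from a data store, including a remote one accessed via a network interface; the in-memory dict here stands in for that store.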
10,142
10,142
15,843,359
2,643
A method and system involve a controller receiving from a phone an orientation signal indicative of an orientation of the phone and adjusting an object according to the orientation of the phone. The controller is configured to select one of the parts of the object according to an initial orientation of the phone and adjust the selected part of the object in correspondence with the orientation of the phone as the orientation of the phone changes. The controller is configured to receive from the phone a control signal indicative of a user input to the phone and to adjust the object according to the user input. The object may be a vehicle seat having a first part in the form of a seat bottom and a second part in the form of a seat back. The object may be a vehicle sunroof.
1. A system comprising: a phone configured to transmit an orientation signal indicative of an orientation of the phone as the orientation of the phone changes; a controller configured to receive the orientation signal from the phone and to adjust a vehicle seat in correspondence with the orientation of the phone as the orientation of the phone changes; and wherein the orientation of the phone changes independently of the vehicle seat while the vehicle seat is being adjusted in correspondence with the orientation of the phone as the orientation of the phone changes. 2. (canceled) 3. The system of claim 1 wherein: the controller is further configured to select one of a plurality of parts of the vehicle seat according to an initial orientation of the phone and to adjust the selected one of the parts of the vehicle seat in correspondence with the orientation of the phone as the orientation of the phone changes. 4. (canceled) 5. The system of claim 1 wherein: the controller is further configured to select a first part of the vehicle seat when an initial orientation of the phone is in a first orientation and to adjust the first part of the vehicle seat in correspondence with the orientation of the phone as the orientation of the phone changes relative to the first orientation; and the controller is further configured to select a second part of the vehicle seat when the initial orientation of the phone is in a second orientation and to adjust the second part of the vehicle seat in correspondence with the orientation of the phone as the orientation of the phone changes relative to the second orientation. 6. The system of claim 5 wherein the first part of the vehicle seat is a vehicle seat bottom and the second part of the vehicle seat is a vehicle seat back. 7. 
The system of claim 1 wherein: the phone is further configured to transmit a selection signal indicative of a user selection of the vehicle seat; and the controller is further configured to adjust the vehicle seat in correspondence with the orientation of the phone after receiving the selection signal indicative of the user selection of the vehicle seat. 8. (canceled) 9. The system of claim 1 wherein: the phone is further configured to transmit a control signal indicative of a user input to the phone; and the controller is further configured to receive the control signal from the phone and adjust the vehicle seat according to the user input. 10-20. (canceled) 21. A method comprising: wirelessly receiving from a phone, by a controller, an orientation signal indicative of an orientation of the phone as the orientation of the phone changes by a user physically moving the phone; and adjusting, by the controller, a vehicle seat in correspondence with the orientation of the phone as the orientation of the phone changes by the user physically moving the phone. 22. The method of claim 21 further comprising: selecting, by the controller, one of a plurality of parts of the vehicle seat according to an initial orientation of the phone and adjusting the selected one of the parts of the vehicle seat in correspondence with the orientation of the phone as the orientation of the phone changes. 23. (canceled) 24. 
The method of claim 21 further comprising: selecting, by the controller, a first part of the vehicle seat when an initial orientation of the phone is in a first orientation and adjusting, by the controller, the first part of the vehicle seat in correspondence with the orientation of the phone as the orientation of the phone changes relative to the first orientation; and selecting, by the controller, a second part of the vehicle seat when the initial orientation of the phone is in a second orientation and adjusting, by the controller, the second part of the vehicle seat in correspondence with the orientation of the phone as the orientation of the phone changes relative to the second orientation. 25. The method of claim 21 further comprising: wirelessly receiving from the phone, by the controller, a selection signal indicative of a user selection of the vehicle seat; and adjusting, by the controller, the vehicle seat in correspondence with the orientation of the phone as the orientation of the phone changes after receiving the selection signal indicative of the user selection of the vehicle seat. 26. The method of claim 21 further comprising: wirelessly receiving from the phone, by the controller, a control signal indicative of a user input to the phone; and adjusting, by the controller, the vehicle seat according to the user input. 27. The system of claim 35 wherein: the controller is further configured to select a first part of the object for adjustment when the phone is in a first orientation and select a different second part of the object for adjustment when the phone is in a different second orientation. 28. The system of claim 27 wherein: the controller is further configured to adjust the selected part of the object in correspondence with the orientation of the phone as the orientation of the phone changes. 29. The system of claim 35 wherein: the object is a vehicle seat. 30. (canceled) 31. The system of claim 35 wherein: the object is a vehicle sunroof. 32. 
The system of claim 27 wherein: the phone is further configured to transmit a control signal indicative of a user input to the phone; and the controller is further configured to receive the control signal from the phone and adjust the selected part of the object according to the user input. 33. The system of claim 1 wherein: the orientation signal at a first time is indicative of the phone being rotated in a first rotating direction and at a second time is indicative of the phone being rotated in a second rotating direction opposite to the first rotating direction; and the controller is further configured to tilt the vehicle seat forward in correspondence with rotation of the phone in the first rotating direction and to tilt the vehicle seat rearward in correspondence with rotation of the phone in the second rotating direction. 34. The system of claim 1 wherein: the orientation signal at a first time is indicative of the phone being moved in a first lateral direction and at a second time is indicative of the phone being moved in a second lateral direction opposite to the first lateral direction; and the controller is further configured to laterally move the vehicle seat forward in correspondence with movement of the phone in the first lateral direction and to laterally move the vehicle seat rearward in correspondence with movement of the phone in the second lateral direction. 35. A system comprising: a phone configured to transmit an orientation signal indicative of an orientation of the phone; a controller configured to receive the orientation signal from the phone and select an object for adjustment when the phone is in an initial orientation assigned to the object; and wherein the phone is remotely located from the object and the initial orientation of the phone is independent of any orientation of the object. 36. 
The system of claim 35 wherein: the controller is further configured to, after having selected the object, adjust the object in correspondence with the orientation of the phone as the orientation of the phone changes.
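The control scheme in claims 3, 5, and 6 can be sketched as a small state machine: the phone's initial orientation selects a seat part (roughly horizontal selects the seat bottom, roughly vertical the seat back), and subsequent orientation changes adjust the selected part in correspondence. The class name, pitch thresholds, and position units below are illustrative assumptions, not from the patent.

```python
# Minimal sketch of the claimed controller: the initial orientation
# selects a part of the vehicle seat, and later orientation signals
# adjust that part in correspondence with the change in orientation.

class SeatController:
    def __init__(self):
        self.selected_part = None  # "seat_bottom" or "seat_back"
        self.positions = {"seat_bottom": 0.0, "seat_back": 0.0}
        self._last_pitch = None

    def on_orientation(self, pitch_deg: float) -> None:
        """Handle an orientation signal received from the phone."""
        if self.selected_part is None:
            # Initial orientation picks the part (claims 3/5): an
            # assumed 45-degree pitch threshold splits the two cases.
            self.selected_part = (
                "seat_bottom" if abs(pitch_deg) < 45 else "seat_back"
            )
            self._last_pitch = pitch_deg
            return
        # Subsequent changes adjust the selected part in correspondence
        # with the orientation of the phone as it changes.
        delta = pitch_deg - self._last_pitch
        self.positions[self.selected_part] += delta
        self._last_pitch = pitch_deg
```

For example, a first signal at 10 degrees selects the seat bottom; a later signal at 25 degrees moves the seat bottom by the 15-degree change.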
2,600
10,143
10,143
15,452,696
2,631
Devices work as both area lighting and ionizing radiation detectors. The lighting is adjustable in response to radiation detection, warning nearby users about radiation. Multiple devices can be plugged into electrical outlets throughout a plant or building to replace conventional lighting like lightbulbs or CFLs. Each device can transmit alerts to notify nearby users and transmit data to processors for aggregation and analysis. The data can be sent wirelessly, over fiber optic cable, as powerline communications or otherwise. The data can be multiplexed along a single line. The devices may be in known locations or located based on an ID in the data. This data can be used to locate radiation sources and facilitate analysis and alerting at the location. Operators may respond to the radiation detection by issuing commands to the devices to change lighting output, adjust radiation detection parameters, and take corrective or ameliorative action in the facility.
1. A combined lighting and radiation detection unit comprising: a ballast configured to removably join to and draw power from an electrical outlet; a light secured with and configured to draw power from the ballast; an ionizing radiation detector secured with and configured to draw power from the ballast; and a controller configured to change output of the light from a constant visible light to a light visibly varying in time in response to ionizing radiation detected by the ionizing radiation detector exceeding a threshold of a regulatory or safe ionizing radiation level. 2. The unit of claim 1, wherein the light, the ballast, and the detector are integrated into a single, modular lighting element, and wherein the lighting element is one of an LED bulb, a fluorescent bulb, and an incandescent bulb. 3. The unit of claim 1, wherein the ionizing radiation detector is a Geiger-Muller tube having a different operating current/voltage than the light, and wherein the light and the ionizing radiation detector are electrically connected in parallel to the ballast. 4. The unit of claim 3, wherein the ionizing radiation detector includes a receiver configured to step voltage and current to the operating current/voltage and a processor configured to transmit a digital signal indicating ionizing radiation detected by the radiation detector. 5. The unit of claim 1, further comprising: a sound alarm, wherein the controller is configured to change light output of the light to a strobe and output an audible alarm sound from the alarm in response to ionizing radiation detected by the ionizing radiation detector exceeding the threshold. 6. The unit of claim 1, wherein the light is an LED, wherein the controller, the light, the ballast, and the radiation detector are all an integrated part of an LED bulb, and wherein the changed light output is a strobing output. 7. 
The unit of claim 1, wherein the ionizing radiation detector is configured to output results of radiation detection through the ballast into the outlet. 8. The unit of claim 7, wherein the ionizing radiation detector further includes a processor configured to output the results as digital signals on a powerline communication through the ballast into the outlet, and wherein the results as digital signals include information of a type and an amount of radiation detected by the ionizing radiation detector. 9. The unit of claim 8, wherein the ballast is configured to electrically connect to a ground line of the outlet so that the digital signals are output on the ground line. 10. A system of combined illumination and radiation detection, the system comprising: a lighting circuit permanently installed in a facility, wherein the circuit has a plurality of electrical outlets; a plurality of combined lighting and radiation detection units each plugged into one of the plurality of outlets, wherein each of the units includes, a ballast configured to removably join to and draw power from the outlet, a light secured with and configured to draw power from the ballast, and an ionizing radiation detector secured with and configured to draw power from the ballast; and a controller configured to change output of the light from a constant visible light to a light visibly varying in time in response to ionizing radiation detected by the ionizing radiation detector exceeding a threshold of a regulatory or safe ionizing radiation level. 11. The system of claim 10, wherein each of the plurality of units are plugged into a ground line of the one of the plurality of outlets, and wherein the plurality of units are configured to output radiation detection information over the ground line. 12. The system of claim 11, wherein the controller is connected to the ground line and configured to extract the radiation detection information from the ground line. 13. 
The system of claim 12, wherein the controller is further configured to determine a location of a radiation source based on the radiation detection information from the plurality of units. 14. The system of claim 13, wherein the controller is further configured to output the locations of radiation sources on a display for human operators. 15. The system of claim 14, wherein the facility is a nuclear power plant, and wherein the display is in a control room of the plant for plant operators. 16. The system of claim 12, further comprising: a multiplexer on the lighting circuit configured to combine output from all of the units onto a single line, wherein the controller is further configured to issue individual commands to each of the plurality of units from the single line through the multiplexer. 17. The system of claim 10, wherein the controller is communicatively connected to all of the plurality of units and configured to receive all ionizing radiation detection information from the ionizing radiation detectors. 18. A method of combined illumination and radiation detection, the method comprising: joining a plurality of combined lighting and radiation detection units each into one of a plurality of electrical outlets, wherein each of the units includes, a ballast configured to removably join to and draw power from the outlet, a light secured with and configured to draw power from the ballast, and an ionizing radiation detector secured with and configured to draw power from the ballast; and changing output of the light in one of the units from a constant visible light to a light visibly varying in time in response to ionizing radiation detected by the ionizing radiation detector in the one of the units exceeding a threshold of a regulatory or safe ionizing radiation level. 19. 
The method of claim 18, further comprising: transmitting a powerline communication from the one of the units through a lighting circuit permanently installed in a facility, wherein the circuit includes the plurality of electrical outlets, and wherein the powerline communication is a digital signal including information of a type and an amount of radiation detected by the ionizing radiation detector. 20. (canceled) 21. The unit of claim 1, wherein the ionizing radiation detector includes an alpha/beta radiation detector on an outward-most side of the light and a gamma radiation detector on an inward side of the light.
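The core behavior of claim 1's controller, switching the light from a constant visible output to an output that visibly varies in time once detected ionizing radiation exceeds a regulatory or safe threshold, can be sketched as a small function. The threshold value, units, and strobe rate below are illustrative assumptions, not from the patent.

```python
# Hedged sketch of the claimed controller logic: below an assumed
# regulatory threshold the light output is constant; above it, the
# output strobes (visibly varies in time) to warn nearby users.

REGULATORY_THRESHOLD_USV_H = 100.0  # assumed example threshold, in uSv/h

def light_output(dose_rate_usv_h: float, t_ms: int) -> float:
    """Return light intensity in [0.0, 1.0] at time t_ms (milliseconds)
    for the given detected dose rate."""
    if dose_rate_usv_h <= REGULATORY_THRESHOLD_USV_H:
        return 1.0  # constant visible light
    # Above threshold: 2 Hz strobe (250 ms on, 250 ms off).
    return 1.0 if (t_ms // 250) % 2 == 0 else 0.0
```

Claim 5's variant would additionally trigger an audible alarm in the above-threshold branch; claim 19's variant would also emit a powerline-communication digital signal carrying the radiation type and amount.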
2,600
10,144
10,144
15,365,146
2,642
A time-dependent information management system and method for a mobile phone that enable the management of time-related information in consideration of a change of time zone are provided. The method includes registering an event planned to occur at a first local time based on a first time zone together with a reference time calculated from the first local time; if the mobile phone enters into a second time zone, selecting the event in accordance with a key input; and setting the event to occur at a second local time calculated from the reference time. The system and method enable selectively adapting, when a mobile phone moves from a first time zone to a second time zone, the times of schedules planned in the first time zone to the local time of the second time zone, thereby effectively managing schedules in an environment crossing multiple time zones.
1. A method for managing time information by a terminal, the method comprising: registering an event planned to occur at a first local time depending on a first time zone together with a reference time calculated from the first local time; selecting, if the terminal enters into a second time zone, the event in accordance with a key input; and setting the event to occur at a second local time calculated from the reference time. 2. The method of claim 1, wherein registering the event comprises: determining whether the first local time is in a summer time period of the first time zone; calculating, if the first local time is not in a summer time period of the first time zone, the reference time without consideration of the summer time; calculating, if the first local time is in a summer time period of the first time zone, the reference time to be one hour advanced; and associating the first local time to the event. 3. The method of claim 1, wherein selecting the event comprises: displaying a list of events registered in the first time zone; and opening the event selected in accordance with a key input in an editing mode. 4. The method of claim 1, wherein setting the event to occur at the second local time comprises: calculating the second local time dependent on the second time zone; determining whether a day of the second local time is in a summer time period of the second time zone; applying, if a day of the second local time is in a summer time period of the second time zone, the summer time to the second local time; and associating the second local time to the event. 5. The method of claim 1, wherein the reference time is Greenwich Mean Time (GMT). 6. 
An apparatus for managing time information for a terminal, the apparatus comprising: a memory for storing at least one event with a first local time at which the event is planned to occur in a first time zone and with a reference time calculated from the first local time; a display for displaying, if the terminal enters into a second time zone, a list of events stored in the memory in accordance with a key input; and a controller for calculating, if an event is selected from the list, a second local time dependent on the second time zone from the reference time and associating the second local time to the event. 7. The apparatus of claim 6, further comprising a transceiver for receiving time zone information in real time in a time zone where the terminal is located. 8. The apparatus of claim 7, wherein the controller analyzes the time zone information in real time, determines whether the first local time is in a summer time period of the first time zone on the basis of an analysis result, and calculates, if the first time is in a summer time, the reference time after subtracting the summer time from first local time. 9. The apparatus of claim 7, wherein the controller analyzes the time zone information, determines whether the second local time is in a summer time period of the second time zone on the basis of an analysis result, and calculates, if the second local time is in the summer time period, the second local time by applying a summer time. 10. The apparatus of claim 6, wherein the reference time is Greenwich Mean Time (GMT).
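The claimed method, stripped to its essentials, stores each event with a GMT/UTC reference time and re-derives the local time in whichever zone the terminal has entered. A minimal sketch, assuming hypothetical function names of my own choosing (this is not the patent's implementation); Python's `zoneinfo` applies summer time automatically, where the claims perform an explicit DST check:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def register_event(local_dt: datetime, tz_name: str) -> datetime:
    """Convert a planned first-local-time to the GMT reference time."""
    first_local = local_dt.replace(tzinfo=ZoneInfo(tz_name))
    return first_local.astimezone(timezone.utc)

def local_time_in(reference: datetime, tz_name: str) -> datetime:
    """Second local time calculated from the reference time."""
    return reference.astimezone(ZoneInfo(tz_name))

# Usage: meeting planned for 09:00 in Seoul; terminal later enters New York.
ref = register_event(datetime(2024, 7, 1, 9, 0), "Asia/Seoul")
ny = local_time_in(ref, "America/New_York")
print(ref.isoformat())  # 2024-07-01T00:00:00+00:00
print(ny.isoformat())   # 2024-06-30T20:00:00-04:00 (summer time applied)
```

Anchoring on a reference time rather than the first local time is what makes the adaptation in claim 1 a single conversion per zone change.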
2,600
10,145
10,145
12,717,893
2,651
As a user interacts with a voice application, a history of the prompts played to the user and the user's responses is displayed to the user. The displayed prompts and responses could be summaries of the prompts and responses, or they could be full transcriptions. A user may be able to select a prompt or response in the history to return to a certain point in the voice application. It may also be possible for a user to save a history of the interactions that occurred when a voice application was performed, and to recall the history to continue from a selected location in the history.
1. A method of displaying a history of an interactive voice response (IVR) session, comprising: recording audio prompts played to a user by a voice application; recording a user's responses to audio prompts from a voice application; and displaying a history of the audio prompts and the user responses, wherein each time that a new audio prompt is played to a user, the prompt is added to the history, and wherein each time that a user makes a response, the response is added to the history.
2,600
10,146
10,146
12,798,613
2,689
During any kind of traffic management involving consolidation or compression, or converging readouts or outputs, or shortening of individual vehicle headway, vehicles in a moving pattern must necessarily get closer together. For the particular case of taking random traffic approaching a traffic signal and consolidating traffic to go through the signal during the green phase, vehicles must be substantially consolidated at a ratio of the service cycle of the traffic signal to the "net" green during which time they all pass through the signal. This remains true for both autonomic and adjustable, adaptable phase-changing traffic management systems. Traffic density (passing vehicle number per time) is measured before or at the beginning of traffic management. Density figures are compared under a predetermined scheduling/convergence mechanism that takes into account how dense the traffic will be, or how close vehicles will be with respect to one another. If the pre-compressed, pre-converging density is found to be lean or sparse enough for traffic management to function, outputs are allowed to continue and traffic management is allowed to remain open. If the pre-compressed, pre-converging density is found to be too dense, that is, if the densest place or duration in traffic management is too close for a safe headway or reaction time, then traffic management/outputs are suspended.
1. A safety shutoff device for a traffic management system, comprising: a counter that counts density of traffic flow including passing vehicles per time, and a processor that analyzes whether the count is lean/sparse enough, or too dense, to allow the traffic management system to function. 2. The device of claim 1 wherein traffic might converge/be brought closer together in time and space or with reducing individual vehicle headway due to management. 3. The device of claim 2 wherein the processor might take into account the closeness in space and time, for example as a headway, following distance, or reaction time, and analyze for the closest that said following distance might be, and process and allow for that closeness when said traffic was far apart, including when said far-apart traffic was a random pattern approaching said place where traffic management occurs. 4. A device of claim 3 that includes means of sensing vehicles approaching an area that is traffic managed, said device including means for detecting and counting individual vehicles per time in one or more lanes of a roadway in one or more directions, primarily where moving zones or patterns are ultimately to be filled with traffic, wherein heightened mobility is in use, wherein said heightened mobility zones are associated with a traffic signal, wherein said traffic management includes compressing, condensing, converging a previously random string of traffic into a net green pattern length and time period such that said pattern is intended to go through the intersection of said traffic signal while said signal is open or green, wherein said previously random strings of traffic are divided up in lengths and time periods of the service cycle of said traffic signal, wherein said traffic management system is conducive to counting individual vehicles per time and switching on or off, depending on whether net density is safe enough, a means of ascertaining whether there are too many vehicles counted per time for
said vehicles to have a net safe following distance as they are in their final run, or most dense part of run, while going through traffic management, wherein there is a Boolean condition, or “go/no go” condition choice that dictates whether it is either safe, or not safe for traffic management, wherein place or slot in said previous random pattern associated with following distance, space time, reaction time at beginning of where traffic is managed is determined by the product of a standardized safety following factor and the ratio of the service cycle of the intersection and the net green, wherein said net green is space time that total service cycle Pi space time in previous before-management random string was compressed, converged, or condensed to, and wherein the density of said space time during net green Tng is either too dense or lean enough to still maintain vehicles, all with adequate headway, reaction time, relative safe following distance, wherein said standardized headway/safety following factor is a standardized safe following space time wherein said space time can be taken as the time that accumulates between the instant when a leading vehicle passes a relative static reference point by roadside, and the instant a following vehicle passes the same reference point, wherein that time can also be interpreted as a relative distance between the two moving vehicles, wherein service cycle is a repeating cycle of signaled intersection, wherein the net green is the time intended for the managed traffic to go through while signal is open, and containing the necessary buffers, secondary buffers that may add to said net green wherein said buffers would account for late arrivals, early arrivals, wayward traffic, absorption of specially assigned vehicles . . . 
, wherein there may be more mobility involved as a result of traffic management, but more importantly, more safety in preventing traffic management from causing too dense of traffic to result because of too close of headways or following distances or reaction times. 5. The device of claim 4 wherein the equation that describes the safe following headway in terms of time is: P_safe_following ≥ (P_i / T_ng) × SF, wherein P_i is the service cycle period of the intersection, SF, the 'Safe Follow Factor', is the minimum safe headway, reaction time, space time, or following distance, T_ng is the "net" green part of the green phase through which the consolidated, compressed traffic is intended to go, and P_safe_following is the pre-processed bracket for the eventual reaction time, space time, or following distance, taken at the beginning of traffic management or before, wherein there can be a certain number of P_safe_following places or slots per T_ng. 6. The system of claim 4 wherein a safety limit is established at an initial contact headway "space-time" expansion wherein there is an expanded time measured, counted, and processed such that after any consolidation, compression (per time), convergence, or traffic management, the final safety following "space-time" or established headway would be no smaller than a net safety following "space-time" or established headway, wherein said "space-time" or established headway will create what is equal to or greater than a commonly agreed upon setting of safe relative following distance/"space time" or established headway, wherein said "lean enough traffic" is defined as a set number of places or slots, each where its space-time/headway is still adequate for the motorist to have an adequate reaction time. 7. The system of claim 4 wherein said process of analyzing whether or not traffic is too dense or lean enough works under a feeding-in condition. 8. 
The device of claim 7 wherein open random traffic developing into "aggregate" time considerations and open random traffic developing into "aggregate" pattern-length considerations apply to moving developing patterns relative to a static access point, and to relative positions of moving vehicles and moving traffic, including the possibility of shutting traffic management systems/functions off or on, making decisions part-way into traffic management patterns or part-way out of said patterns (including within patterns and multiple times), wherein fragments of patterns could still be acceptable to turn traffic management systems on or off, wherein portions of patterns, i.e. Pi and Tng, can still remain operational, wherein there can be more mobility, wherein there can be allowance for continuity in traffic management, wherein too much traffic density in a phase fragment can be avoided as could happen if shut-off had to start early, allowing for the rear portion of a phase to still be open and provide for more mobility, and wherein the danger of too much traffic density in a phase fragment can be avoided as could happen if shut-off had to wait till the end of a phase, providing for more safety, wherein the system processor can wait for partial Pi through multiple Pi before turning back on. 9. The system of claim 7 wherein said analysis can be applied on a differential basis, wherein said differential increments can range in the smaller magnitudes (higher frequency) from many thousands of increments per second, or many increments per second, wherein the other extreme of said range can include increments that are large enough to include a combination of vehicle and following distance, headway, reaction time, space time, and confidence factor, wherein said combination could amount to multiple whole seconds. 10. The system of claim 4 wherein said system can be installed in an integrated condition. 11. 
The system of claim 4 including an option for manual, automatic inputs, wherein said manual and/or automatic inputs can range from being partially or totally in control of said system. 12. The system of claim 4 wherein while the traffic management system is functioning, readouts take place, and while the system is in "shut down" or "no go" mode an appropriate message corresponding to "shut down" or "no go", perceivable by motorists, still remains. 13. The system of claim 10 wherein said message is appropriately integrated with readout methodology, wherein said easily perceived offline output can include a "FP"-type readout (i.e. "Full Pattern") for alphanumeric readouts; a "Drive Safely"-type sentence for message-board-type readouts; an appropriate graphical semaphore for non-sentence, non-alphanumeric readouts; wherein easily perceived output can include audio messages. 14. The "go/no go" system of "safe or not safe" in claim 4 wherein "go/no go" is established by whether a set number of places per net green Tng is exceeded or not, wherein said set number can also include the closest rounded-off number of places wherein traffic management can happen, wherein said place can be any combination of vehicle plus appropriate following distance, reaction time, space time, and confidence factor, wherein said places are detected within a Pi where they are spread out substantially the most that they will be (i.e. in random approach substantially before detection/counting/traffic managing). 15. The system of claim 4 that acts with dynamically changing and/or adapting traffic management systems (i.e. sensor based) wherein said set number of places (in claim 14) can change along with the adaptability of the traffic management system. 16. The system of claim 15 wherein there is a shrinking and/or expanding "net green" Tng, and therefore number-of-place changes, wherein said Pi is constant while said Tng changes. 17. 
The system of claim 15 wherein there is a shrinking or expanding Service Cycle Pi, including number-of-place changes in an also-changing Tng. 18. The system of claim 15 where the safety shutoff adaptively changes in "jumps" or increments, wherein there is a range between numerous (i.e. multiple seconds per time, or many times a second) calculations or analytical scans and a multitude (i.e. many or more thousands of times per second) of analytical scans, wherein said scans individually determine whether the dynamic change is going in one direction or the other, wherein the frequency of scans could range between "independent analyses" and a constant "vision" of the direction the dynamic change is going in, wherein the increments either add to/subtract from a buffer, or if past a certain limit/delineation, add or subtract a whole "place" (as from claims 14, 15), and wherein if need be, said delineation would be required to surpass the necessary extra safety time buffers (to prevent loss of continuity by too rapidly changing back and forth), wherein the increasing or decreasing places can apply to an expanding/contracting Pi, wherein the increasing or decreasing places can apply to an expanding/contracting Tng, including through unified phase increases or decreases, or opposing-direction tradeoff increases/decreases. 19. The system of claim 10 wherein there is an option for parasitic installation, i.e. wherein safety hardware can be mounted in with already existing hardware in a piggyback or parasite condition, and wherein safety hardware can be worked in with already existing infrastructure. 20. The system of claim 11 wherein said manual inputs can include, for example, manual intervention, manual shut-down, prompting and preempting, including aboard emergency vehicles, and prompting or preempting from remote locations (i.e. 
control consoles), wherein examples of automatic inputs include, time of day shutoff, bad conditions shutoff, scheduled shutoff, emergency vehicle shutoff, automatic proximity, i.e. lights, sound, emergency automatic on-board rf/radio frequency, school bus proximity shutoff, prompting, preempting, delay shutoff, wherein with said delay shutoff, system can wait for periods ranging from partial Pi through multiple Pi before turning back on, wherein said manual and automatic inputs can make system perform more safely.
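The go/no-go test of claims 1, 5, and 14 reduces to an arithmetic check: the pre-compression headway must be at least the safe-follow factor scaled by the compression ratio Pi/Tng, which caps the number of safe "places" per service cycle. A minimal sketch, with function names of my own invention, assuming headways are expressed in seconds:

```python
# Illustrative check of the claim-5 relation
# P_safe_following >= (Pi / Tng) * SF: before compression, each
# vehicle's headway must be at least the safe-follow factor scaled
# by the compression ratio service-cycle / net-green.

def max_safe_vehicle_count(pi_s: float, tng_s: float, sf_s: float) -> int:
    """Number of safe 'places' per service cycle: Pi / ((Pi/Tng)*SF)."""
    p_safe_following = (pi_s / tng_s) * sf_s
    return int(pi_s // p_safe_following)

def traffic_manageable(vehicles_per_cycle: int, pi_s: float,
                       tng_s: float, sf_s: float) -> bool:
    """Go/no-go: True if the counted density leaves adequate headway."""
    return vehicles_per_cycle <= max_safe_vehicle_count(pi_s, tng_s, sf_s)

# 90 s service cycle, 30 s net green, 2 s safe-follow factor:
# P_safe_following = 6 s, so at most 15 vehicles per cycle.
print(max_safe_vehicle_count(90, 30, 2))   # 15
print(traffic_manageable(12, 90, 30, 2))   # True (lean enough)
print(traffic_manageable(20, 90, 30, 2))   # False (suspend outputs)
```

Note that Pi / ((Pi/Tng)·SF) simplifies to Tng/SF, consistent with the claims' statement that the set number of places is counted per net green Tng.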
During any kind of traffic management involving consolidation or compression, or converging readouts or outputs, shortening of individual vehicle headway, there is a necessity that vehicles in a moving pattern must get closer together. For the particular case of taking random traffic approaching a traffic signal and consolidating traffic to go through the signal during the green phase, vehicles must be substantially consolidated at a ratio of the service cycle of the traffic signal to the “net” green during which time they all pass through the signal. This remains true for both autonomic as well as adjustable adaptable phase-changing traffic management systems. Traffic density (passing vehicle number per time) is measured before or at the beginning of traffic management. Density figures are compared under a predetermined scheduling/convergence mechanism that takes into account the densest traffic will be, or the closest vehicles will be with respect to one another. If the pre-compressed, pre-converging density is found to be lean or sparse enough for traffic management to function, outputs are allowed to continue and traffic management is allowed to remain open. If the pre-compressed, pre-converging density is found to be too dense, that is, if the densest place or duration in traffic management is too close for a safe headway or reaction time, than traffic management/outputs are suspended.1. A safety shutoff device for a traffic management system that comprises of: a counter that counts density of traffic flow including passing vehicles per time, a processor that analyzes whether the count is lean/sparse enough, or too dense to allow traffic management system to function. 2. the device of claim 1 wherein traffic might converge/be brought closer together in time and space or with reducing individual vehicle headway due to management. 3. 
The device of claim 2 wherein processor might take into account the closeness in space and time, for example as a headway, following distance, reaction time, and analyze for the closest that said following distance might be, and process and allow for that closeness when said traffic was far apart and including when said far apart traffic was a random pattern approaching said place where traffic management occurs. 4. A device of claim 3 that includes means of sensing vehicles approaching an area that is traffic managed, Said device including means for detecting and counting individual vehicles per time in one or more lanes of a roadway in one or more directions, primarily where moving zones or patterns are ultimately to be filled with traffic, wherein heightened mobility is in use, wherein said heightened mobility zones are associated with a traffic signal, wherein said traffic management includes compressing, condensing, converging a previously random string of traffic into a net green pattern length and time period such that said pattern is intended to go through intersection of said traffic signal while said signal is open or green, wherein said previously random strings of traffic are divided up in lengths and time periods of the service cycle of said traffic signal wherein said traffic management system is conducive to counting individual vehicles per time and switching on or off, depending on whether net density is safe enough, a means of ascertaining as to whether there are too many vehicles counted per time for said vehicles to have a net safe following distance as they are in their final run, or most dense part of run, while going through traffic management, wherein there is a Boolean condition, or “go/no go” condition choice that dictates whether it is either safe, or not safe for traffic management, wherein place or slot in said previous random pattern associated with following distance, space time, reaction time at beginning of where traffic is managed 
is determined by the product of a standardized safety following factor and the ratio of the service cycle of the intersection and the net green, wherein said net green is space time that total service cycle Pi space time in previous before-management random string was compressed, converged, or condensed to, and wherein the density of said space time during net green Tng is either too dense or lean enough to still maintain vehicles, all with adequate headway, reaction time, relative safe following distance, wherein said standardized headway/safety following factor is a standardized safe following space time wherein said space time can be taken as the time that accumulates between the instant when a leading vehicle passes a relative static reference point by roadside, and the instant a following vehicle passes the same reference point, wherein that time can also be interpreted as a relative distance between the two moving vehicles, wherein service cycle is a repeating cycle of signaled intersection, wherein the net green is the time intended for the managed traffic to go through while signal is open, and containing the necessary buffers, secondary buffers that may add to said net green wherein said buffers would account for late arrivals, early arrivals, wayward traffic, absorption of specially assigned vehicles . . . , wherein there may be more mobility involved as a result of traffic management, but more importantly, more safety in prevention of traffic management causing too dense of traffic to result because of too close of headways or following distances or reaction times. 5. 
The device of claim 4 wherein the equation that describes the safe following headway in terms of time is: P safe   following ≥ P i t ng  ( SF ) Wherein Pi is service cycle period of intersection, SF, ‘Safe follow Factor’ is the minimum safest headway, reaction time; space time; following distance, Tng is the “net” green part of green phase where the consolidated compressed traffic is intended to go through, P safe following is the pre-processed bracket for eventual reaction time; space time following distance, taken at beginning of traffic management or before, wherein there can be a certain number of Psafe following places or slots per Tng. 6. The system of claim 4 wherein safety limit is established at an initial contact headway “space-time” expansion wherein there is an expanded time measured, counted, processed such that after any consolidation, compression (per time), convergence, traffic management, that final safety following “space-time” or established headway would be no smaller than a net safety following “space-time” or established headway wherein said “space-time” or established headway will create what is equal to or greater than a commonly agreed upon setting of safe relative following distance/“space time” or established headway, wherein said “lean enough traffic” is defined where a set number of places or slots, each where its space-time/headway is still ok for motorist to have an adequate reaction time. 7. The system of claim 4 wherein said process of analyzing whether or not traffic is too dense or lean enough works under a feeding in condition. 8. 
The device of claim 7 wherein open random traffic developing into “aggregate” time considerations and open random traffic developing into “aggregate” pattern-length considerations apply to moving developing patterns relative to a static access point, and to relative positions of moving vehicles; moving traffic, Including the possibility of shutting traffic management systems/functions off or on, making decisions part-way into traffic management patterns or part way out of said patterns (including within patterns and multiple times), wherein fragments of patterns could still be OK to turn on or off traffic management systems, wherein portions of patterns, i.e. Pi and Tng can still remain operational, wherein there can be more mobility, wherein there can be allowance for continuity in traffic management, and wherein too much traffic density in a phase fragment can be avoided as could happen if shut-off had to start early allowing for the rear portion of a phase to still be open and provide for more mobility, and wherein the danger of too much traffic density in a phase fragment can be avoided as could happen if shut-off had to wait till end of phase providing for more safety. wherein system processor can wait for partial Pi through multiple Pi before turning back on 9. The system of claim 7 wherein said analysis can be applied on a differential basis, wherein said differential increments can range in the smaller magnitudes (higher frequency) from many thousands of increments per second, or many increments per second, wherein other extreme of said range can include increments that are large enough to include combination of vehicle and following distance, headway, reaction time, space time, and confidence factor, wherein said combination could be amount to a multiple number of whole seconds. 10. The system of claim 4 wherein said system can be installed in an integrated condition. 11. 
The system of claim 4 including option for manual, automatic inputs, wherein said manual and/or automatic inputs can range from being partial or totally in control of said system, 12. The system of claim 4 wherein while the traffic management system is functioning, that readouts take place, and while system is in “shut down” or “no go” mode that an appropriate message corresponding to “shut down” or “no go” perceivable by motorist still remains. 13. The system of claim 10 wherein said message is appropriately integrated with readout methodology, wherein said easily perceived offline output can include a “FP”-type readout (i.e. “Full Pattern”) for alphanumeric readouts; a “Drive Safely”-type sentence for message board type readouts; an appropriate graphical semaphore for non-sentence, non alphanumeric readouts. wherein easily perceived output can include audio messages 14. The “go/no go system of “safe or not safe” in claim 4 wherein “go/no go” is established by whether a set number of places per net green Tng is exceeded or not, wherein said set number can also include closest rounded off number of places wherein traffic management can happen, wherein said place can be any combination of vehicle plus appropriate following distance, reaction time, space time, and confidence factor, wherein said places are detected within a Pi where they are spread out substantially the most that they will be (i.e. in random approach substantially before detection/counting/traffic managing). 15. The system of claim 4 that acts with dynamically changing and/or adapting traffic management systems (i.e. sensor based) wherein said set number of places (in claim 14) can change along with the adaptability of the traffic management system. 16. The system of claim 15 wherein there is a shrinking and or expanding “net green” Tng, and therefore number of place changes wherein said Pi is constant while said Tng changes. 17. 
The system of claim 15 wherein there is a shrinking or expanding Service Cycle Pi, and including number of place changes in an also-changing Tng. 18. The system of claim 15 where the safety shutoff adaptively changes in “jumps” or increments, wherein there is a range between numerous (i.e. multiple seconds per time, or many times a second) calculations or analytical scans and a multitude (i.e. many or more thousands of times per second) of analytical scans, wherein said scans individually determine whether the dynamic change is going in one direction or the other, wherein the frequency of scans could range between “independent analyses” and a constant “vision” of the direction the dynamic change is going in, wherein the increments either add to/subtract from a buffer, or if past a certain limit/delineation, add or subtract a whole “place” (as from claims 14, 15), and wherein if need be, said delineation would be required to surpass the necessary extra safety time buffers (to prevent loss of continuity by too rapidly changing back and forth), wherein the increasing or decreasing places can apply to an expanding/contracting Pi, wherein the increasing or decreasing places can apply to an expanding/contracting Tng, including through unified phase increases or decreases, or opposing-direction tradeoff increases/decreases. 19. The system of claim 10 wherein there is an option for parasitic installation, i.e. wherein safety hardware can be mounted in with already existing hardware in a piggyback or parasite condition, and wherein safety hardware can be worked in with already existing infrastructure. 20. The system of claim 11 wherein said manual inputs can include, for example, manual intervention, manual shut-down, prompting, preempting, prompting or preempting aboard emergency vehicles, prompting or preempting from remote locations (i.e. 
control consoles), wherein examples of automatic inputs include time-of-day shutoff, bad-conditions shutoff, scheduled shutoff, emergency vehicle shutoff, automatic proximity shutoff (i.e. lights, sound, emergency automatic on-board rf/radio frequency), school bus proximity shutoff, prompting, preempting, and delay shutoff, wherein with said delay shutoff, the system can wait for periods ranging from partial Pi through multiple Pi before turning back on, and wherein said manual and automatic inputs can make the system perform more safely.
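The go/no-go test of claims 9 and 14 can be illustrated with a rough sketch: one "place" is a vehicle plus its following distance, reaction time, space time, and confidence factor, and the system stays "go" only while the detected vehicle count fits in the rounded-off number of places available in the net green time Tng. All timing constants below are invented for illustration and are not taken from the patent.

```python
# Hypothetical "place" components, in seconds (assumed values, not from the patent).
VEHICLE_TIME = 1.0   # time a vehicle occupies the detection point
FOLLOWING = 0.8      # following-distance / headway time
REACTION = 0.7       # driver reaction time
SPACE_TIME = 0.3     # extra spacing time
CONFIDENCE = 0.2     # safety confidence factor

def place_seconds():
    """One 'place': vehicle time plus all safety time components."""
    return VEHICLE_TIME + FOLLOWING + REACTION + SPACE_TIME + CONFIDENCE

def go_no_go(detected_vehicles, tng_seconds):
    """'go' if the detected vehicles fit into the rounded-off number of
    places available in the net green time Tng, else 'no go'."""
    places_available = int(tng_seconds // place_seconds())
    return "go" if detected_vehicles <= places_available else "no go"
```

With these assumed constants a place is about 3 seconds, so a 30-second net green admits roughly ten places; fewer detected vehicles than that keeps the system in "go".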
2,600
10,147
10,147
14,240,853
2,619
The present invention relates to vascular treatment outcome visualization. To provide an enhanced possibility to check that a vascular treatment has been correctly performed, it is proposed to provide ( 112 ) a first image data ( 114 ) of a region of interest of a vascular structure at a first point in time, and to provide ( 116 ) at least one second image data ( 118 ) of the region of interest of the vascular structure at a second point in time, wherein a vascular treatment has been applied to the vascular structure between the first point in time and the second point in time. Further, the first and the at least one second image data are combined ( 120 ) generating a joint outcome visualization image data ( 122 ) and the joint outcome visualization image data is displayed ( 124 ).
1. A device (10) for vascular treatment outcome visualization, comprising: a processing unit (12); an interface unit (14); and a display unit (16); wherein the interface unit is configured to provide the processing unit with a first image data (20) of a region of interest of a vascular structure at a first point in time; and to provide the processing unit with at least one second image data (22) of a region of interest of a vascular structure at a second point in time; wherein, between the first point in time and the second point in time, a vascular treatment is applied to the vascular structure; wherein the first point in time relates to a pre-treatment state, and the second point in time relates to a post-treatment state; wherein between the pre-treatment state and the post-treatment state, a medical intervention has been performed affecting the vascular structure in the region of interest; wherein the processing unit is configured to combine the first and the at least one second image data to generate a joint outcome visualization image data (24); and wherein the display unit is configured to display the joint outcome visualization image data. 2. Device according to claim 1, wherein the processing unit is configured to register the first and the second image data for the combination. 3. Device according to claim 1, wherein the vascular treatment comprises a placement of a predetermined medical device inside a vascular structure; wherein the interface unit is configured to provide a device image data of the placed device; wherein the processing unit is configured to combine the device image data, in addition to the first image data and the second image data, to generate the joint outcome visualisation image data. 4. Device according to claim 3, wherein the processing unit is configured to register the device image data with at least the first or/and the second image data. 5. 
Device according to claim 3, wherein the processing device is configured to provide the device image data as an outcome of at least one image processing sub-step, which image processing sub-step is based on a plurality of secondary image data of the region of interest of the vascular structure after the vascular treatment. 6. Device according to claim 3, wherein for the second image data: the interface unit is configured to provide a plurality of images of a first subset of images, in which the device is visible; and to provide at least one image of a second subset of images as a mask image data, in which the vascular structure is visible; wherein the first subset of images and the second subset of images relate to a point in time after the vascular treatment has been applied; and the processing unit is configured to register and combine the plurality of images of the first subset of images to each other along time to generate a boosted device image data in which regions relating to the device are boosted; and wherein the processing unit is configured to combine the first image data, the boosted device image data and the mask image data to generate the joint outcome visualization image data. 7. Device according to claim 1, wherein the interface unit is configured to provide a first sequence of first images; and to provide a second sequence of second images; wherein the sequences comprise several images along time; and wherein the processing unit is configured to temporally register the first and the second sequences. 8. Device according to claim 1, wherein the interface unit is configured to provide further image data of the region of interest of the vascular structure at a further point in time; wherein the further point in time is arranged between the first and the second point in time; and wherein the processing unit is configured to also combine the further image data for the generation of the joint outcome visualization image data. 9. 
A medical imaging system (50) for vascular treatment outcome visualization, comprising: an image acquisition unit (52); and a device (54) for vascular treatment outcome visualization according to claim 1; wherein the image acquisition unit is configured to provide the first image data of the region of interest of the vascular structure; and to provide the second image data of the region of interest of the vascular structure. 10. A method (100) for vascular treatment outcome visualization, comprising the following steps: a) providing (112) a first image data (114) of a region of interest of a vascular structure at a first point in time; b) providing (116) at least one second image data (118) of the region of interest of the vascular structure at a second point in time; wherein a vascular treatment has been applied to the vascular structure between the first point in time and the second point in time; c) combining (120) the first and the at least one second image data generating a joint outcome visualization image data (122); and d) displaying (124) the joint outcome visualization image data. 11. Method according to claim 10, wherein the vascular treatment comprises placing a predetermined medical device inside the vascular structure; wherein, in addition to the first image data and the second image data, a device image data (132) of the placed device is also combined (134) to generate the joint outcome visualisation image data (122). 12. 
Method according to claim 10, wherein for the second image data: a plurality of images of a first subset (146) of images, in which the device is visible, are registered (148) to each other along time generating a boosted device image data (150) in which regions relating to the device are boosted; and at least one image of a second subset (152) of images is provided as a mask image data (154), in which the vascular structure is visible; wherein the first subset of images and the second subset of images relate to a point in time after the vascular treatment has been applied; and wherein in step c), the first image data, the boosted device image data and the mask image data are combined (156) to generate the joint outcome visualization image data (122). 13. Method according to claim 10, wherein further image data of the region of interest of the vascular structure at a further point in time is provided; wherein the further point in time is arranged between the first and the second point in time; and wherein the further image data is also combined in step c). 14. Computer program element for controlling an apparatus according to claim 1, which, when being executed by a processing unit. 15. Computer readable medium having stored the program element of claim 14.
The present invention relates to vascular treatment outcome visualization. To provide an enhanced possibility to check that a vascular treatment has been correctly performed, it is proposed to provide ( 112 ) a first image data ( 114 ) of a region of interest of a vascular structure at a first point in time, and to provide ( 116 ) at least one second image data ( 118 ) of the region of interest of the vascular structure at a second point in time, wherein a vascular treatment has been applied to the vascular structure between the first point in time and the second point in time. Further, the first and the at least one second image data are combined ( 120 ) generating a joint outcome visualization image data ( 122 ) and the joint outcome visualization image data is displayed ( 124 ).1. A device (10) for vascular treatment outcome visualization, comprising: a processing unit (12); an interface unit (14); and a display unit (16); wherein the interface unit is configured to provide the processing unit with a first image data (20) of a region of interest of a vascular structure at a first point in time; and to provide the processing unit with at least one second image data (22) of a region of interest of a vascular structure at a second point in time; wherein, between the first point in time and the second point in time, a vascular treatment is applied to the vascular structure; wherein the first point in time relates to a pre-treatment state, and the second point in time relates to a post-treatment state; wherein between the pre-treatment state and the post-treatment state, a medical intervention has been performed affecting the vascular structure in the region of interest; wherein the processing unit is configured to combine the first and the at least one second image data to generate a joint outcome visualization image data (24); and wherein the display unit is configured to display the joint outcome visualization image data. 2. 
Device according to claim 1, wherein the processing unit is configured to register the first and the second image data for the combination. 3. Device according to claim 1, wherein the vascular treatment comprises a placement of a predetermined medical device inside a vascular structure; wherein the interface unit is configured to provide a device image data of the placed device; wherein the processing unit is configured to combine the device image data, in addition to the first image data and the second image data, to generate the joint outcome visualisation image data. 4. Device according to claim 3, wherein the processing unit is configured to register the device image data with at least the first or/and the second image data. 5. Device according to claim 3, wherein the processing device is configured to provide the device image data as an outcome of at least one image processing sub-step, which image processing sub-step is based on a plurality of secondary image data of the region of interest of the vascular structure after the vascular treatment. 6. Device according to claim 3, wherein for the second image data: the interface unit is configured to provide a plurality of images of a first subset of images, in which the device is visible; and to provide at least one image of a second subset of images as a mask image data, in which the vascular structure is visible; wherein the first subset of images and the second subset of images relate to a point in time after the vascular treatment has been applied; and the processing unit is configured to register and combine the plurality of images of the first subset of images to each other along time to generate a boosted device image data in which regions relating to the device are boosted; and wherein the processing unit is configured to combine the first image data, the boosted device image data and the mask image data to generate the joint outcome visualization image data. 7. 
Device according to claim 1, wherein the interface unit is configured to provide a first sequence of first images; and to provide a second sequence of second images; wherein the sequences comprise several images along time; and wherein the processing unit is configured to temporally register the first and the second sequences. 8. Device according to claim 1, wherein the interface unit is configured to provide further image data of the region of interest of the vascular structure at a further point in time; wherein the further point in time is arranged between the first and the second point in time; and wherein the processing unit is configured to also combine the further image data for the generation of the joint outcome visualization image data. 9. A medical imaging system (50) for vascular treatment outcome visualization, comprising: an image acquisition unit (52); and a device (54) for vascular treatment outcome visualization according to claim 1; wherein the image acquisition unit is configured to provide the first image data of the region of interest of the vascular structure; and to provide the second image data of the region of interest of the vascular structure. 10. A method (100) for vascular treatment outcome visualization, comprising the following steps: a) providing (112) a first image data (114) of a region of interest of a vascular structure at a first point in time; b) providing (116) at least one second image data (118) of the region of interest of the vascular structure at a second point in time; wherein a vascular treatment has been applied to the vascular structure between the first point in time and the second point in time; c) combining (120) the first and the at least one second image data generating a joint outcome visualization image data (122); and d) displaying (124) the joint outcome visualization image data. 11. 
Method according to claim 10, wherein the vascular treatment comprises placing a predetermined medical device inside the vascular structure; wherein, in addition to the first image data and the second image data, a device image data (132) of the placed device is also combined (134) to generate the joint outcome visualisation image data (122). 12. Method according to claim 10, wherein for the second image data: a plurality of images of a first subset (146) of images, in which the device is visible, are registered (148) to each other along time generating a boosted device image data (150) in which regions relating to the device are boosted; and at least one image of a second subset (152) of images is provided as a mask image data (154), in which the vascular structure is visible; wherein the first subset of images and the second subset of images relate to a point in time after the vascular treatment has been applied; and wherein in step c), the first image data, the boosted device image data and the mask image data are combined (156) to generate the joint outcome visualization image data (122). 13. Method according to claim 10, wherein further image data of the region of interest of the vascular structure at a further point in time is provided; wherein the further point in time is arranged between the first and the second point in time; and wherein the further image data is also combined in step c). 14. Computer program element for controlling an apparatus according to claim 1, which, when being executed by a processing unit. 15. Computer readable medium having stored the program element of claim 14.
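A minimal sketch of the combination step of claims 1 and 10 (step c), assuming the pre- and post-treatment images are already registered grayscale arrays. The red/green channel assignment is an illustrative choice for a joint visualization, not the patent's specific method.

```python
import numpy as np

def joint_outcome_visualization(pre, post):
    """Combine registered pre- and post-treatment grayscale images into one
    RGB composite: pre-treatment drives the red channel and post-treatment
    the green channel, so structures present in both render yellowish and
    treatment-induced differences stand out in red or green."""
    pre = pre.astype(np.float32)
    post = post.astype(np.float32)
    # normalize each image to [0, 1]; the epsilon guards constant images
    pre = (pre - pre.min()) / (pre.max() - pre.min() + 1e-9)
    post = (post - post.min()) / (post.max() - post.min() + 1e-9)
    # blue channel left empty so the two time points remain distinguishable
    return np.stack([pre, post, np.zeros_like(pre)], axis=-1)
```

The device image data of claims 3 and 11 could be blended in as a third layer in the same way, after registering it to the other two images.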
2,600
10,148
10,148
13,809,981
2,616
From only a single two-dimensional source image ( 20 ) of a scene, multiple images ( 28 ) of the scene are generated, each image from a different viewing direction or angle. For each of the multiple images, a disparity ( 24 ) is generated corresponding to the viewing direction and combined with significant pixels (e.g., edge detected pixels) in the source image. The disparity may be filtered ( 26 ) (e.g., low-pass filtered) prior to combining with the significant pixels. The multiple images are combined ( 64 ) into an integrated image ( 62 ) for display, for example, on an autostereoscopic monitor ( 10 ). The process can be repeated for multiple related source images to create a video sequence.
1. A method of generating a plurality of multi-view images of a scene, the method comprising: obtaining a single two-dimensional source image of a scene, the source image comprising a plurality of source pixels (20); and automatically generating at least two multi-view images (28) of the scene from only at least some of the plurality of source pixels, each of the at least two multi-view images having a different viewing direction for the scene. 2. The method of claim 1, further comprising combining (64) the at least two multi-view images (60) into a single integrated image of the scene (62). 3. The method of claim 2, further comprising displaying the single integrated image on a display (10). 4. The method of claim 3, wherein the display comprises an autostereoscopic monitor. 5. The method of claim 1, wherein the automatically generating comprises, for each of the at least two multi-view images (28), generating a disparity (24) for each of the at least some of the plurality of source pixels. 6. The method of claim 5, wherein the disparity comprises weighted values for each of red, blue and green colors. 7. The method of claim 5, wherein the automatically generating further comprises, for each of the at least two multi-view images (28), combining (29) the disparity with each of the at least some of the plurality of source pixels. 8. The method of claim 7, wherein the automatically generating further comprises, prior to the combining, filtering (26) to create a filtered disparity (27), and wherein the combining comprises combining the filtered disparity with each of the at least some of the plurality of source pixels (20). 9. The method of claim 8, wherein the filtering comprises low-pass filtering. 10. The method of claim 1, wherein the automatically generating comprises identifying (35) the at least some of the plurality of source pixels as having at least a predetermined level of relevance. 11. 
The method of claim 10, wherein the identifying comprises edge detection. 12. The method of claim 1, further comprising repeating the obtaining and the automatically generating for a series of related images of the scene to create a video sequence. 13. A computing unit (50), comprising: a memory (56); and a processor (52) in communication with the memory for generating a plurality of multi-view images of a scene according to the method of claim 1. 14. At least one hardware chip for generating a plurality of multi-view images of a scene according to the method of claim 1. 15. The at least one hardware chip of claim 14, wherein the at least one hardware chip comprises a Field Programmable Gate Array chip. 16. A computer program product (40) for generating multi-view images of a scene, the computer program product comprising a storage medium (42) readable by a processing circuit and storing instructions (44) for execution by the processing circuit for performing a method according to claim 1.
From only a single two-dimensional source image ( 20 ) of a scene, multiple images ( 28 ) of the scene are generated, each image from a different viewing direction or angle. For each of the multiple images, a disparity ( 24 ) is generated corresponding to the viewing direction and combined with significant pixels (e.g., edge detected pixels) in the source image. The disparity may be filtered ( 26 ) (e.g., low-pass filtered) prior to combining with the significant pixels. The multiple images are combined ( 64 ) into an integrated image ( 62 ) for display, for example, on an autostereoscopic monitor ( 10 ). The process can be repeated for multiple related source images to create a video sequence.1. A method of generating a plurality of multi-view images of a scene, the method comprising: obtaining a single two-dimensional source image of a scene, the source image comprising a plurality of source pixels (20); and automatically generating at least two multi-view images (28) of the scene from only at least some of the plurality of source pixels, each of the at least two multi-view images having a different viewing direction for the scene. 2. The method of claim 1, further comprising combining (64) the at least two multi-view images (60) into a single integrated image of the scene (62). 3. The method of claim 2, further comprising displaying the single integrated image on a display (10). 4. The method of claim 3, wherein the display comprises an autostereoscopic monitor. 5. The method of claim 1, wherein the automatically generating comprises, for each of the at least two multi-view images (28), generating a disparity (24) for each of the at least some of the plurality of source pixels. 6. The method of claim 5, wherein the disparity comprises weighted values for each of red, blue and green colors. 7. 
The method of claim 5, wherein the automatically generating further comprises, for each of the at least two multi-view images (28), combining (29) the disparity with each of the at least some of the plurality of source pixels. 8. The method of claim 7, wherein the automatically generating further comprises, prior to the combining, filtering (26) to create a filtered disparity (27), and wherein the combining comprises combining the filtered disparity with each of the at least some of the plurality of source pixels (20). 9. The method of claim 8, wherein the filtering comprises low-pass filtering. 10. The method of claim 1, wherein the automatically generating comprises identifying (35) the at least some of the plurality of source pixels as having at least a predetermined level of relevance. 11. The method of claim 10, wherein the identifying comprises edge detection. 12. The method of claim 1, further comprising repeating the obtaining and the automatically generating for a series of related images of the scene to create a video sequence. 13. A computing unit (50), comprising: a memory (56); and a processor (52) in communication with the memory for generating a plurality of multi-view images of a scene according to the method of claim 1. 14. At least one hardware chip for generating a plurality of multi-view images of a scene according to the method of claim 1. 15. The at least one hardware chip of claim 14, wherein the at least one hardware chip comprises a Field Programmable Gate Array chip. 16. A computer program product (40) for generating multi-view images of a scene, the computer program product comprising a storage medium (42) readable by a processing circuit and storing instructions (44) for execution by the processing circuit for performing a method according to claim 1.
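A deliberately simplified, hypothetical sketch of the claimed pipeline: a uniform per-view disparity stands in for the per-pixel, edge-based disparity of claims 5 through 11 (and the low-pass filtering is omitted), but the two stages, generating shifted views from one source image and interleaving them into a single integrated image for an autostereoscopic panel, mirror claims 1 and 2.

```python
import numpy as np

def generate_views(source, n_views, max_shift=4):
    """Generate n_views copies of a single 2-D source image, each shifted
    horizontally by a disparity proportional to its viewing direction."""
    views = []
    for v in range(n_views):
        # disparity grows linearly with distance from the center view
        d = int(round((v - (n_views - 1) / 2) * max_shift / max(n_views - 1, 1)))
        views.append(np.roll(source, d, axis=1))
    return views

def integrate_views(views):
    """Interleave the views column by column, a layout a simple
    lenticular/autostereoscopic panel expects: output column c is taken
    from view c mod n."""
    n = len(views)
    out = np.empty_like(views[0])
    for c in range(out.shape[1]):
        out[:, c] = views[c % n][:, c]
    return out
```

Repeating both steps over consecutive frames of a source sequence yields the video case of claim 12.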
2,600
10,149
10,149
14,641,135
2,641
Systems and methods for improving performance of a radio frequency system are provided. One embodiment describes a radio frequency system, which may be modified based upon a detected housing and/or accessory of an electronic device. The modifications may counteract impacts of the housings and/or accessories on the radio frequency transmission.
1. An electronic device, comprising: a radio frequency system, configured to communicate with a radio frequency reader; tangible, non-transitory storage, comprising configuration adjustment logic that comprises machine-readable instructions that associate one or more settings for the radio frequency system with one or more electronic device housings, one or more electronic device accessories, or both; and a processor configured to: determine at least one identity, characteristic, or both of at least one housing of the electronic device, at least one proximate accessory of the electronic device, or both; receive at least one particular setting from the one or more settings that is associated with the identity, the characteristic, or both; and apply the at least one particular setting to the radio frequency system. 2. The electronic device of claim 1, wherein the processor is configured to: re-determine the at least one identity, characteristic, or both; re-receive the at least one particular setting associated with re-determined identity, characteristic, or both; and apply the at least one re-received particular setting. 3. The electronic device of claim 2, wherein the processor is configured to re-determine the at least one identity, characteristic, or both upon determining that the at least one proximate accessory is replaced or removed. 4. The electronic device of claim 2, wherein the processor is configured to re-determine the at least one identity, characteristic, or both upon power up of the electronic device. 5. The electronic device of claim 2, wherein the processor is configured to re-determine the at least one identity, characteristic, or both at a periodic interval. 6. The electronic device of claim 1, wherein the configuration adjustment logic comprises a lookup table (LUT). 7. The electronic device of claim 1, wherein the one or more settings comprise a load modulation amplitude (LMA) setting. 8. 
The electronic device of claim 1, wherein the one or more settings comprise an automatic power control (APC) setting. 9. The electronic device of claim 1, wherein the one or more settings comprise a frame delay time (FDT) setting. 10. The electronic device of claim 1, wherein the one or more settings comprise a frame delay time (FDT) setting. 11. The electronic device of claim 1, wherein the at least one characteristic comprises a size or shape of at least one of the one or more electronic device housings. 12. The electronic device of claim 1, wherein the at least one characteristic comprises a material of at least one of the one or more electronic device housings. 13. The electronic device of claim 1, wherein the at least one characteristic comprises a size, shape, style, density, or combination thereof of at least one of the one or more electronic device accessories. 14. The electronic device of claim 1, wherein the at least one characteristic comprises a material of at least one of the one or more electronic device accessories. 15. The electronic device of claim 1, wherein the electronic device comprises a watch. 16. The electronic device of claim 15, wherein at least one of the one or more electronic device accessories comprises an interchangeable watch band. 17. The electronic device of claim 1, wherein at least one of the one or more electronic device accessories comprises an interchangeable protective case, cover, or both. 18. The electronic device of claim 1, wherein at least one of the one or more electronic device accessories comprises a removable keyboard. 19. The electronic device of claim 1, wherein the processor is configured to: determine the at least one identity, characteristic, or both of the at least one housing of the electronic device based upon information that is statically stored in the electronic device's firmware or other storage. 20. 
The electronic device of claim 1, wherein the processor is configured to: determine the at least one identity, characteristic, or both of the at least one proximate accessory of the electronic device based upon a signal provided by the at least one proximate accessory to the electronic device. 21. The electronic device of claim 20, wherein the processor is configured to determine the at least one identity, characteristic, or both via a manual entry provided via a graphical user interface associated with the electronic device. 22. A processor-implemented method, comprising: determining at least one identity, characteristic, or both of at least one housing of an electronic device, at least one proximate accessory of the electronic device, or both; querying for and receiving at least one particular setting associated with the identity, the characteristic, or both from a set of settings associated with one or more housings, one or more accessories, or both, wherein the particular setting is configured to counteract an impact of the one or more housings, one or more accessories, or both; and applying the at least one particular setting to a radio frequency system of the electronic device. 23. The processor-implemented method of claim 22, wherein the method is executed each time the electronic device is powered on. 24. The processor-implemented method of claim 22, comprising: detecting an accessory modification of the electronic device; wherein the method is executed each time the accessory modification is detected. 25. The processor-implemented method of claim 22, wherein the set of settings comprise one or more settings associated with a model identifier of the electronic device, the at least one housing, the at least one proximate accessory, or any combination thereof. 26. 
The processor-implemented method of claim 22, wherein the at least one characteristic comprises a material of the housing, comprising: gold, stainless steel, aluminum, ceramic, or any combination thereof. 27. The processor-implemented method of claim 22, wherein the at least one characteristic comprises a material of a watch band, comprising: leather, rubber, metal, or any combination thereof. 28. The processor-implemented method of claim 22, wherein the particular setting comprises a load modulation amplitude (LMA) setting, an automatic power control (APC) setting, a phase adjustment, a frame delay time (FDT) setting, or any combination thereof. 29. A radio frequency system, configured to: receive one or more particular settings associated with at least one housing, at least one accessory, or both of an electronic device associated with the radio frequency system, wherein the one or more particular settings are configured to counteract an impact of the at least one housing, at least one accessory, or both on communications of the radio frequency system; and communicate with a radio frequency reader, using the one or more particular settings. 30. 
A tangible, non-transitory, machine-readable medium, comprising machine-readable instructions to: determine one or more housings, proximate accessories, or both associated with a current electronic device; determine one or more settings, field strengths, or both of a radio frequency system of the current electronic device; perform an interoperability test between the radio frequency system and at least one radio frequency reader to determine whether or not a threshold performance has been reached; if the threshold performance has not been reached: modify at least one of the one or more settings; and re-run the interoperability test, repeating these steps until the threshold performance has been reached; once the threshold performance has been reached: store the one or more settings and associate them with the one or more housings, proximate accessories, or both; and provide the one or more settings associated with the one or more housings, proximate accessories, or both to the electronic device. 31. The machine-readable medium of claim 30, comprising machine readable instructions to repeat each of the machine-readable instructions of claim 30 until all desired housing and accessory combinations are stored with associated settings.
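Claims 1 and 6 describe storing per-housing/per-accessory radio settings in a lookup table (LUT). A minimal sketch of that lookup, with entirely hypothetical keys and values; in practice the LMA/APC/FDT numbers would come from the interoperability calibration loop of claim 30.

```python
# Hypothetical LUT keyed on (housing, accessory); every entry below is an
# invented illustration, not a value from the patent.
SETTINGS_LUT = {
    ("aluminum", "leather_band"): {"lma": 0.8, "apc": 1, "fdt_us": 86},
    ("stainless", "metal_band"):  {"lma": 1.0, "apc": 2, "fdt_us": 91},
}
# fallback when the detected combination has no stored calibration
DEFAULT_SETTINGS = {"lma": 0.9, "apc": 1, "fdt_us": 88}

def settings_for(housing, accessory):
    """Return the stored radio settings for this housing/accessory pair,
    falling back to the default when the combination is unknown."""
    return SETTINGS_LUT.get((housing, accessory), DEFAULT_SETTINGS)
```

Per claims 2 through 5, the lookup would be re-run on power-up, on a periodic interval, or whenever the accessory is replaced or removed, and the returned settings re-applied to the radio frequency system.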
Systems and methods for improving performance of a radio frequency system are provided. One embodiment describes a radio frequency system, which may be modified based upon a detected housing and/or accessory of an electronic device. The modifications may counteract impacts of the housings and/or accessories on the radio frequency transmission. 1. An electronic device, comprising: a radio frequency system, configured to communicate with a radio frequency reader; tangible, non-transitory storage, comprising configuration adjustment logic that comprises machine-readable instructions that associate one or more settings for the radio frequency system with one or more electronic device housings, one or more electronic device accessories, or both; and a processor configured to: determine at least one identity, characteristic, or both of at least one housing of the electronic device, at least one proximate accessory of the electronic device, or both; receive at least one particular setting from the one or more settings that is associated with the identity, the characteristic, or both; and apply the at least one particular setting to the radio frequency system. 2. The electronic device of claim 1, wherein the processor is configured to: re-determine the at least one identity, characteristic, or both; re-receive the at least one particular setting associated with the re-determined identity, characteristic, or both; and apply the at least one re-received particular setting. 3. The electronic device of claim 2, wherein the processor is configured to re-determine the at least one identity, characteristic, or both upon determining that the at least one proximate accessory is replaced or removed. 4. The electronic device of claim 2, wherein the processor is configured to re-determine the at least one identity, characteristic, or both upon power up of the electronic device. 5. 
The electronic device of claim 2, wherein the processor is configured to re-determine the at least one identity, characteristic, or both at a periodic interval. 6. The electronic device of claim 1, wherein the configuration adjustment logic comprises a lookup table (LUT). 7. The electronic device of claim 1, wherein the one or more settings comprise a load modulation amplitude (LMA) setting. 8. The electronic device of claim 1, wherein the one or more settings comprise an automatic power control (APC) setting. 9. The electronic device of claim 1, wherein the one or more settings comprise a frame delay time (FDT) setting. 10. The electronic device of claim 1, wherein the one or more settings comprise a frame delay time (FDT) setting. 11. The electronic device of claim 1, wherein the at least one characteristic comprises a size or shape of at least one of the one or more electronic device housings. 12. The electronic device of claim 1, wherein the at least one characteristic comprises a material of at least one of the one or more electronic device housings. 13. The electronic device of claim 1, wherein the at least one characteristic comprises a size, shape, style, density, or combination thereof of at least one of the one or more electronic device accessories. 14. The electronic device of claim 1, wherein the at least one characteristic comprises a material of at least one of the one or more electronic device accessories. 15. The electronic device of claim 1, wherein the electronic device comprises a watch. 16. The electronic device of claim 15, wherein at least one of the one or more electronic device accessories comprises an interchangeable watch band. 17. The electronic device of claim 1, wherein at least one of the one or more electronic device accessories comprises an interchangeable protective case, cover, or both. 18. The electronic device of claim 1, wherein at least one of the one or more electronic device accessories comprises a removable keyboard. 19. 
The electronic device of claim 1, wherein the processor is configured to: determine the at least one identity, characteristic, or both of the at least one housing of the electronic device based upon information that is statically stored in the electronic device's firmware or other storage. 20. The electronic device of claim 1, wherein the processor is configured to: determine the at least one identity, characteristic, or both of the at least one proximate accessory of the electronic device based upon a signal provided by the at least one proximate accessory to the electronic device. 21. The electronic device of claim 20, wherein the processor is configured to determine the at least one identity, characteristic, or both via a manual entry provided via a graphical user interface associated with the electronic device. 22. A processor-implemented method, comprising: determining at least one identity, characteristic, or both of at least one housing of an electronic device, at least one proximate accessory of the electronic device, or both; querying for and receiving at least one particular setting associated with the identity, the characteristic, or both from a set of settings associated with one or more housings, one or more accessories, or both, wherein the particular setting is configured to counteract an impact of the one or more housings, one or more accessories, or both; and applying the at least one particular setting to a radio frequency system of the electronic device. 23. The processor-implemented method of claim 22, wherein the method is executed each time the electronic device is powered on. 24. The processor-implemented method of claim 22, comprising: detecting an accessory modification of the electronic device; wherein the method is executed each time the accessory modification is detected. 25. 
The processor-implemented method of claim 22, wherein the set of settings comprises one or more settings associated with a model identifier of the electronic device, the at least one housing, the at least one proximate accessory, or any combination thereof. 26. The processor-implemented method of claim 22, wherein the at least one characteristic comprises a material of the housing, comprising: gold, stainless steel, aluminum, ceramic, or any combination thereof. 27. The processor-implemented method of claim 22, wherein the at least one characteristic comprises a material of a watch band, comprising: leather, rubber, metal, or any combination thereof. 28. The processor-implemented method of claim 22, wherein the particular setting comprises a load modulation amplitude (LMA) setting, an automatic power control (APC) setting, a phase adjustment, a frame delay time (FDT) setting, or any combination thereof. 29. A radio frequency system, configured to: receive one or more particular settings associated with at least one housing, at least one accessory, or both of an electronic device associated with the radio frequency system, wherein the one or more particular settings are configured to counteract an impact of the at least one housing, at least one accessory, or both on communications of the radio frequency system; and communicate with a radio frequency reader, using the one or more particular settings. 30. 
A tangible, non-transitory, machine-readable medium, comprising machine-readable instructions to: determine one or more housings, proximate accessories, or both associated with a current electronic device; determine one or more settings, field strengths, or both of a radio frequency system of the current electronic device; perform an interoperability test between the radio frequency system and at least one radio frequency reader to determine whether or not a threshold performance has been reached; if the threshold performance has not been reached: modify at least one of the one or more settings; and re-run the interoperability test, repeating these steps until the threshold performance has been reached; once the threshold performance has been reached: store the one or more settings and associate them with the one or more housings, proximate accessories, or both; and provide the one or more settings associated with the one or more housings, proximate accessories, or both to the electronic device. 31. The machine-readable medium of claim 30, comprising machine-readable instructions to repeat each of the machine-readable instructions of claim 30 until all desired housing and accessory combinations are stored with associated settings.
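The machine-readable instructions of claim 30 describe a closed calibration loop: run an interoperability test, modify at least one setting if the threshold performance has not been reached, and repeat. A hedged sketch, in which `run_test` and `modify` are hypothetical callables standing in for the real interoperability test and setting-adjustment logic:

```python
def calibrate(run_test, modify, settings, threshold, max_iters=50):
    """Repeat the interoperability test, modifying the settings each
    pass, until the threshold performance has been reached."""
    for _ in range(max_iters):
        if run_test(settings) >= threshold:
            # Threshold reached: these settings would then be stored and
            # associated with the housing/accessory combination.
            return settings
        settings = modify(settings)
    raise RuntimeError("threshold performance not reached")
```

For example, with a simulated test whose score equals a transmit-power setting, `calibrate(lambda s: s["power"], lambda s: {"power": s["power"] + 1}, {"power": 1}, threshold=3)` stops once the score reaches 3.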
2,600
10,150
10,150
15,265,330
2,683
A system and method of displaying optimized aircraft energy level to a flight crew includes processing flight plan data, in a processor, to determine the optimized aircraft energy level along a descent profile of the aircraft from cruise altitude down to aircraft destination, and continuously processing aircraft data, in the processor, to continuously determine, in real-time, an actual aircraft energy level. The actual aircraft energy level of the aircraft is continuously compared, in the processor, to the optimized aircraft energy level. The processor is used to command a display device to render an image that indicates: (i) the optimized aircraft energy level, (ii) how the actual aircraft energy level differs from the optimized aircraft energy level, and (iii) how the actual aircraft energy level is trending relative to the optimized aircraft energy level.
1. A method of displaying optimized aircraft energy level to a flight crew, the method comprising the steps of: processing flight plan data, in a processor, to determine the optimized aircraft energy level along a descent profile of the aircraft from cruise altitude down to aircraft destination; continuously processing aircraft data, in the processor, to continuously determine, in real-time, an actual aircraft energy level; continuously comparing, in the processor, the actual aircraft energy level of the aircraft to the optimized aircraft energy level; and commanding, using the processor, a display device to render an image that indicates: (i) the optimized aircraft energy level, (ii) how the actual aircraft energy level differs from the optimized aircraft energy level, and (iii) how the actual aircraft energy level is trending relative to the optimized aircraft energy level. 2. The method of claim 1, further comprising: commanding, using the processor, the display device to render indicia of one or more thresholds to indicate when corrective action at least could be taken to correct how the actual aircraft energy level is trending relative to the optimized aircraft energy level. 3. The method of claim 2, further comprising: commanding, using the processor, the display device to render one or more alerts when the actual aircraft energy level reaches the one or more thresholds. 4. The method of claim 1, further comprising: determining, in the processor, one or more flight path constraints along the descent profile; and determining, in the processor, a criticality level based on a likelihood of the aircraft meeting the one or more flight constraints at the actual aircraft energy; and commanding, using the processor, the display device to render the image based on the determined criticality level and how the determined criticality level is trending. 5. 
The method of claim 1, wherein the step of processing the flight plan data comprises: processing optimized aircraft speeds along the descent profile to determine optimized aircraft kinetic energy levels along the descent profile; processing optimized aircraft altitudes along the descent profile to determine optimized aircraft potential energy levels along the descent profile; and summing the optimized kinetic energy levels and the optimized potential energy levels to determine the optimized aircraft energy level along the descent profile. 6. The method of claim 1, further comprising: continuously determining, in the processor, actual aircraft kinetic energy level and actual aircraft potential energy level; and continuously summing the actual aircraft kinetic energy level and the actual aircraft potential energy level to determine the actual aircraft energy level. 7. The method of claim 6, further comprising: sensing aircraft speed; sensing aircraft altitude; continuously processing the sensed aircraft speed, in the processor, to determine the actual aircraft kinetic energy level; and continuously processing the sensed aircraft altitude, in the processor, to determine the actual aircraft potential energy level. 8. The method of claim 1, further comprising: commanding, using the processor, the display device to render one or more visual cues that indicate actions the flight crew could take to converge the actual aircraft energy level toward the optimized aircraft energy level. 9. The method of claim 1, further comprising: automatically supplying commands, using the processor, to one or more aircraft control systems that will cause the actual aircraft energy level to converge toward the optimized aircraft energy level. 10. 
A system for displaying optimized aircraft energy level to a flight crew, comprising: a display device coupled to receive image rendering display commands and configured, upon receipt thereof, to render various images; and a processor coupled to receive flight plan data and aircraft data and configured to: process the flight plan data to determine the optimized aircraft energy level along a descent profile of the aircraft from cruise altitude down to aircraft destination, continuously process the aircraft data to continuously determine, in real-time, an actual aircraft energy level, continuously compare the actual aircraft energy level of the aircraft to the optimized aircraft energy level, and supply image rendering display commands to the display device that cause the display device to render an image that indicates (i) the optimized aircraft energy level, (ii) how the actual aircraft energy level differs from the optimized aircraft energy level, and (iii) how the actual aircraft energy level is trending relative to the optimized aircraft energy level. 11. The system of claim 10, wherein the processor is further configured to command the display device to render indicia of one or more thresholds to indicate when corrective action at least could be taken to correct how the actual aircraft energy level is trending relative to the optimized aircraft energy level. 12. The system of claim 10, wherein the processor is further configured to command the display device to render one or more alerts when the actual aircraft energy level reaches the one or more thresholds. 13. 
The system of claim 10, wherein the processor is further configured to: process the flight plan data to detect one or more flight path constraints along the descent profile; determine a criticality level based on a likelihood of the aircraft meeting the one or more flight constraints at the actual aircraft energy; and supply the image rendering display commands to the display device that cause the display device to render the image based on the determined criticality level. 14. The system of claim 10, wherein the processor is further configured to: process optimized aircraft speeds along the descent profile to determine optimized aircraft kinetic energy levels along the descent profile; process optimized aircraft altitudes along the descent profile to determine optimized aircraft potential energy levels along the descent profile; and sum the optimized kinetic energy levels and the optimized potential energy levels to determine the optimized aircraft energy level along the descent profile. 15. The system of claim 10, wherein the processor is further configured to: continuously determine actual aircraft kinetic energy level and actual aircraft potential energy level; and continuously sum the actual aircraft kinetic energy level and the actual aircraft potential energy level to determine the actual aircraft energy level. 16. The system of claim 15, further comprising: an aircraft speed sensor configured to sense aircraft speed and supply an aircraft speed signal representative thereof; an aircraft altitude sensor configured to sense aircraft altitude and supply an aircraft altitude signal representative thereof, wherein the processor is coupled to receive the aircraft speed signal and the aircraft altitude signal and is further configured to: continuously process the sensed aircraft speed to determine the actual aircraft kinetic energy level, and continuously process the sensed aircraft altitude to determine the actual aircraft potential energy level. 17. 
The system of claim 10, wherein the processor is further configured to supply image rendering display commands to the display device that cause the display device to render one or more visual cues that indicate actions the flight crew could take to converge the actual aircraft energy level toward the optimized aircraft energy level. 18. The system of claim 10, wherein the processor is further configured to automatically supply commands to one or more aircraft control systems that will cause the actual aircraft energy level to converge toward the optimized aircraft energy level. 19. A system for displaying optimized aircraft energy level to a flight crew, comprising: an aircraft speed sensor configured to sense aircraft speed and supply an aircraft speed signal representative thereof; an aircraft altitude sensor configured to sense aircraft altitude and supply an aircraft altitude signal representative thereof; a display device coupled to receive image rendering display commands and configured, upon receipt thereof, to render various images; and a processor coupled to receive flight plan data, the aircraft speed signal, and the aircraft altitude signal, the processor configured to: process the flight plan data to determine the optimized aircraft energy level along a descent profile of the aircraft from cruise altitude down to aircraft destination, continuously process the sensed aircraft speed to determine the actual aircraft kinetic energy level, continuously process the sensed aircraft altitude to determine the actual aircraft potential energy level, continuously compare the actual aircraft energy level of the aircraft to the optimized aircraft energy level, continuously sum the actual aircraft kinetic energy level and the actual aircraft potential energy level to determine the actual aircraft energy level, and supply image rendering display commands to the display device that cause the display device to: render an image that indicates (i) the optimized aircraft 
energy level, (ii) how the actual aircraft energy level differs from the optimized aircraft energy level, and (iii) how the actual aircraft energy level is trending relative to the optimized aircraft energy level, and render one or more visual cues that indicate actions the flight crew could take to converge the actual aircraft energy level toward the optimized aircraft energy level. 20. The system of claim 19, wherein the processor is further configured to: process the flight plan data to detect one or more flight path constraints along the descent profile; determine a criticality level based on a likelihood of the aircraft meeting the one or more flight constraints at the actual aircraft energy; supply the image rendering display commands to the display device that cause the display device to render the image based on the determined criticality level; process optimized aircraft speeds along the descent profile to determine optimized aircraft kinetic energy levels along the descent profile; process optimized aircraft altitudes along the descent profile to determine optimized aircraft potential energy levels along the descent profile; and sum the optimized kinetic energy levels and the optimized potential energy levels to determine the optimized aircraft energy level along the descent profile.
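Claims 2-4 describe rendering threshold indicia and determining a criticality level from the energy gap. A minimal sketch of that mapping, with purely illustrative threshold values (the patent does not specify any):

```python
def criticality_level(actual_j, optimized_j, caution_j, warning_j):
    """Map the gap between actual and optimized aircraft energy (joules)
    to a criticality level against display thresholds."""
    gap = abs(actual_j - optimized_j)
    if gap >= warning_j:
        return "warning"  # an alert is rendered when this threshold is reached
    if gap >= caution_j:
        return "caution"  # corrective action at least could be taken
    return "normal"
```

The rendered image would then be based on the determined criticality level and on how that level is trending.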
A system and method of displaying optimized aircraft energy level to a flight crew includes processing flight plan data, in a processor, to determine the optimized aircraft energy level along a descent profile of the aircraft from cruise altitude down to aircraft destination, and continuously processing aircraft data, in the processor, to continuously determine, in real-time, an actual aircraft energy level. The actual aircraft energy level of the aircraft is continuously compared, in the processor, to the optimized aircraft energy level. The processor is used to command a display device to render an image that indicates: (i) the optimized aircraft energy level, (ii) how the actual aircraft energy level differs from the optimized aircraft energy level, and (iii) how the actual aircraft energy level is trending relative to the optimized aircraft energy level. 1. A method of displaying optimized aircraft energy level to a flight crew, the method comprising the steps of: processing flight plan data, in a processor, to determine the optimized aircraft energy level along a descent profile of the aircraft from cruise altitude down to aircraft destination; continuously processing aircraft data, in the processor, to continuously determine, in real-time, an actual aircraft energy level; continuously comparing, in the processor, the actual aircraft energy level of the aircraft to the optimized aircraft energy level; and commanding, using the processor, a display device to render an image that indicates: (i) the optimized aircraft energy level, (ii) how the actual aircraft energy level differs from the optimized aircraft energy level, and (iii) how the actual aircraft energy level is trending relative to the optimized aircraft energy level. 2. 
The method of claim 1, further comprising: commanding, using the processor, the display device to render indicia of one or more thresholds to indicate when corrective action at least could be taken to correct how the actual aircraft energy level is trending relative to the optimized aircraft energy level. 3. The method of claim 2, further comprising: commanding, using the processor, the display device to render one or more alerts when the actual aircraft energy level reaches the one or more thresholds. 4. The method of claim 1, further comprising: determining, in the processor, one or more flight path constraints along the descent profile; and determining, in the processor, a criticality level based on a likelihood of the aircraft meeting the one or more flight constraints at the actual aircraft energy; and commanding, using the processor, the display device to render the image based on the determined criticality level and how the determined criticality level is trending. 5. The method of claim 1, wherein the step of processing the flight plan data comprises: processing optimized aircraft speeds along the descent profile to determine optimized aircraft kinetic energy levels along the descent profile; processing optimized aircraft altitudes along the descent profile to determine optimized aircraft potential energy levels along the descent profile; and summing the optimized kinetic energy levels and the optimized potential energy levels to determine the optimized aircraft energy level along the descent profile. 6. The method of claim 1, further comprising: continuously determining, in the processor, actual aircraft kinetic energy level and actual aircraft potential energy level; and continuously summing the actual aircraft kinetic energy level and the actual aircraft potential energy level to determine the actual aircraft energy level. 7. 
The method of claim 6, further comprising: sensing aircraft speed; sensing aircraft altitude; continuously processing the sensed aircraft speed, in the processor, to determine the actual aircraft kinetic energy level; and continuously processing the sensed aircraft altitude, in the processor, to determine the actual aircraft potential energy level. 8. The method of claim 1, further comprising: commanding, using the processor, the display device to render one or more visual cues that indicate actions the flight crew could take to converge the actual aircraft energy level toward the optimized aircraft energy level. 9. The method of claim 1, further comprising: automatically supplying commands, using the processor, to one or more aircraft control systems that will cause the actual aircraft energy level to converge toward the optimized aircraft energy level. 10. A system for displaying optimized aircraft energy level to a flight crew, comprising: a display device coupled to receive image rendering display commands and configured, upon receipt thereof, to render various images; and a processor coupled to receive flight plan data and aircraft data and configured to: process the flight plan data to determine the optimized aircraft energy level along a descent profile of the aircraft from cruise altitude down to aircraft destination, continuously process the aircraft data to continuously determine, in real-time, an actual aircraft energy level, continuously compare the actual aircraft energy level of the aircraft to the optimized aircraft energy level, and supply image rendering display commands to the display device that cause the display device to render an image that indicates (i) the optimized aircraft energy level, (ii) how the actual aircraft energy level differs from the optimized aircraft energy level, and (iii) how the actual aircraft energy level is trending relative to the optimized aircraft energy level. 11. 
The system of claim 10, wherein the processor is further configured to command the display device to render indicia of one or more thresholds to indicate when corrective action at least could be taken to correct how the actual aircraft energy level is trending relative to the optimized aircraft energy level. 12. The system of claim 10, wherein the processor is further configured to command the display device to render one or more alerts when the actual aircraft energy level reaches the one or more thresholds. 13. The system of claim 10, wherein the processor is further configured to: process the flight plan data to detect one or more flight path constraints along the descent profile; determine a criticality level based on a likelihood of the aircraft meeting the one or more flight constraints at the actual aircraft energy; and supply the image rendering display commands to the display device that cause the display device to render the image based on the determined criticality level. 14. The system of claim 10, wherein the processor is further configured to: process optimized aircraft speeds along the descent profile to determine optimized aircraft kinetic energy levels along the descent profile; process optimized aircraft altitudes along the descent profile to determine optimized aircraft potential energy levels along the descent profile; and sum the optimized kinetic energy levels and the optimized potential energy levels to determine the optimized aircraft energy level along the descent profile. 15. The system of claim 10, wherein the processor is further configured to: continuously determine actual aircraft kinetic energy level and actual aircraft potential energy level; and continuously sum the actual aircraft kinetic energy level and the actual aircraft potential energy level to determine the actual aircraft energy level. 16. 
The system of claim 15, further comprising: an aircraft speed sensor configured to sense aircraft speed and supply an aircraft speed signal representative thereof; an aircraft altitude sensor configured to sense aircraft altitude and supply an aircraft altitude signal representative thereof, wherein the processor is coupled to receive the aircraft speed signal and the aircraft altitude signal and is further configured to: continuously process the sensed aircraft speed to determine the actual aircraft kinetic energy level, and continuously process the sensed aircraft altitude to determine the actual aircraft potential energy level. 17. The system of claim 10, wherein the processor is further configured to supply image rendering display commands to the display device that cause the display device to render one or more visual cues that indicate actions the flight crew could take to converge the actual aircraft energy level toward the optimized aircraft energy level. 18. The system of claim 10, wherein the processor is further configured to automatically supply commands to one or more aircraft control systems that will cause the actual aircraft energy level to converge toward the optimized aircraft energy level. 19. 
A system for displaying optimized aircraft energy level to a flight crew, comprising: an aircraft speed sensor configured to sense aircraft speed and supply an aircraft speed signal representative thereof; an aircraft altitude sensor configured to sense aircraft altitude and supply an aircraft altitude signal representative thereof; a display device coupled to receive image rendering display commands and configured, upon receipt thereof, to render various images; and a processor coupled to receive flight plan data, the aircraft speed signal, and the aircraft altitude signal, the processor configured to: process the flight plan data to determine the optimized aircraft energy level along a descent profile of the aircraft from cruise altitude down to aircraft destination, continuously process the sensed aircraft speed to determine the actual aircraft kinetic energy level, continuously process the sensed aircraft altitude to determine the actual aircraft potential energy level, continuously compare the actual aircraft energy level of the aircraft to the optimized aircraft energy level, continuously sum the actual aircraft kinetic energy level and the actual aircraft potential energy level to determine the actual aircraft energy level, and supply image rendering display commands to the display device that cause the display device to: render an image that indicates (i) the optimized aircraft energy level, (ii) how the actual aircraft energy level differs from the optimized aircraft energy level, and (iii) how the actual aircraft energy level is trending relative to the optimized aircraft energy level, and render one or more visual cues that indicate actions the flight crew could take to converge the actual aircraft energy level toward the optimized aircraft energy level. 20. 
The system of claim 19, wherein the processor is further configured to: process the flight plan data to detect one or more flight path constraints along the descent profile; determine a criticality level based on a likelihood of the aircraft meeting the one or more flight constraints at the actual aircraft energy; supply the image rendering display commands to the display device that cause the display device to render the image based on the determined criticality level; process optimized aircraft speeds along the descent profile to determine optimized aircraft kinetic energy levels along the descent profile; process optimized aircraft altitudes along the descent profile to determine optimized aircraft potential energy levels along the descent profile; and sum the optimized kinetic energy levels and the optimized potential energy levels to determine the optimized aircraft energy level along the descent profile.
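The energy computation in claims 5-7 is the classical sum of kinetic and potential energy, plus a trend comparison of the actual level against the optimized profile. A sketch under the assumption of SI units (mass in kg, speed in m/s, altitude in m); the aircraft mass handling is an illustrative simplification:

```python
G = 9.81  # gravitational acceleration, m/s^2

def total_energy_j(mass_kg, speed_mps, altitude_m):
    """Aircraft energy level: kinetic (1/2 m v^2) plus potential (m g h),
    continuously summed as in claims 6 and 7."""
    return 0.5 * mass_kg * speed_mps ** 2 + mass_kg * G * altitude_m

def energy_trend(delta_j, previous_delta_j):
    """How the actual energy level is trending relative to the optimized
    level, from successive (actual - optimized) differences."""
    if abs(delta_j) < abs(previous_delta_j):
        return "converging"
    if abs(delta_j) > abs(previous_delta_j):
        return "diverging"
    return "steady"
```

The display image would indicate the optimized level, the current difference `delta_j`, and the trend string.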
2,600
10,151
10,151
15,244,584
2,672
Methods and systems for using contextual information to generate reports for image studies. One method includes determining contextual information associated with an image study, wherein at least one image included in the image study is loaded in a reporting application. The method also includes automatically selecting, with an electronic processor, a vocabulary for a natural language processing engine based on the contextual information. In addition, the method includes receiving, from a microphone, audio data and processing the audio data with the natural language processing engine using the vocabulary to generate data for a report for the image study generated using the reporting application.
1. A method of generating reports for image studies, the method comprising: determining contextual information associated with an image study, at least one image included in the image study being loaded in a reporting application; automatically selecting, with an electronic processor, a vocabulary for a natural language processing engine based on the contextual information; receiving, from a microphone, audio data; and processing the audio data with the natural language processing engine using the vocabulary to generate data for a report for the image study generated using the reporting application. 2. The method of claim 1, further comprising: automatically selecting, with the electronic processor, one or more discrete data elements included in the report based on the contextual information; and prompting a reviewer for a value for each of the one or more discrete data elements. 3. The method of claim 2, further comprising prompting the reviewer to accept or reject each of the one or more discrete data elements. 4. The method of claim 1, wherein determining the contextual information includes processing the image study to determine at least one selected from a group consisting of a type of the at least one image, a number of images included in the image study, a portion of the at least one image displayed within the reporting application, header information for the at least one image, and a hanging protocol associated with the image study. 5. The method of claim 1, wherein determining the contextual information includes automatically identifying an anatomical structure represented in the at least one image and determining the contextual information based on the anatomical structure. 6. The method of claim 1, wherein determining the contextual information includes determining at least one of an annotation created by a reviewer within the reporting application for the image study and a position of a cursor within the reporting application. 7. 
The method of claim 1, wherein determining the contextual information includes determining a current focus of a reviewer within the at least one image displayed within the reporting application using eye tracking. 8. The method of claim 1, wherein determining the contextual information includes accessing at least one of patient information for a patient associated with the image study and an order associated with the image study. 9. The method of claim 1, wherein determining the contextual information includes determining at least one of a discrete data element previously selected by a reviewer for the report and a value previously specified by the reviewer for the report. 10. The method of claim 1, wherein processing the audio data with the natural language processing engine using the vocabulary to generate the data for the report includes processing the audio data to populate a data element of a structured report. 11. The method of claim 1, further comprising automatically selecting a report parameter for the report based on the contextual information, wherein the report parameter includes at least one selected from a group consisting of an annotation tool, a display option, and a report format. 12. A system for generating a structured report for data, the system comprising: an electronic processor configured to: determine contextual information associated with the data; automatically select a vocabulary for a natural language processing engine based on the contextual information; receive audio data from a microphone; and process the audio data with the natural language processing engine using the vocabulary to generate the structured report for the data. 13. 
The system of claim 12, wherein the data includes at least one image and wherein the contextual information includes at least one selected from a group consisting of a type of the at least one image, a number of images included in an image study including the at least one image, a portion of the at least one image displayed within the reporting application, header information for the at least one image, and a hanging protocol associated with the at least one image. 14. The system of claim 12, wherein the data includes at least one image and wherein the contextual information includes an anatomical structure represented in the at least one image. 15. The system of claim 12, wherein the contextual information includes at least one of patient information for a patient associated with the data and an order associated with the data. 16. Non-transitory computer-readable medium including instructions that, when executed by an electronic processor, perform a set of functions, the set of functions comprising: determining contextual information associated with at least one image loaded in a reporting application; automatically selecting a vocabulary for a natural language processing engine based on the contextual information; receiving audio data; and processing the audio data with the natural language processing engine using the vocabulary to generate data for a report for the at least one image generated using the reporting application. 17. The computer-readable medium of claim 16, wherein the contextual information includes at least one selected from a group consisting of a type of the at least one image, a number of images included in an image study including the at least one image, a portion of the at least one image displayed within the reporting application, header information for the at least one image, and a hanging protocol associated with the at least one image. 18. The computer-readable medium of claim 16, wherein the report includes a structured report.
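The context-driven vocabulary selection claimed above can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: the `VOCABULARIES` mapping, the anatomy-keyed context, and all function names are assumptions chosen for clarity.

```python
# Hypothetical sketch of context-driven vocabulary selection for dictated reports.
# The vocabulary table and the "anatomy" context key are illustrative assumptions.

VOCABULARIES = {
    "chest": {"pneumothorax", "effusion", "consolidation"},
    "head": {"hemorrhage", "midline shift", "infarct"},
    "default": {"normal", "unremarkable"},
}

def determine_context(image_study):
    """Derive contextual information from the loaded study (here, its anatomy)."""
    anatomy = image_study.get("anatomy", "default")
    return anatomy if anatomy in VOCABULARIES else "default"

def select_vocabulary(image_study):
    """Automatically pick the NLP vocabulary based on the study's context."""
    return VOCABULARIES[determine_context(image_study)]

def process_audio(transcribed_words, vocabulary):
    """Keep only terms the active vocabulary recognizes, for the report data."""
    return [word for word in transcribed_words if word in vocabulary]

# A chest study narrows the engine to chest terms; off-vocabulary words are dropped.
study = {"anatomy": "chest", "image_count": 120}
vocab = select_vocabulary(study)
report_terms = process_audio(["effusion", "hemorrhage", "consolidation"], vocab)
```

In this sketch the context is a single anatomy label; the claims allow far richer context (header data, hanging protocol, cursor position, eye tracking), which would simply widen what `determine_context` inspects.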
2,600
10,152
10,152
15,439,385
2,632
A power door system for a vehicle including a plurality of power-operated doors is provided. The system includes a passive remote entry device configured to emit a signal to one or more cooperating vehicle receivers and a controller configured to cause the power door system to open at least one power-operated door when the passive remote entry device signal is received and an individual is detected in a predefined activation zone for a predetermined period of time. The system further includes a vehicle-mounted user detection device. The passive remote entry device may be selected from the group consisting of a key fob, a smart key, a key card, a cellular telephone or smartphone configured with a phone-as-a-key function, a Bluetooth®-activated and vehicle-recognized cellular telephone, a Bluetooth®-activated and vehicle-recognized smartphone, and a Bluetooth®-activated and vehicle-recognized smartwatch. Methods for controlling a power door system for a vehicle are described.
1. In a vehicle including at least one power-operated door, a control system comprising: a passive remote entry device configured to emit a signal to one or more cooperating vehicle receivers; at least one vehicle-mounted user detection system; and a controller configured to cause the power door system to perform a predetermined power-operated door opening sequence when the passive remote entry device signal is received and an individual is detected in a predefined activation zone for a predetermined period of time; wherein the predetermined power-operated door opening sequence is selected from the group consisting of: opening all of the power-operated doors; opening only the power-operated doors located on a side of the vehicle nearest the passive remote entry device; opening only the power-operated door nearest the passive remote entry device; opening any power-operated door adjacent to a detected individual; and combinations thereof. 2. (canceled) 3. The control system of claim 2, wherein the at least one vehicle-mounted user detection system comprises devices selected from the group consisting of at least one vehicle-mounted proximity and/or presence sensor, at least one vehicle-mounted imager, at least one ultrasonic sensor-based gesture reading device, and combinations thereof. 4. The control system of claim 1, wherein the passive remote entry device is selected from the group consisting of a key fob, a smart key, a key card, a cellular telephone or smartphone configured with a phone-as-a-key function, a Bluetooth®-activated and vehicle-recognized cellular telephone, a Bluetooth®-activated and vehicle-recognized smartphone, and a Bluetooth®-activated and vehicle-recognized smartwatch. 5. The control system of claim 1, wherein the predetermined period of time is from about 300 milliseconds to about 2 seconds. 6. 
The control system of claim 2, wherein the predefined activation zone is defined by an operative range of the at least one vehicle-mounted user detection system. 7. (canceled) 8. The control system of claim 3, wherein the controller is further configured to authenticate one or more individuals attempting to gain entry to the vehicle. 9. The control system of claim 8, wherein the controller is further configured to authenticate the one or more individuals attempting to gain entry to the vehicle by one or more of a determination of an authorized passive remote entry device, an image analysis and a gesture analysis. 10. The control system of claim 9, wherein the controller is further configured to authenticate the one or more individuals attempting to gain entry to the vehicle by one or more of a determination of an authorized passive remote entry device identification code, a determination of a predefined gesture pattern provided by the one or more individuals attempting to gain entry to the vehicle, a facial recognition analysis of one or more images taken of the one or more individuals attempting to gain entry to the vehicle, a gait analysis of one or more images taken of the one or more individuals attempting to gain entry to the vehicle, or a clothing analysis of one or more images taken of the one or more individuals attempting to gain entry to the vehicle. 11. 
A method for controlling a vehicle power door system comprising at least one power-operated door, comprising: providing a passive remote entry device capable of emitting a signal to one or more cooperating vehicle receivers; providing at least one vehicle-mounted user detection system; and providing a controller configured to cause the power door system to perform a predetermined power-operated door opening sequence when the passive remote entry device signal is received and an individual is detected in a predefined activation zone for a predetermined period of time; further including, by the controller, selecting the predetermined power-operated door opening sequence from the group consisting of: opening all of the power-operated doors; opening only the power-operated doors located on a side of the vehicle nearest the passive remote entry device; opening only the power-operated door nearest the passive remote entry device; opening any power-operated door adjacent to a detected individual; and combinations thereof. 12. (canceled) 13. The method of claim 12, including providing the at least one vehicle-mounted user detection system comprising devices selected from the group consisting of at least one vehicle-mounted proximity and/or presence sensor, at least one vehicle-mounted imager, at least one ultrasonic sensor-based gesture reading device, and combinations thereof. 14. The method of claim 11, including selecting the passive remote entry device from the group consisting of a key fob, a smart key, a key card, a cellular telephone or smartphone configured with a phone-as-a-key function, a Bluetooth®-activated and vehicle-recognized cellular telephone, a Bluetooth®-activated and vehicle-recognized smartphone, and a Bluetooth®-activated and vehicle-recognized smartwatch. 15. 
The method of claim 11, including configuring the controller to cause the vehicle power door system to open the one or more of the at least one power-operated door if the individual is detected in the predefined activation zone for from about 300 milliseconds to about 2 seconds. 16. The method of claim 12, including defining the predefined activation zone as an operative range of the at least one vehicle-mounted user detection system. 17. (canceled) 18. The method of claim 13, further including configuring the controller to authenticate one or more individuals attempting to gain entry to the vehicle. 19. The method of claim 18, further including configuring the controller to authenticate the one or more individuals attempting to gain entry to the vehicle by one or more of a determination of an authorized passive remote entry device, an image analysis and a gesture analysis. 20. The method of claim 19, including further configuring the controller to authenticate the one or more individuals attempting to gain entry to the vehicle by one or more of a determination of an authorized passive remote entry device identification code, a determination of a predefined gesture pattern provided by the one or more individuals attempting to gain entry to the vehicle, a facial recognition analysis of one or more images taken of the one or more individuals attempting to gain entry to the vehicle, a gait analysis of one or more images taken of the one or more individuals attempting to gain entry to the vehicle, or a clothing analysis of one or more images taken of the one or more individuals attempting to gain entry to the vehicle.
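The claimed activation gate (fob signal plus a roughly 300 ms to 2 s dwell in the activation zone) and the selectable door-opening sequences can be sketched as follows. This is a minimal illustration; the constants, door records, and sequence names ("all", "side", "nearest") are assumptions, not terminology from the patent.

```python
# Illustrative sketch of the dwell-time gate and door-selection sequences.
# DWELL_MIN_MS/DWELL_MAX_MS mirror the claimed ~300 ms to ~2 s window.

DWELL_MIN_MS = 300
DWELL_MAX_MS = 2000

def should_activate(fob_signal_received, dwell_ms):
    """Activate only when the fob signal is present and the individual
    has lingered in the activation zone for the claimed time window."""
    return fob_signal_received and DWELL_MIN_MS <= dwell_ms <= DWELL_MAX_MS

def doors_to_open(sequence, fob_side, doors):
    """Select doors per the chosen opening sequence."""
    if sequence == "all":
        return doors
    if sequence == "side":           # doors on the side nearest the fob
        return [d for d in doors if d["side"] == fob_side]
    if sequence == "nearest":        # single door nearest the fob
        return [min(doors, key=lambda d: d["distance_m"])]
    return []

doors = [
    {"name": "front-left", "side": "left", "distance_m": 1.2},
    {"name": "rear-left", "side": "left", "distance_m": 0.8},
    {"name": "front-right", "side": "right", "distance_m": 3.1},
]

opened = []
if should_activate(fob_signal_received=True, dwell_ms=650):
    opened = doors_to_open("side", "left", doors)
```

The dwell-time requirement is what keeps a person merely walking past the vehicle from triggering the doors; only a deliberate pause inside the zone satisfies the gate.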
2,600
10,153
10,153
16,167,394
2,651
Ear buds are provided that communicate wirelessly with an electronic device. To determine the current status of the ear buds and thereby take suitable action in controlling the operation of the electronic device and ear buds, the ear buds may be provided with sensor circuitry. The sensor circuitry may include proximity sensors. The ear buds may each have a housing with a main body portion that is configured to be inserted into the ear of the user and an elongated stem portion that extends from the main body portion. The proximity sensors may include sensors on the main body and sensors on the stem. The proximity sensors may be light-based sensors that emit light that passes through the housing.
1. A wireless ear bud, comprising: a housing having a main body portion and a stem portion extending from the main body portion; a speaker in the main body portion; a first sensor in the main body portion that produces a first sensor output; a second sensor in the stem portion that produces a second sensor output; and control circuitry that: determines whether the ear bud has been placed in a user's ear using the first and second sensor outputs; and determines whether the ear bud has been removed from the user's ear using the first sensor output without using the second sensor output. 2. The wireless ear bud defined in claim 1 further comprising a third sensor in the main body portion that produces a third sensor output. 3. The wireless ear bud defined in claim 2 further comprising a fourth sensor in the stem portion that produces a fourth sensor output. 4. The wireless ear bud defined in claim 3 wherein the control circuitry determines whether the ear bud has been removed from the user's ear using the third sensor output without using the fourth sensor output. 5. The wireless ear bud defined in claim 4 wherein the control circuitry determines whether the ear bud has been placed in the user's ear using the third and fourth sensor outputs. 6. The wireless ear bud defined in claim 5 wherein the first sensor comprises a tragus sensor and the third sensor comprises a concha sensor. 7. The wireless ear bud defined in claim 1 wherein the concha and tragus sensors comprise light-based proximity sensors. 8. The wireless ear bud defined in claim 7 wherein the concha and tragus sensors each have an infrared light-emitting diode and a light detector. 9. The wireless ear bud defined in claim 8 wherein the housing comprises a wall and wherein the infrared light-emitting diodes in the concha and tragus sensors emit infrared light that passes through the wall. 10. The wireless ear bud defined in claim 1 further comprising an accelerometer that detects movement of the housing. 11. 
An ear bud, comprising: control circuitry; wireless circuitry that the control circuitry uses to communicate wirelessly with an electronic device; a housing having a first portion that is configured to be inserted into an ear of a user and a second portion that extends from the first portion; a speaker in the first portion; a first proximity sensor in the first portion; and a second proximity sensor in the second portion, wherein the control circuitry determines that the ear bud has been placed in a user's ear when the first proximity sensor is covered and when the second proximity sensor is uncovered. 12. The ear bud defined in claim 11 wherein the first and second proximity sensors are light-based proximity sensors. 13. The ear bud defined in claim 12 wherein the first and second proximity sensors each include an infrared light-emitting diode that produces infrared light that passes through the housing. 14. The ear bud defined in claim 13 wherein the second portion of the housing comprises an elongated stem. 15. The ear bud defined in claim 11 wherein the first proximity sensor produces a first output and the second proximity sensor produces a second output, and wherein the control circuitry determines whether the ear bud has been removed from the user's ears using the first output without using the second output. 16. A wireless ear bud, comprising: a housing; a speaker in the housing; a first light-based sensor that produces a first output indicating whether the first light-based sensor is covered or uncovered; a second light-based sensor that produces a second output indicating whether the second light-based sensor is covered or uncovered; and control circuitry that determines that the ear bud has been placed in a user's ear when the first light-based sensor is covered and when the second light-based sensor is uncovered. 17. 
The wireless ear bud defined in claim 16 wherein the control circuitry determines that the ear bud has been removed from the user's ear using the first output without using the second output. 18. The wireless ear bud defined in claim 16 wherein the housing comprises a main body portion and a stem portion. 19. The wireless ear bud defined in claim 18 wherein the first light-based sensor is located in the main body portion and the second light-based sensor is located in the stem portion. 20. The wireless ear bud defined in claim 19 wherein the first light-based sensor comprises a sensor selected from the group consisting of: a tragus sensor and a concha sensor.
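The asymmetric detection logic in these claims (in-ear is decided from both sensors, removal from the body sensor alone) can be sketched briefly. This is a simplified illustration assuming each proximity sensor reports a covered/uncovered boolean; the function names are not from the patent.

```python
# Minimal sketch of the claimed two-sensor in-ear logic. Sensor outputs are
# modeled as booleans (True = covered), an assumption for illustration.

def in_ear(body_sensor_covered, stem_sensor_covered):
    """Placed in ear: the main-body sensor is covered by the ear while the
    stem sensor, which protrudes from the ear, remains uncovered."""
    return body_sensor_covered and not stem_sensor_covered

def removed_from_ear(was_in_ear, body_sensor_covered):
    """Removal is decided from the body sensor output alone, without
    consulting the stem sensor."""
    return was_in_ear and not body_sensor_covered

# Insertion needs both sensors; e.g. both covered (bud in a pocket) is not in-ear.
state = in_ear(body_sensor_covered=True, stem_sensor_covered=False)
removed = removed_from_ear(state, body_sensor_covered=False)
```

Requiring the stem sensor to be uncovered at insertion distinguishes a genuine in-ear placement from the bud being enclosed in a hand or pocket, where both sensors would read covered; once in the ear, the body sensor alone suffices to detect removal.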
Ear buds are provided that communicate wirelessly with an electronic device. To determine the current status of the ear buds and thereby take suitable action in controlling the operation of the electronic device and ear buds, the ear buds may be provided with sensor circuitry. The sensor circuitry may include proximity sensors. The ear buds may each have a housing with a main body portion that is configured to be inserted into the ear of the user and an elongated stem portion that extends from the main body portion. The proximity sensors may include sensors on the main body and sensors on the stem. The proximity sensors may be light-based sensors that emit light that passes through the housing.1. A wireless ear bud, comprising: a housing having a main body portion and a stem portion extending from the main body portion; a speaker in the main body portion; a first sensor in the main body portion that produces a first sensor output; a second sensor in the stem portion that produces a second sensor output; and control circuitry that: determines whether the ear bud has been placed in a user's ear using the first and second sensor outputs; and determines whether the ear bud has been removed from the user's ear using the first sensor output without using the second sensor output. 2. The wireless ear bud defined in claim 1 further comprising a third sensor in the main body portion that produces a third sensor output. 3. The wireless ear bud defined in claim 2 further comprising a fourth sensor in the stem portion that produces a fourth sensor output. 4. The wireless ear bud defined in claim 3 wherein the control circuitry determines whether the ear bud has been removed from the user's ear using the third sensor output without using the fourth sensor output. 5. The wireless ear bud defined in claim 4 wherein the control circuitry determines whether the ear bud has been placed in the user's ear using the third and fourth sensor outputs. 6. 
The wireless ear bud defined in claim 5 wherein the first sensor comprises a tragus sensor and the third sensor comprises a concha sensor. 7. The wireless ear bud defined in claim 1 wherein the concha and tragus sensors comprise light-based proximity sensors. 8. The wireless ear bud defined in claim 7 wherein the concha and tragus sensors each have an infrared light-emitting diode and a light detector. 9. The wireless ear bud defined in claim 8 wherein the housing comprises a wall and wherein the infrared light-emitting diodes in the concha and tragus sensors emit infrared light that passes through the wall. 10. The wireless ear bud defined in claim 1 further comprising an accelerometer that detects movement of the housing. 11. An ear bud, comprising: control circuitry; wireless circuitry that the control circuitry uses to communicate wirelessly with an electronic device; a housing having a first portion that is configured to be inserted into an ear of a user and a second portion that extends from the first portion; a speaker in the first portion; a first proximity sensor in the first portion; and a second proximity sensor in the second portion, wherein the control circuitry determines that the ear bud has been placed in a user's ear when the first proximity sensor is covered and when the second proximity sensor is uncovered. 12. The ear bud defined in claim 11 wherein the first and second proximity sensors are light-based proximity sensors. 13. The ear bud defined in claim 12 wherein the first and second proximity sensors each include an infrared light-emitting diode that produces infrared light that passes through the housing. 14. The ear bud defined in claim 13 wherein the second portion of the housing comprises an elongated stem. 15. 
The ear bud defined in claim 11 wherein the first proximity sensor produces a first output and the second proximity sensor produces a second output, and wherein the control circuitry determines whether the ear bud has been removed from the user's ears using the first output without using the second output. 16. A wireless ear bud, comprising: a housing; a speaker in the housing; a first light-based sensor that produces a first output indicating whether the first light-based sensor is covered or uncovered; a second light-based sensor that produces a second output indicating whether the second light-based sensor is covered or uncovered; and control circuitry that determines that the ear bud has been placed in a user's ear when the first light-based sensor is covered and when the second light-based sensor is uncovered. 17. The wireless ear bud defined in claim 16 wherein the control circuitry determines that the ear bud has been removed from the user's ear using the first output without using the second output. 18. The wireless ear bud defined in claim 16 wherein the housing comprises a main body portion and a stem portion. 19. The wireless ear bud defined in claim 18 wherein the first light-based sensor is located in the main body portion and the second light-based sensor is located in the stem portion. 20. The wireless ear bud defined in claim 19 wherein the first light-based sensor comprises a sensor selected from the group consisting of: a tragus sensor and a concha sensor.
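The detection logic in claims 1, 11 and 16 above — placement in the ear is decided from both sensor outputs (main-body sensor covered, stem sensor uncovered), while removal is decided from the main-body sensor output alone — can be sketched as a small state update. This is a minimal illustration; the function name and boolean encoding are assumptions, not taken from the claims.

```python
# Minimal sketch of the in-ear / removal logic described in claims 1, 11 and 16.
# Sensor readings are booleans: True means the light-based sensor is covered.

def update_ear_state(in_ear, body_covered, stem_covered):
    """Return the new in-ear state.

    Placement: decided from BOTH sensor outputs - the main-body
    (tragus/concha) sensor must be covered while the stem sensor stays
    uncovered (a covered stem suggests the bud is in a hand or a case).
    Removal: decided from the main-body sensor output alone.
    """
    if not in_ear:
        # Placement uses the first and second sensor outputs together.
        return body_covered and not stem_covered
    # Removal uses the first output without using the second output.
    return body_covered

# In a case or a hand: both sensors covered -> not treated as in-ear.
assert update_ear_state(False, True, True) is False
# Inserted in the ear: body covered, stem exposed -> in-ear.
assert update_ear_state(False, True, False) is True
# Removal: body sensor uncovers; the stem sensor state is ignored.
assert update_ear_state(True, False, True) is False
```

The asymmetry matters in practice: requiring the stem to be uncovered avoids false "in ear" decisions while the bud sits in a charging case, but once in the ear only the body sensor is consulted so a touched stem cannot trigger a spurious removal.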
2,600
10,154
10,154
14,143,110
2,612
The invention relates to a method for displaying a three-dimensional (3D) scene graph on a screen, the method comprising: attaching 3D resources to a set of application scene nodes; separating a first process running in a first application context on an operating system of a computer system from a second process running in a second application context on the operating system by connecting a first sub-set of the application scene nodes to the first process and connecting a second sub-set of the application scene nodes to the second process; loading the first process and the second process to a 3D display server of the computer system; constructing the 3D scene graph based on the first process and the second process; and displaying the 3D scene graph on the screen.
1. A method for displaying a three-dimensional (3D) scene graph on a screen, the method comprising: attaching 3D resources to a set of application scene nodes; separating a first process running in a first application context on an operating system of a computer system from a second process running in a second application context on the operating system by connecting a first sub-set of the application scene nodes to the first process and connecting a second sub-set of the application scene nodes to the second process; loading the first process and the second process to a 3D display server of the computer system; constructing the 3D scene graph based on the first process and the second process; and displaying the 3D scene graph on the screen. 2. The method of claim 1, wherein the 3D resources represent elementary 3D objects comprising textures, shades and meshes. 3. The method of claim 1, wherein loading the first process and the second process comprise separately loading the first process and the second process by using a process separation interface. 4. The method of claim 3, further comprising separately processing the first process and the second process in order to avoid conflicting accesses of the first process and the second process to a same application scene node. 5. The method of claim 3, further comprising controlling a sharing of application scene nodes by the first process and the second process. 6. The method of claim 3, further comprising loading the first process running in a 3D application context and loading the second process running in a 2D application context to the 3D display server. 7. The method of claim 3, further comprising connecting the 3D display server to multiple application connections at the same time. 8. 
The method of claim 1, further comprising loading processes to the 3D display server for which processes connections have been changed without loading processes to the 3D display server for which processes connections have not been changed. 9. The method of claim 1, wherein connecting the first sub-set of application scene nodes to the first process and connecting the second sub-set of application scene nodes to the second process comprise connecting further application scene nodes as child nodes to elements of the first sub-set or the second sub-set of the application scene nodes, wherein the elements represent parent nodes. 10. The method of claim 9, wherein the further application scene nodes comprise location and rotation difference information with respect to their parent nodes. 11. The method of claim 10, wherein the location and rotation difference information comprises a 4×4 matrix. 12. The method of claim 1, wherein constructing the 3D scene graph comprises computing reflections, refractions, shadowing, shading and/or overlapping of the 3D resources with respect to each other. 13. 
An operating system for a three-dimensional (3D) computer system, the operating system comprising: application software configured to: attach 3D resources to a set of application scene nodes; and separate a first process running in a first application context of the application software from a second process running in a second application context of the application software by connecting a first sub-set of the application scene nodes to the first process and connecting a second sub-set of the application scene nodes to the second process; a 3D display server configured to construct a 3D scene graph based on the 3D resources of the application scene nodes and display the 3D scene graph on a screen; and a process separation interface between the application software and the 3D display server, wherein the process separation interface is configured to separately load the first process and the second process to the 3D display server. 14. The operating system of claim 13, wherein the process separation interface is configured to connect different applications to the 3D display server, wherein the different applications comprise 3D applications and two-dimensional (2D) applications. 15. The operating system of claim 13, further comprising: a 3D widget toolkit software providing user interface components for 3D application creation and providing the 3D resources to the application software; and a platform graphics interface between the 3D display server and a kernel of the operating system configured to control computer hardware on which the operating system is running.
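The core of the claimed method is partitioning application scene nodes between separately loaded processes so the 3D display server can construct one scene graph without conflicting accesses to the same node (claims 1, 3 and 4). The sketch below illustrates that ownership check; the class and field names are illustrative assumptions, not from the claims, and real servers would hold 3D resources rather than strings.

```python
# Minimal sketch of the claimed process separation: application scene
# nodes are connected to one process each, and the 3D display server
# refuses a load that would give two processes the same node (claim 4).

class SceneNode:
    def __init__(self, name, resources=()):
        self.name = name
        self.resources = list(resources)  # textures, shades, meshes (claim 2)
        self.owner = None                 # the process this node is connected to

class DisplayServer3D:
    def __init__(self):
        self.scene_graph = {}             # process id -> its sub-set of nodes

    def load_process(self, process_id, nodes):
        """Separately load one process's sub-set of application scene nodes."""
        for node in nodes:
            if node.owner not in (None, process_id):
                # Conflicting access to the same application scene node.
                raise RuntimeError(f"{node.name} already owned by {node.owner}")
            node.owner = process_id
        self.scene_graph[process_id] = list(nodes)

server = DisplayServer3D()
hud = SceneNode("hud", ["texture:font"])
model = SceneNode("model", ["mesh:teapot"])
server.load_process("proc1", [hud])      # first sub-set of scene nodes
server.load_process("proc2", [model])    # second sub-set of scene nodes
assert hud.owner == "proc1" and model.owner == "proc2"
```

Constructing the displayed scene graph would then walk `scene_graph` across all loaded processes, which is where claim 12's reflections, shadowing and overlapping between the sub-sets would be computed.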
2,600
10,155
10,155
14,747,003
2,674
The present application relates to an apparatus for verifying fragment processing related data and a method of operating the same. A fragment shader unit of a graphics processing pipeline is coupled to at least one data buffer; it receives fragment data and records fragment processing related data in the at least one data buffer on processing one or more fragments in accordance with the received fragment data. A comparator unit coupled to the at least one data buffer compares the recorded fragment processing related data in the at least one data buffer to reference data and issues a fault indication signal in case the recorded fragment processing related data and the reference data mismatch.
1. An apparatus for verifying fragment processing related data, said apparatus comprising: at least one data buffer provided to record fragment processing related data; a graphics processing pipeline comprising a fragment shader unit, wherein the fragment shader unit is coupled to the at least one data buffer, wherein the fragment shader unit is arranged to receive fragment data and configured to record fragment processing related data in the at least one data buffer on processing one or more fragments in accordance with the received fragment data; and a comparator unit coupled to the at least one data buffer, wherein the comparator unit is configured to compare the recorded fragment processing related data in the at least one data buffer to reference data and to issue a fault indication signal in case the recorded fragment processing related data and the reference data mismatch. 2. The apparatus according to claim 1, wherein the comparator unit is further configured to determine a checksum based on the recorded fragment processing related data in the at least one data buffer and compare the checksum to the reference data comprising a reference checksum. 3. The apparatus according to claim 1, wherein the at least one data buffer comprises a first frame buffer provided for storing displayable image data. 4. The apparatus according to claim 3, wherein the frame buffer is further arranged for recording the fragment processing related data. 5. The apparatus according to claim 3, wherein the at least one data buffer further comprises a second data buffer, wherein the second data buffer is provided for recording the fragment processing related data. 6. The apparatus according to claim 5, wherein the second data buffer is a second frame buffer. 7. The apparatus according to claim 5, wherein the fragment shader unit is coupled to the first frame buffer and configured to output processed fragments to the first frame buffer. 8. 
The apparatus according to claim 4, wherein the processed fragments comprise texture-mapped fragments. 9. The apparatus according to claim 1, wherein the fragment shader unit is configured to receive a stream of fragment data generated by a rasterizer unit arranged upstream to the fragment shader unit in the graphics processing pipeline. 10. The apparatus according to claim 1, wherein the fragment shader unit is configured to record texture coordinates in the at least one data buffer. 11. The apparatus according to claim 7, wherein the fragment shader unit is configured to record texture coordinates in the at least one data buffer for each texture-mapped fragment. 12. The apparatus according to claim 5, wherein the comparator unit is further configured to compare tile-wise the recorded fragment processing related data to reference data. 13. A method for verifying fragment processing related data, said method comprising: providing at least one data buffer for recording fragment processing related data; receiving fragment data at a fragment shader unit of a graphics processing pipeline; recording fragment processing related data in the at least one data buffer on processing a fragment in accordance with the received fragment data using the fragment shader unit; comparing the recorded fragment processing related data in the at least one data buffer to reference data; and issuing a fault indication signal in case the recorded fragment processing related data and the reference data mismatch. 14. The method according to claim 13, wherein the at least one data buffer comprises a first frame buffer; wherein the recording further comprises recording fragment processing related data in one channel of the first frame buffer. 15. The method according to claim 14, further comprising: outputting processed fragments to the first frame buffer. 16. 
The method according to claim 14, wherein the at least one data buffer further comprises a second data buffer; wherein the recording further comprises recording fragment processing related data to the second data buffer. 17. The method according to claim 13, further comprising: determining a checksum based on the recorded fragment processing related data; and comparing the checksum with the reference data comprising a reference checksum. 18. The method according to claim 13, further comprising: recording texture coordinates in the at least one data buffer for each texture-mapped fragment. 19. The method according to claim 13, further comprising: tile-wise comparing the recorded fragment processing related data to reference data. 20. A non-transitory, tangible computer readable storage medium bearing computer executable instructions for verifying fragment processing related data, wherein the instructions, when executing on one or more processing devices, cause the one or more processing devices to perform a method comprising: receiving fragment data at a fragment shader unit of a graphics processing pipeline; recording fragment processing related data in at least one data buffer on processing a fragment in accordance with the received fragment data using the fragment shader unit, wherein the at least one data buffer is provided for recording fragment processing related data; comparing the recorded fragment processing related data in the at least one data buffer to reference data; and issuing a fault indication signal in case the recorded fragment processing related data and the reference data mismatch.
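The verification path the claims describe — record fragment processing related data (e.g. texture coordinates, claim 10) into a data buffer, compute a checksum over it (claim 2), and issue a fault indication on mismatch with a reference — can be sketched briefly. CRC32 is an illustrative checksum choice and the byte layout is an assumption; neither is specified by the claims.

```python
import zlib

# Minimal sketch of the claimed verification: per-fragment texture
# coordinates are recorded into a flat data buffer, a checksum of the
# buffer is compared to a golden reference checksum, and a fault
# indication is raised on mismatch.

def record_fragments(fragments):
    """Record fragment processing related data (u, v texture coords)."""
    buf = bytearray()
    for u, v in fragments:
        buf += u.to_bytes(2, "big") + v.to_bytes(2, "big")
    return bytes(buf)

def fault_indicated(buffer, reference_checksum):
    """True when recorded data and the reference mismatch."""
    return zlib.crc32(buffer) != reference_checksum

golden = record_fragments([(1, 2), (3, 4)])
ref = zlib.crc32(golden)
assert fault_indicated(golden, ref) is False            # pipeline healthy
assert fault_indicated(record_fragments([(1, 2), (3, 5)]), ref) is True
```

Comparing a checksum rather than the full buffer is what makes the scheme cheap enough to run alongside rendering; the tile-wise comparison of claim 12 would simply apply the same check per tile of the buffer.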
2,600
10,156
10,156
15,047,784
2,683
A method of operating an alarm device including a processor and a light detector, the method including operating the light detector to sample a light intensity within an interior space a plurality of times to produce a plurality of light intensity measurements, operating the processor to determine a light intensity value, wherein the light intensity value is based upon the plurality of light intensity measurements, and operating the processor to decide whether a night cycle can be determined based on the light intensity values.
1. A method of operating a hazard detector, the hazard detector including a processor operably coupled to a light detector, and a memory, the method comprising the steps of: (a) operating the light detector to sample a light intensity within an enclosed space a plurality of times to produce a plurality of light intensity measurements; (b) operating the processor to determine a light intensity value for a first pre-determined interval, wherein the light intensity value is based upon the plurality of light intensity measurements taken during the first predetermined interval; (c) repeating steps (a) and (b) a plurality of times; and (d) operating the processor to decide whether a night cycle can be determined based on the light intensity values. 2. The method of claim 1, wherein step (b) further comprises: operating the processor to record each light intensity value in the memory. 3. The method of claim 1, wherein the light intensity value comprises a running average of the plurality of light intensity measurements. 4. The method of claim 3, wherein the first pre-determined interval is adjustable. 5. The method of claim 1, wherein step (d) comprises: (i) determining whether a difference between a highest light intensity value (HLIV) and a lowest light intensity value (LLIV) is greater than or equal to a minimum light threshold; and (ii) determining a start time and an end time of the night cycle based on a darkness threshold. 6. The method of claim 5, wherein the darkness threshold is calculated as the lowest light intensity value (LLIV) plus a percentage of the difference between a highest light intensity value (HLIV) and a lowest light intensity value (LLIV). 7. 
The method of claim 5, wherein step (ii) comprises: (a) determining whether a difference between the end time and the start time is less than or equal to a pre-determined darkness duration; (b) determining whether a light period occurs during the night cycle; and (c) determining whether the light period is less than or equal to a pre-determined light period threshold. 8. The method of claim 7, wherein the pre-determined darkness duration is adjustable. 9. The method of claim 8, wherein the light period comprises a duration of time during which a light intensity value is greater than the minimum light intensity value. 10. The method of claim 7, wherein the pre-determined light period threshold is adjustable. 11. The method of claim 1, further comprising: (e) operating the processor to adjust an operation of a signaling device during the night cycle. 12. The method of claim 11, wherein the signaling device comprises at least one of a digital display, a speaker, and a night light. 13. A hazard detector comprising: a processor and a memory; a light detector operably coupled to the processor, the light detector configured to sample a light intensity within an interior space a plurality of times to produce a plurality of light intensity measurements; one or more programs stored in said memory and configured to be executed by said processor, wherein said programs are configured to: determine a light intensity value for a first pre-determined interval, wherein the light intensity value is based upon the plurality of light intensity measurements; and decide whether a night cycle can be determined based on the light intensity values. 14. The hazard detector of claim 13, wherein the programs are further configured to record each light intensity value in the memory. 15. The hazard detector of claim 13, wherein the light intensity value comprises a running average of the plurality of light intensity measurements. 16. 
The hazard detector of claim 13, wherein the first pre-determined interval is adjustable. 17. The hazard detector of claim 13, wherein the processor is further configured to: determine whether a difference between a highest light intensity value (HLIV) and a lowest light intensity value (LLIV) is greater than or equal to a minimum light threshold; and determine a start time and an end time of the night cycle based on a darkness threshold. 18. The hazard detector of claim 17, wherein the darkness threshold is calculated as the lowest light intensity value (LLIV) plus a percentage of the difference between a highest light intensity value (HLIV) and a lowest light intensity value (LLIV). 19. The hazard detector of claim 17, wherein the processor is further configured to: determine whether a difference between the end time and the start time is less than or equal to a predetermined darkness duration; determine whether a light period occurs during the night cycle; and determine whether the light period is less than or equal to a pre-determined light period threshold. 20. The hazard detector of claim 19, wherein the darkness duration is adjustable. 21. The hazard detector of claim 19, wherein the light period comprises a duration of time during which a light intensity value is greater than the minimum light intensity value. 22. The hazard detector of claim 19, wherein the pre-determined light period is adjustable. 23. The hazard detector of claim 13 further comprising a signaling device operably coupled to the processor, wherein the processor is further configured to adjust an operation of the signaling device during the night cycle. 24. The hazard detector of claim 23, wherein the signaling device comprises at least one of a digital display, a speaker, and a night light.
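The night-cycle decision in claims 1, 5 and 6 — check that the spread between the highest and lowest light intensity values (HLIV, LLIV) meets a minimum light threshold, derive a darkness threshold as LLIV plus a percentage of that spread, then locate the start and end of the dark period — can be sketched directly. The interval values, the 25% figure and the threshold of 50 below are illustrative parameters, not values taken from the claims.

```python
# Minimal sketch of the night-cycle decision in claims 1, 5 and 6.

def find_night_cycle(values, min_light_threshold=50, pct=0.25):
    """values: one averaged light-intensity value per interval (claim 1).

    Returns (start, end) interval indices of the night cycle, or None
    if no night cycle can be determined (claim 5, step (i)).
    """
    hliv, lliv = max(values), min(values)
    if hliv - lliv < min_light_threshold:
        return None  # not enough day/night contrast to decide
    # Darkness threshold: LLIV plus a percentage of (HLIV - LLIV), claim 6.
    darkness = lliv + pct * (hliv - lliv)
    dark = [i for i, v in enumerate(values) if v <= darkness]
    return (dark[0], dark[-1]) if dark else None

day_night = [200, 180, 60, 5, 4, 6, 150, 210]   # darkness threshold = 55.5
assert find_night_cycle(day_night) == (3, 5)
constant = [100, 101, 99, 100]                  # e.g. a windowless room
assert find_night_cycle(constant) is None
```

The second case shows why step (d) is a decision rather than a calculation: with near-constant illumination the detector declines to infer a night cycle, so it never quiets its signaling device (claim 11) at the wrong time.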
A method of operating an alarm device including a processor and a light detector, the method including operating the light detector to sample a light intensity within an interior space a plurality of times to produce a plurality of light intensity measurements, operating the processor to determine a light intensity value, wherein the light intensity value is based upon the plurality of light intensity measurements, and operating the processor to decide whether a night cycle can be determined based on the light intensity values.1. A method of operating a hazard detector, the hazard detector including a processor operably coupled to a light detector, and a memory, the method comprising the steps of: (a) operating the light detector to sample a light intensity within an enclosed space a plurality of times to produce a plurality of light intensity measurements; (b) operating the processor to determine a light intensity value for a first pre-determined interval, wherein the light intensity value is based upon the plurality of light intensity measurements taken during the first predetermined interval; (c) repeating steps (a) and (b) a plurality of times; and (d) operating the processor to decide whether a night cycle can be determined based on the light intensity values. 2. The method of claim 1, wherein step (b) further comprises: operating the processor to record each light intensity value in the memory. 3. The method of claim 1, wherein the light intensity value comprises a running average of the plurality of light intensity measurements. 4. The method of claim 3, wherein the first pre-determined interval is adjustable. 5. The method of claim 1, wherein step (d) comprises: (i) determining whether a difference between a highest light intensity value (HLIV) and a lowest light intensity value (LLIV) is greater than or equal to a minimum light threshold; and (ii) determining a start time and an end time of the night cycle based on a darkness threshold. 6. 
The method of claim 5, wherein the darkness threshold is calculated as the lowest light intensity value (LLIV) plus a percentage of the difference between a highest light intensity value (HLIV) and a lowest light intensity value (LLIV). 7. The method of claim 5, wherein step (ii) comprises: (a) determining whether a difference between the end time and the start time is less than or equal to a predetermined darkness duration; (b) determining whether a light period occurs during the night cycle; and (c) determining whether the light period is less than or equal to a predetermined light period threshold. 8. The method of claim 7, wherein the pre-determined darkness duration is adjustable. 9. The method of claim 8, wherein the light period comprises a duration of time during which a light intensity value is greater than the minimum light intensity value. 10. The method of claim 7, wherein the pre-determined light period threshold is adjustable. 11. The method of claim 1, further comprising: (e) operating the processor to adjust an operation of a signaling device during the night cycle. 12. The method of claim 11, wherein the signaling device comprises at least one of a digital display, a speaker, and a night light. 13. A hazard detector comprising: a processor and a memory; a light detector operably coupled to the processor, the light detector configured to sample a light intensity within an interior space a plurality of times to produce a plurality of light intensity measurements; one or more programs stored in said memory and configured to be executed by said processor, wherein said programs are configured to: determine a light intensity value for a first pre-determined interval, wherein the light intensity value is based upon the plurality of light intensity measurements; and decide whether a night cycle can be determined based on the light intensity values. 14. 
The hazard detector of claim 13, wherein the programs are further configured to record each light intensity value in the memory. 15. The hazard detector of claim 13, wherein the light intensity value comprises a running average of the plurality of light intensity measurements. 16. The hazard detector of claim 13, wherein the first pre-determined interval is adjustable. 17. The hazard detector of claim 13, wherein the processor is further configured to: determine whether a difference between a highest light intensity value (HLIV) and a lowest light intensity value (LLIV) is greater than or equal to a minimum light threshold; and determine a start time and an end time of the night cycle based on a darkness threshold. 18. The hazard detector of claim 17, wherein the darkness threshold is calculated as the lowest light intensity value (LLIV) plus a percentage of the difference between a highest light intensity value (HLIV) and a lowest light intensity value (LLIV). 19. The hazard detector of claim 17, wherein the processor is further configured to: determine whether a difference between the end time and the start time is less than or equal to a predetermined darkness duration; determine whether a light period occurs during the night cycle; and determine whether the light period is less than or equal to a pre-determined light period threshold. 20. The hazard detector of claim 19, wherein the darkness duration is adjustable. 21. The hazard detector of claim 19, wherein the light period comprises a duration of time during which a light intensity value is greater than the minimum light intensity value. 22. The hazard detector of claim 19, wherein the pre-determined light period is adjustable. 23. The hazard detector of claim 13 further comprising a signaling device operably coupled to the processor, wherein the processor is further configured to adjust an operation of the signaling device during the night cycle. 24. 
The hazard detector of claim 23, wherein the signaling device comprises at least one of a digital display, a speaker, and a night light.
2,600
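The hazard-detector record above recites a night-cycle determination: require sufficient contrast between the highest and lowest interval light values, derive a darkness threshold as LLIV plus a percentage of (HLIV − LLIV), locate the start and end of the dark span, and reject spans longer than a predetermined darkness duration. The sketch below is an illustrative approximation of that logic, not the patented implementation; the function name, the interval representation, and all default values (`min_light_threshold`, `darkness_pct`, `max_darkness_intervals`) are hypothetical.

```python
def detect_night_cycle(values, min_light_threshold=50,
                       darkness_pct=0.1, max_darkness_intervals=14):
    """Decide whether a night cycle can be determined from per-interval
    light intensity values.

    values: list of (interval_index, light_intensity_value) pairs, where
    each value would in practice be a running average of raw samples.
    Returns (start, end) interval indices of the night cycle, or None.
    """
    intensities = [v for _, v in values]
    hliv, lliv = max(intensities), min(intensities)
    # Contrast check: HLIV - LLIV must meet the minimum light threshold.
    if hliv - lliv < min_light_threshold:
        return None
    # Darkness threshold = LLIV plus a percentage of (HLIV - LLIV).
    darkness_threshold = lliv + darkness_pct * (hliv - lliv)
    dark = [i for i, v in values if v <= darkness_threshold]
    if not dark:
        return None
    start, end = min(dark), max(dark)
    # The dark span must not exceed the predetermined darkness duration.
    if end - start > max_darkness_intervals:
        return None
    return start, end
```

A validity check on the light period occurring inside the cycle (claim 7(b)-(c)) could be layered on in the same style.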
10,157
10,157
15,440,172
2,699
An image forming apparatus receives image data by a receiver from an external device and forms an image based on the received image data. The image forming apparatus transmits an address of the receiver to the external device. The image forming apparatus receives image data transmitted with the address by the external device. The communication between the external device and the image forming apparatus is performed wirelessly. The image forming apparatus prints image data in accordance with an accepted printing condition.
1. An image forming apparatus that receives data from an external device and forms an image based on the received data, comprising: a wireless communication unit that receives data sent from the external device, wherein when a thumbnail associated with data of an image is selected by a user of the external device, the selected data of the image is displayed at the external device for recognition, the selected data of the image is sent from the external device, and the wireless communication unit receives the selected data of the image, and wherein after the thumbnail is selected, the selected data of the image is sent to the wireless communication unit in a state of the selected data of the image being displayed larger than the thumbnail displayed prior to the selection. 2. A communication system that has an external device and has an image forming apparatus which comprises a wireless communication unit receiving data from the external device and which forms an image based on the received data, wherein the external device comprises: a display that displays a thumbnail associated with data of an image to be selected by a user of the external device; and a wireless communication unit that sends the selected data of the image to the image forming apparatus, wherein the image forming apparatus receives the selected data of the image through the wireless communication unit when the selected data of the image is sent from the external device, and wherein after the thumbnail is selected, the selected data of the image is sent to the wireless communication unit in a state of the selected data of the image being displayed larger than the thumbnail displayed prior to the selection. 3. 
A communication method with an external device and with an image forming apparatus that includes a wireless communication unit receiving data from the external device and forms an image based on the received data, wherein when a thumbnail associated with data of an image is selected by a user of the external device, the selected data of the image is displayed at the external device for recognition, the selected data of the image is sent from the external device, the wireless communication unit receives the selected data of the image, and wherein after the thumbnail is selected, the selected data of the image is sent to the wireless communication unit in a state of the selected data of the image being displayed larger than the thumbnail displayed prior to the selection. 4. An external device that communicates wirelessly to an image forming apparatus which comprises a wireless communication unit receiving data from the external device, which makes the wireless communication unit receive data sent from the external device, and which forms an image based on the received data, comprising: a display that displays a thumbnail associated with data of an image to be selected by a user of the external device; and a wireless communication unit that sends the selected data of the image to the image forming apparatus, wherein after the thumbnail is selected, the selected data of the image is sent to the wireless communication unit in a state of the selected data of the image being displayed larger than the thumbnail displayed prior to the selection. 5. The external device according to claim 4, wherein the external device is a mobile phone. 6. 
A non-transitory recording medium recording a program executed by an external device that communicates wirelessly to an image forming apparatus which comprises a wireless communication unit receiving data from the external device, which makes the wireless communication unit receive data sent from the external device, and which forms an image based on the received data, wherein the program causes the external device to execute displaying a thumbnail associated with data of an image stored in a storage to be selected by a user of the external device, wherein the program causes the external device to execute sending the selected data of the image to the image forming apparatus, and wherein after the thumbnail is selected, the selected data of the image is sent to the wireless communication unit in a state of the selected data of the image being displayed larger than the thumbnail displayed prior to the selection.
An image forming apparatus receives image data by a receiver from an external device and forms an image based on the received image data. The image forming apparatus transmits an address of the receiver to the external device. The image forming apparatus receives image data transmitted with the address by the external device. The communication between the external device and the image forming apparatus is performed wirelessly. The image forming apparatus prints image data in accordance with an accepted printing condition.1. An image forming apparatus that receives data from an external device and forms an image based on the received data, comprising: a wireless communication unit that receives data sent from the external device, wherein when a thumbnail associated with data of an image is selected by a user of the external device, the selected data of the image is displayed at the external device for recognition, the selected data of the image is sent from the external device, and the wireless communication unit receives the selected data of the image, and wherein after the thumbnail is selected, the selected data of the image is sent to the wireless communication unit in a state of the selected data of the image being displayed larger than the thumbnail displayed prior to the selection. 2. 
A communication system that has an external device and has an image forming apparatus which comprises a wireless communication unit receiving data from the external device and which forms an image based on the received data, wherein the external device comprises: a display that displays a thumbnail associated with data of an image to be selected by a user of the external device; and a wireless communication unit that sends the selected data of the image to the image forming apparatus, wherein the image forming apparatus receives the selected data of the image through the wireless communication unit when the selected data of the image is sent from the external device, and wherein after the thumbnail is selected, the selected data of the image is sent to the wireless communication unit in a state of the selected data of the image being displayed larger than the thumbnail displayed prior to the selection. 3. A communication method with an external device and with an image forming apparatus that includes a wireless communication unit receiving data from the external device and forms an image based on the received data, wherein when a thumbnail associated with data of an image is selected by a user of the external device, the selected data of the image is displayed at the external device for recognition, the selected data of the image is sent from the external device, the wireless communication unit receives the selected data of the image, and wherein after the thumbnail is selected, the selected data of the image is sent to the wireless communication unit in a state of the selected data of the image being displayed larger than the thumbnail displayed prior to the selection. 4. 
An external device that communicates wirelessly to an image forming apparatus which comprises a wireless communication unit receiving data from the external device, which makes the wireless communication unit receive data sent from the external device, and which forms an image based on the received data, comprising: a display that displays a thumbnail associated with data of an image to be selected by a user of the external device; and a wireless communication unit that sends the selected data of the image to the image forming apparatus, wherein after the thumbnail is selected, the selected data of the image is sent to the wireless communication unit in a state of the selected data of the image being displayed larger than the thumbnail displayed prior to the selection. 5. The external device according to claim 4, wherein the external device is a mobile phone. 6. A non-transitory recording medium recording a program executed by an external device that communicates wirelessly to an image forming apparatus which comprises a wireless communication unit receiving data from the external device, which makes the wireless communication unit receive data sent from the external device, and which forms an image based on the received data, wherein the program causes the external device to execute displaying a thumbnail associated with data of an image stored in a storage to be selected by a user of the external device, wherein the program causes the external device to execute sending the selected data of the image to the image forming apparatus, and wherein after the thumbnail is selected, the selected data of the image is sent to the wireless communication unit in a state of the selected data of the image being displayed larger than the thumbnail displayed prior to the selection.
2,600
10,158
10,158
15,295,068
2,619
When map data is to be processed for display in a mapping or navigation apparatus, the application processor of the apparatus first checks to see whether any existing data in a local cache memory represents a similar map part to the new data that is to be processed. This is done by means of a similarity analysis. If the map data already in the cache is determined to be sufficiently similar to the map data that is to be displayed, new map data from a main map storage is not loaded and processed, but instead the existing data in the cache is processed for display of the map part in question. This has the benefit of a reduced system load, since already processed data does not need to be reloaded and reprocessed.
1-16. (canceled) 17. A method of rendering a map for display on a mapping or navigation apparatus, in which the map is rendered for display by loading data representing particular parts of the map to be displayed from a main map data store for processing and for display by a rendering processor of the mapping or navigation apparatus and the rendering processor of the mapping or navigation apparatus has a local memory in which data representing particular parts of a map to be displayed can be stored, the method comprising: determining whether a part of the map for which data is already stored in the local memory is similar to a new part of the map to be processed, when a part of the map is to be processed by the rendering processor for display; and re-using data for the existing part of the map stored in the local memory for the rendering processor for displaying the new part of the map, upon the determination that a part of the map for which data is already stored in the local memory for the rendering processor is sufficiently similar to the new part of the map to be processed. 18. The method of claim 17, wherein the stored map data is in the form of a library of base map display elements, wherein the map display elements may then be used to reproduce a desired map or maps, and the method comprises: assessing the similarity between one or more map display elements already stored in the local memory for the rendering processor and a new map display element or elements to be processed, and then loading and processing a new map display element or elements or reusing a map display element or elements that is already stored in the local memory for the rendering processor on the basis of the similarity assessment. 19. The method of claim 17, wherein the similarity determination uses a function that calculates a relative error between the already stored map part and the new map part and then determines whether the relative error is below a selected threshold value or not. 
20. The method of claim 19, wherein the relative error is determined using a root mean squared error (RMSE) between the already stored map part and the new map part. 21. The method of claim 17, wherein the stored map data includes meta information that can be used to determine how similar respective parts of the map are. 22. The method of claim 17, wherein the similarity determination is configured to take account of run time parameters. 23. The method of claim 17, wherein the similarity determination is dependent upon the distance from a viewer of the map part that is to be displayed. 24. The method of claim 17, wherein the similarity determination further includes determining a transformation to apply to the map part already stored in the local memory in order for the rendering processor to compensate for misalignment between the map parts being considered. 25. The method of claim 24, wherein the applied transformation is based on a comparison of contributing components of the map part in the local memory to corresponding components of the new map part to be displayed. 26. The method of claim 17, further comprising displaying the map on a mapping or navigation apparatus. 27. 
A mapping or navigation apparatus, comprising: a display for displaying a digital map to a user; a rendering processor configured to process digital map data and cause a digital map to be displayed on the display; and a local memory for use by the rendering processor in which data representing particular parts of the map to be displayed can be stored, and in which: a map is rendered for display by loading data representing particular parts of the map to be displayed from a main map data store for processing for display by the rendering processor, and wherein the rendering processor is operable to: determine whether a part of the map for which data is already stored in the local memory for the rendering processor is similar to a new part of the map to be processed, when data representing a part of a map is to be processed by the rendering processor for display; and use data for an existing part of the map stored in the local memory for the rendering processor for displaying the new part of the map, upon the determination that a part of the map for which data is already stored in the local memory for the rendering processor is sufficiently similar to the new part of the map to be processed. 28. 
The apparatus of claim 27, wherein the stored map data is in the form of a library of base map display elements, which elements can then be used to reproduce a desired map or maps, and wherein the rendering processor is operable to: determine whether a part of the map for which data is already stored in the local memory for the rendering processor is similar to the new part of the map to be processed by assessing the similarity between one or more map display elements already stored in the local memory for the rendering processor and a new map display element or elements to be processed; and re-use data for an existing part of the map stored in the local memory for the rendering processor for displaying the new part of the map by loading and processing a new map display element or elements or reusing a map display element or elements that is already stored in the local memory for the rendering processor on the basis of the similarity assessment, upon a determination that a part of the map for which data is already stored in the local memory for the rendering processor is sufficiently similar to the new part of the map to be processed. 29. The apparatus of claim 27, wherein the similarity determination uses a function that calculates a relative error between the already stored map part and the new map part and then determines whether the relative error is below a selected threshold value or not. 30. The apparatus of claim 29, wherein a relative error margin increases as the distance from the viewer of the map part that is to be displayed increases. 31. The apparatus of claim 27, wherein the stored map data includes meta information that can be used to determine how similar respective parts of the map are. 32. 
The apparatus of claim 27, wherein the determination of whether a part of the map for which data is already stored in the local memory for the rendering processor is similar to the new part of the map to be processed is configured to take account of run time parameters. 33. The apparatus of claim 27, wherein the similarity determination further includes determining a transformation to apply to the map part already stored in the local memory in order for the rendering processor to compensate for misalignment between the map parts being considered. 34. The apparatus of claim 27, wherein the apparatus is a portable navigation device (PND) or an integrated navigation system. 35. The apparatus of claim 27, wherein the apparatus displays the map on the display of the mapping or navigation apparatus. 36. A non-transitory computer-readable medium comprising computer readable instructions which, when executed on one or more processors, cause the one or more processors to perform a method of rendering a map for display on a mapping or navigation apparatus, in which the map is rendered for display by loading data representing particular parts of the map to be displayed from a main map data store for processing and for display by a rendering processor of the mapping or navigation apparatus and the rendering processor of the mapping or navigation apparatus has a local memory in which data representing particular parts of a map to be displayed can be stored, the method comprising: determining whether a part of the map for which data is already stored in the local memory is similar to a new part of the map to be processed, when a part of the map is to be processed by the rendering processor for display; and re-using data for the existing part of the map stored in the local memory for the rendering processor for displaying the new part of the map, upon the determination that a part of the map for which data is already stored in the local memory for the rendering processor is 
sufficiently similar to the new part of the map to be processed.
When map data is to be processed for display in a mapping or navigation apparatus, the application processor of the apparatus first checks to see whether any existing data in a local cache memory represents a similar map part to the new data that is to be processed. This is done by means of a similarity analysis. If the map data already in the cache is determined to be sufficiently similar to the map data that is to be displayed, new map data from a main map storage is not loaded and processed, but instead the existing data in the cache is processed for display of the map part in question. This has the benefit of a reduced system load, since already processed data does not need to be reloaded and reprocessed.1-16. (canceled) 17. A method of rendering a map for display on a mapping or navigation apparatus, in which the map is rendered for display by loading data representing particular parts of the map to be displayed from a main map data store for processing and for display by a rendering processor of the mapping or navigation apparatus and the rendering processor of the mapping or navigation apparatus has a local memory in which data representing particular parts of a map to be displayed can be stored, the method comprising: determining whether a part of the map for which data is already stored in the local memory is similar to a new part of the map to be processed, when a part of the map is to be processed by the rendering processor for display; and re-using data for the existing part of the map stored in the local memory for the rendering processor for displaying the new part of the map, upon the determination that a part of the map for which data is already stored in the local memory for the rendering processor is sufficiently similar to the new part of the map to be processed. 18. 
The method of claim 17, wherein the stored map data is in the form of a library of base map display elements, wherein the map display elements may then be used to reproduce a desired map or maps, and the method comprises: assessing the similarity between one or more map display elements already stored in the local memory for the rendering processor and a new map display element or elements to be processed, and then loading and processing a new map display element or elements or reusing a map display element or elements that is already stored in the local memory for the rendering processor on the basis of the similarity assessment. 19. The method of claim 17, wherein the similarity determination uses a function that calculates a relative error between the already stored map part and the new map part and then determines whether the relative error is below a selected threshold value or not. 20. The method of claim 19, wherein the relative error is determined using a root mean squared error (RMSE) between the already stored map part and the new map part. 21. The method of claim 17, wherein the stored map data includes meta information that can be used to determine how similar respective parts of the map are. 22. The method of claim 17, wherein the similarity determination is configured to take account of run time parameters. 23. The method of claim 17, wherein the similarity determination is dependent upon the distance from a viewer of the map part that is to be displayed. 24. The method of claim 17, wherein the similarity determination further includes determining a transformation to apply to the map part already stored in the local memory in order for the rendering processor to compensate for misalignment between the map parts being considered. 25. The method of claim 24, wherein the applied transformation is based on a comparison of contributing components of the map part in the local memory to corresponding components of the new map part to be displayed. 26. 
The method of claim 17, further comprising displaying the map on a mapping or navigation apparatus. 27. A mapping or navigation apparatus, comprising: a display for displaying a digital map to a user; a rendering processor configured to process digital map data and cause a digital map to be displayed on the display; and a local memory for use by the rendering processor in which data representing particular parts of the map to be displayed can be stored, and in which: a map is rendered for display by loading data representing particular parts of the map to be displayed from a main map data store for processing for display by the rendering processor, and wherein the rendering processor is operable to: determine whether a part of the map for which data is already stored in the local memory for the rendering processor is similar to a new part of the map to be processed, when data representing a part of a map is to be processed by the rendering processor for display; and use data for an existing part of the map stored in the local memory for the rendering processor for displaying the new part of the map, upon the determination that a part of the map for which data is already stored in the local memory for the rendering processor is sufficiently similar to the new part of the map to be processed. 28. 
The apparatus of claim 27, wherein the stored map data is in the form of a library of base map display elements, which elements can then be used to reproduce a desired map or maps, and wherein the rendering processor is operable to: determine whether a part of the map for which data is already stored in the local memory for the rendering processor is similar to the new part of the map to be processed by assessing the similarity between one or more map display elements already stored in the local memory for the rendering processor and a new map display element or elements to be processed; and re-use data for an existing part of the map stored in the local memory for the rendering processor for displaying the new part of the map by loading and processing a new map display element or elements or reusing a map display element or elements that is already stored in the local memory for the rendering processor on the basis of the similarity assessment, upon a determination that a part of the map for which data is already stored in the local memory for the rendering processor is sufficiently similar to the new part of the map to be processed. 29. The apparatus of claim 27, wherein the similarity determination uses a function that calculates a relative error between the already stored map part and the new map part and then determines whether the relative error is below a selected threshold value or not. 30. The apparatus of claim 29, wherein a relative error margin increases as the distance from the viewer of the map part that is to be displayed increases. 31. The apparatus of claim 27, wherein the stored map data includes meta information that can be used to determine how similar respective parts of the map are. 32. 
The apparatus of claim 27, wherein the determination of whether a part of the map for which data is already stored in the local memory for the rendering processor is similar to the new part of the map to be processed is configured to take account of run time parameters. 33. The apparatus of claim 27, wherein the similarity determination further includes determining a transformation to apply to the map part already stored in the local memory in order for the rendering processor to compensate for misalignment between the map parts being considered. 34. The apparatus of claim 27, wherein the apparatus is a portable navigation device (PND) or an integrated navigation system. 35. The apparatus of claim 27, wherein the apparatus displays the map on the display of the mapping or navigation apparatus. 36. A non-transitory computer-readable medium comprising computer readable instructions which, when executed on one or more processors, cause the one or more processors to perform a method of rendering a map for display on a mapping or navigation apparatus, in which the map is rendered for display by loading data representing particular parts of the map to be displayed from a main map data store for processing and for display by a rendering processor of the mapping or navigation apparatus and the rendering processor of the mapping or navigation apparatus has a local memory in which data representing particular parts of a map to be displayed can be stored, the method comprising: determining whether a part of the map for which data is already stored in the local memory is similar to a new part of the map to be processed, when a part of the map is to be processed by the rendering processor for display; and re-using data for the existing part of the map stored in the local memory for the rendering processor for displaying the new part of the map, upon the determination that a part of the map for which data is already stored in the local memory for the rendering processor is 
sufficiently similar to the new part of the map to be processed.
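The similarity test in claims 29 and 30 (a relative error between the cached map part and the new map part, compared against a threshold whose margin grows with distance from the viewer) can be illustrated with a minimal Python sketch. The `MapPart` polyline representation, the error metric, and the threshold constants are hypothetical simplifications for illustration, not the patented implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class MapPart:
    """A simplified map display element: a polyline of (x, y) vertices."""
    vertices: list

def relative_error(cached: MapPart, new: MapPart) -> float:
    """Mean vertex-wise distance between the two parts, normalized by the
    new part's bounding-box diagonal. Infinite if the shapes don't align."""
    if len(cached.vertices) != len(new.vertices):
        return math.inf
    xs = [x for x, _ in new.vertices]
    ys = [y for _, y in new.vertices]
    diag = math.hypot(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    err = sum(math.dist(a, b) for a, b in zip(cached.vertices, new.vertices))
    return err / (len(new.vertices) * diag)

def can_reuse(cached: MapPart, new: MapPart, viewer_distance: float,
              base_threshold: float = 0.02, slack_per_unit: float = 0.01) -> bool:
    """Reuse the cached part when the relative error falls below a
    threshold that loosens as the part recedes from the viewer
    (claims 29-30): distant parts tolerate larger error."""
    threshold = base_threshold + slack_per_unit * viewer_distance
    return relative_error(cached, new) < threshold
```

A nearby part must match closely to be reused, while the same mismatch passes once the part is far from the viewer, which matches the distance-dependent error margin of claim 30.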
2,600
10,159
10,159
14,262,947
2,611
Provided are techniques for providing animation in electronic communications. An image is generated by capturing multiple photographs from a camera or video camera. The first photograph is called the “key photo.” Using a graphics program, photos subsequent to the key photo are edited to cut an element common to the subsequent photos. The cut images are pasted into the key photo as layers. The modified key photo, including the layers, is stored as a web-enabled graphics file, which is then transmitted in conjunction with electronic communication. When the electronic communication is received, the key photo is displayed and each of the layers is displayed and removed in the order that each was taken with a short delay between photos. In this manner, a movie is generated with much smaller files than is currently possible.
1. A method, comprising: receiving a plurality of images of a scene captured in a sequential order using a defined set of photographic parameters; designating a first image of the plurality of images as a first key image; identifying portions of a first plurality of images that follow the first key image in the sequential order and differ from the first key image to a degree corresponding to a first sensitivity level; cutting the identified portions of the first plurality of images that follow the first key image to produce a first plurality of cut images; superimposing the first plurality of cut images onto the first key image as a first plurality of layers such that each cut portion is displayed in the first key image in a position corresponding to the position of the cut portion in the corresponding image of the first plurality of images and displayed in a time sequence proportional to the timing between the corresponding image of the first plurality of images and the first key image; and saving the first key image and the first plurality of layers as a single, web-enabled graphic file. 2. The method of claim 1, further comprising displaying each image of the plurality of cut images within the key image in turn and in accordance with the time sequence. 3. The method of claim 2, further comprising repeating the displaying of each image of the plurality of cut images in turn and in accordance with the time sequence. 4. The method of claim 2, further comprising repeating the displaying of each image of the plurality of cut images in a reverse order. 5. 
The method of claim 1, further comprising: determining that a first and second of the cut images are similar; displaying only the first cut image, cut images of the plurality of cut images that are in an intervening interval between the first and second cut images and the second cut image in turn and in accordance with the time sequence; and repeating the displaying of the first cut image, the cut images of the plurality of cut images that are in the intervening interval between the first and second cut images and the second cut image. 6. An apparatus, comprising: a processor; a non-transitory, computer-readable storage medium coupled to the processor; and logic, stored on the computer-readable storage medium and executed on the processor, for: receiving a plurality of images of a scene captured in a sequential order using a defined set of photographic parameters; designating a first image of the plurality of images as a first key image; identifying portions of a first plurality of images that follow the first key image in the sequential order and differ from the first key image to a degree corresponding to a first sensitivity level; cutting the identified portions of the first plurality of images that follow the first key image to produce a first plurality of cut images; superimposing the first plurality of cut images onto the first key image as a first plurality of layers such that each cut portion is displayed in the first key image in a position corresponding to the position of the cut portion in the corresponding image of the first plurality of images and displayed in a time sequence proportional to the timing between the corresponding image of the first plurality of images and the first key image; and saving the first key image and the first plurality of layers as a single, web-enabled graphic file. 7. 
The apparatus of claim 6, the logic further comprising logic for displaying each image of the plurality of cut images within the key image in turn and in accordance with the time sequence. 8. The apparatus of claim 7, the logic further comprising logic for repeating the displaying of each image of the plurality of cut images in turn and in accordance with the time sequence. 9. The apparatus of claim 7, the logic further comprising logic for repeating the displaying of each image of the plurality of cut images in a reverse order. 10. The apparatus of claim 6, the logic further comprising logic for: determining that a first and second of the cut images are similar; displaying only the first cut image, cut images of the plurality of cut images that are in an intervening interval between the first and second cut images and the second cut image in turn and in accordance with the time sequence; and repeating the displaying of the first cut image, the cut images of the plurality of cut images that are in the intervening interval between the first and second cut images and the second cut image. 11. 
A computer programming product, comprising: a non-transitory, computer-readable storage medium coupled to the processor; and logic, stored on the computer-readable storage medium for execution on a processor, for: receiving a plurality of images of a scene captured in a sequential order using a defined set of photographic parameters; designating a first image of the plurality of images as a first key image; identifying portions of a first plurality of images that follow the first key image in the sequential order and differ from the first key image to a degree corresponding to a first sensitivity level; cutting the identified portions of the first plurality of images that follow the first key image to produce a first plurality of cut images; superimposing the first plurality of cut images onto the first key image as a first plurality of layers such that each cut portion is displayed in the first key image in a position corresponding to the position of the cut portion in the corresponding image of the first plurality of images and displayed in a time sequence proportional to the timing between the corresponding image of the first plurality of images and the first key image; and saving the first key image and the first plurality of layers as a single, web-enabled graphic file. 12. The computer programming product of claim 11, the logic further comprising logic for displaying each image of the plurality of cut images within the key image in turn and in accordance with the time sequence. 13. The computer programming product of claim 12, the logic further comprising logic for repeating the displaying of each image of the plurality of cut images in turn and in accordance with the time sequence. 14. The computer programming product of claim 12, the logic further comprising logic for repeating the displaying of each image of the plurality of cut images in a reverse order. 15. 
The computer programming product of claim 11, the logic further comprising logic for: determining that a first and second of the cut images are similar; displaying only the first cut image, cut images of the plurality of cut images that are in an intervening interval between the first and second cut images and the second cut image in turn and in accordance with the time sequence; and repeating the displaying of the first cut image, the cut images of the plurality of cut images that are in the intervening interval between the first and second cut images and the second cut image.
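The pipeline of claim 1 (designate a key image, find portions of later frames that differ beyond a sensitivity level, cut them, and replay the cut layers over the key image in capture order) can be sketched on toy grayscale images. The list-of-lists pixel representation and the absolute-difference test are simplifying assumptions, not the claimed encoding.

```python
def diff_regions(key, frame, sensitivity=10):
    """Cut the portion of `frame` that differs from the key image by more
    than `sensitivity` (the claimed sensitivity level); matching pixels
    become None so only the changed element survives as the layer."""
    return [[px if abs(px - kpx) > sensitivity else None
             for kpx, px in zip(krow, frow)]
            for krow, frow in zip(key, frame)]

def build_layers(frames, sensitivity=10):
    """Designate the first frame as the key image and reduce each later
    frame to a cut layer, per the claimed method."""
    key, rest = frames[0], frames[1:]
    return key, [diff_regions(key, f, sensitivity) for f in rest]

def play(key, layers):
    """Yield composited frames in capture order: each layer is
    superimposed on the key photo at its original position."""
    yield key
    for layer in layers:
        yield [[cut if cut is not None else kpx
                for kpx, cut in zip(krow, lrow)]
               for krow, lrow in zip(key, layer)]
```

Because only the changed portions are retained per layer, the stored file is much smaller than storing every full frame, which is the size advantage the abstract claims.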
2,600
10,160
10,160
13,626,718
2,653
Electronic devices are designed having a housing and are configured to provide an improved degree of protection to electrical and/or mechanical components disposed therein that may be susceptible to damage from environmental elements external from the housing. The electrical device can be one carried or worn by a user, e.g., on the user's head, which can include an external component of a hearing prosthesis. The electrical device can be configured to provide an output signal in the event that a predetermined condition within the housing is detected to alert the user and/or place the electrical device in an alternative state of operation.
1. A hearing prosthesis comprising: a housing that is adapted for placement adjacent a user's ear, the housing including a moisture-sensitive component disposed therein; and a sensor disposed within the housing that is configured to detect moisture within the housing. 2. The hearing prosthesis as recited in claim 1 further comprising means for providing an indication to the user when moisture is detected. 3. The hearing prosthesis as recited in claim 1 further comprising means for altering a function of the hearing prosthesis when moisture is detected. 4. The hearing prosthesis as recited in claim 1 wherein the sensor is positioned within the housing adjacent an opening in the housing. 5. The hearing prosthesis as recited in claim 4 wherein the sensor is interposed within the housing between the opening and the moisture-sensitive component. 6. The hearing prosthesis as recited in claim 1 wherein the moisture-sensitive component is an electrical component. 7. The hearing prosthesis as recited in claim 1 wherein the hearing prosthesis provides an output in response to the sensor detecting moisture, wherein the output is selected from the group consisting of a visual alarm, an audible alarm, data information, changing the state of operation of one or more components disposed within the housing from a normal state to an alternative state, and combinations thereof. 8. The hearing prosthesis as recited in claim 6 wherein the output is transmitted to a separate device. 9. The hearing prosthesis as recited in claim 6 where the output is transmitted wirelessly. 10. An apparatus comprising: a component of a hearing prosthesis, the component having a housing that includes an electrical component disposed therein; and a sensor disposed within the housing for detecting moisture; wherein the apparatus is configured to provide an output when moisture is detected by the sensor. 11. 
The apparatus as recited in claim 10 wherein the output is used to produce one or both of a visual indication and an audible indication. 12. The apparatus as recited in claim 10 wherein the output is used to place the hearing prosthesis in an alternate state of operation that is other than a normal state of operation. 13. The apparatus as recited in claim 12 wherein the alternate state of operation is a turned-off state. 14. The apparatus as recited in claim 10 wherein the output is used to provide information data. 15. The apparatus as recited in claim 14 wherein the information data is stored. 16. The apparatus as recited in claim 10 wherein the output is used to shut off one or more of the electrical components. 17. The apparatus as recited in claim 10 wherein the housing is worn on a user's head. 18. The apparatus as recited in claim 17 wherein the housing is worn adjacent a user's ear. 19. The apparatus as recited in claim 10 wherein the moisture sensor is positioned within the housing adjacent an opening through the housing. 20. The apparatus as recited in claim 10 wherein the output is provided to a device that is remote from the component. 21. A device comprising: a housing adapted to being held, worn, or carried by a user and as a result subjected to a high-moisture environment; and a sensor that is disposed within the housing and is configured to detect the presence of moisture; wherein the device performs a function to alert the user and/or to alter the state of device operation upon the detection of moisture. 22. The device as recited in claim 21 wherein the housing comprises a moisture-sensitive component disposed therein. 23. The device as recited in claim 21 wherein the moisture-sensitive component is an electrical component. 24. The device as recited in claim 21 that is a battery-powered electrical device that comprises one or more electrical components disposed therein. 25. 
The device as recited in claim 21 wherein the device is configured to perform one or more functions selected from the group consisting of providing a visual alarm, providing an audible alarm, providing data information, changing the state of operation of one or more of the components disposed within the housing from a normal state to an alternative state, and combinations thereof. 26. The device as recited in claim 21 wherein the housing is adapted to be worn on the user's head. 27. The device as recited in claim 26 wherein the device is worn adjacent a user's ear. 28. The device as recited in claim 26 wherein the device comprises an external component of a hearing prosthesis. 29. The device as recited in claim 28 wherein the external component is worn behind a user's ear. 30. The device as recited in claim 21 wherein the device comprises a receiver component that receives a signal from an electrical device. 31. The device as recited in claim 30 wherein the electrical device is a cellular phone, and the receiver component comprises a remote receiver. 32. The device as recited in claim 30 wherein the housing includes the receiver component and a transmitter component. 33. The device as recited in claim 21 wherein the sensor is positioned within the housing at a location adjacent an opening through the housing. 34. The device as recited in claim 33 wherein the sensor is interposed between the opening and a moisture-sensitive component disposed within the housing. 35. The device as recited in claim 21 wherein the sensor additionally detects temperature. 36. 
A method for detecting the presence of moisture within a device worn on a user's head, the method comprising the steps of: placing a sensor within a housing of the device, wherein the housing comprises a moisture-sensitive component disposed therein, and wherein the sensor is configured to detect the presence of moisture; and providing an indication when the sensor detects moisture, wherein the indication is used to alert the user and/or alter the operation of the device. 37. The method as recited in claim 36 wherein during the step of providing, the indication to alert the user is in the form of a visual alert and/or an audible alert. 38. The method as recited in claim 36 wherein during the step of providing, the indication is used to shut the device off. 39. The method as recited in claim 36 wherein during the step of providing, data information is provided. 40. The method as recited in claim 36 wherein during the step of placing, the sensor is positioned adjacent an opening through the housing. 41. The method as recited in claim 40 wherein during the step of placing, the sensor is positioned within the housing at a location interposed between the opening and the moisture-sensitive component. 42. The method as recited in claim 36 wherein before the step of placing, the device that is selected is an external component of a hearing prosthesis. 43. The method as recited in claim 36 where before the step of placing, the sensor that is selected is one that also detects temperature. 44. The method as recited in claim 36 wherein during the step of providing, an output is provided to an electrical component disposed within the housing. 45. The method as recited in claim 36 wherein during the step of providing, an output is provided to a component remote from the device.
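The monitor-and-respond behavior recited across these claims (poll a moisture sensor inside the housing, log the readings as data information, and on detection raise an alert and move the device from a normal state to an alternative state such as powered-off) can be sketched as a small state machine. `read_sensor`, `notify`, and the threshold value are injected placeholders, not part of the patented hardware.

```python
from enum import Enum, auto

class State(Enum):
    NORMAL = auto()
    ALTERNATIVE = auto()  # e.g. a turned-off state protecting the electronics

class MoistureMonitor:
    """Polls a moisture sensor inside the housing; on detection it raises
    an alert (visual/audible alarm) and alters the state of operation,
    mirroring the claimed outputs."""
    def __init__(self, read_sensor, notify, threshold=0.5):
        self.read_sensor = read_sensor  # callable returning a moisture level
        self.notify = notify            # callable delivering the user alert
        self.threshold = threshold
        self.state = State.NORMAL
        self.log = []                   # stored data information

    def poll(self):
        level = self.read_sensor()
        self.log.append(level)
        if level > self.threshold and self.state is State.NORMAL:
            self.notify("moisture detected")  # alert the user
            self.state = State.ALTERNATIVE    # alter the state of operation
        return self.state
```

Injecting the sensor and notifier as callables keeps the sketch hardware-agnostic: the same logic could drive an LED, a tone through the receiver, or a wireless message to a separate device.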
2,600
10,161
10,161
14,541,132
2,643
A messaging system and methods that can be employed simultaneously during a voice conversation, ensuring that not all information conveyed to the called party can be overheard, and eliminating the need for callers to switch between calling and texting to send detailed information that needs to be written down. Text messages are stored in a searchable database format, as opposed to a running series of sequential messages, enabling both security and quick access to information when needed. Interactivity allows text messages to respond to dates/times to determine a course of action, such as alarm notification of upcoming deadlines, or to automatically eliminate old and outdated messages.
1. A method, comprising: transmitting and receiving alphanumeric text characters between a plurality of mobile devices via a communication network, and storing the transmitted and received alphanumeric text in records in a searchable database file accessible to users of the transmitting and receiving mobile devices. 2. The method of claim 1, wherein the fields of the record are interactive with functionality of the mobile device, enabling the mobile device to trigger activities based on the contents of specific fields. 3. The method of claim 2, wherein the content of a field in one record can affect the contents of one or more fields in the same record, allowing a time-related field to trigger the removal of all outdated information in other fields of the record. 4. The method of claim 2, wherein the content of a field in one record can affect the operating system of the mobile device, allowing the time-related field to trigger a notification alarm. 5. The method of claim 2, wherein the content of all the records in the database can be searched by a specific field to retrieve all records within the database with the same searched-for item. 6. The method of claim 1, wherein alphanumeric information is exchanged between mobile devices while a voice conversation is in progress. 7. 
A method implemented within a mobile calling device for transmitting text messages while an ongoing voice phone call is being conducted between a call initiating party (CIP) and a call receiving party (CRP), the method comprising the steps of: when a call is initiated, the CIP's phone sending an indicator flag to the CRP's phone; the CIP detecting a counter flag from the CRP indicating its ability to transmit and receive information while the phone call is in progress; the CIP detecting and decoding signals into alphanumeric characters to be displayed as a forwarded message from the CRP; the CIP being able to convert signals generated from pressing alphanumeric characters on a keypad into signals that can be transmitted over a voice communication network to the CRP; the CRP being able to detect and decode signals forwarded from the CIP and display them as a message; the CRP being able to convert signals generated from pressing alphanumeric characters on a keypad into signals that can be transmitted over a voice communication network to the CIP; both the CRP and the CIP being able to store the communicated message in a record in a database file; stored records being searchable and retrievable for viewing; and any given record being able to trigger an action when a specific temporal event occurs. 8. The method of claim 7, wherein the database file holding records can be located on the mobile calling device, or can be located on the cloud and accessed by the mobile calling device. 9. The method of claim 7, wherein a specific temporal event can be a set calendar date, a set time on a set date, or a number of days. 10. The method of claim 7, wherein a specific temporal event triggers an action in the mobile device to alert a user to a specific event. 11. The method of claim 10, wherein an alert can be an alarm-related reminder of a task that is to be done, such as an appointment. 12. 
The method of claim 9, wherein the content of a field in one record can affect the contents of one or more fields in the same record, allowing the time-related field to trigger the removal of all outdated information in other fields of the record. 13. The method of claim 8, wherein the content of all the records in the database can be searched by a specific field to retrieve all records within the database with the same searched-for item. 14. The method of claim 8, wherein the mobile device, being able to store and retrieve records in a database file from a cloud-based computer system, is also able to retrieve media files from a third party that are also stored on a cloud-based computer system. 15. The method of claim 14, wherein the mobile device is able to display the retrieved media file as an ad on the mobile device for a set time period before removing it. 16. A mobile device-implemented method comprising: software for storing and retrieving text messages in a record format in searchable database files on remotely located central processing center computers, with the mobile device able to access, download, and view third-party media files from the remotely located central processing center computers. 17. The method of claim 16, wherein the third-party media can be an image file that will be briefly displayed to the mobile device user. 18. The method of claim 16, wherein the software stored on the mobile device can search fields of the records in the database and interact with them based on the mobile device's internal clock function, enabling the mobile device to trigger activities based on a time setting in a specific field. 19. The method of claim 18, wherein the content of a field in one record can affect the contents of one or more fields in the same record, allowing the time-related field to trigger the removal of all outdated information in other fields of the record. 20. 
The method of claim 16, wherein the content of all the records in the database can be searched by a specific field to retrieve all records within the database with the same
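The record-based storage of claims 1-5 can be sketched with an in-memory SQLite table: messages are rows with searchable fields, and a time-related field triggers removal of outdated records. The schema, field names, and sample data here are assumptions for illustration, not taken from the patent:

```python
# Minimal sketch (assumed schema) of claims 1-5: text messages stored as
# records in a searchable database file, with a time-related field that can
# trigger removal of outdated information.
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (sender TEXT, body TEXT, expires TEXT)")
conn.execute("INSERT INTO messages VALUES ('alice', 'Gate code 4411', '2020-01-01')")
conn.execute("INSERT INTO messages VALUES ('bob', 'Meeting room B', '2999-01-01')")

def purge_outdated(today: date) -> int:
    """Claim 3: the time-related field triggers removal of outdated records."""
    cur = conn.execute("DELETE FROM messages WHERE expires < ?", (today.isoformat(),))
    conn.commit()
    return cur.rowcount

def search_by_sender(sender: str):
    """Claim 5: search all records by a specific field."""
    return conn.execute("SELECT body FROM messages WHERE sender = ?", (sender,)).fetchall()
```

Running `purge_outdated` with the current date deletes alice's expired message while bob's remains retrievable by a field search, mirroring the claimed interplay between the temporal field and the rest of the record.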
2,600
10,162
10,162
14,291,971
2,647
Operating parameters of a hands-free audio system used with a wireless communication device in a moving vehicle are adjusted or tuned in real time, requiring only two persons: one to drive the vehicle, and thus provide actual usage conditions for the hands-free audio system, and one to remotely tune or adjust operating parameters to optimize far-end audio quality. The system is remotely tuned by transmitting audio signals from the vehicle to the far end using a first communications link, and by sending adjustment commands to the vehicle from the far end via a second, data link between the far end and the vehicle. In one embodiment, DTMF signals received from inside or outside the vehicle can tune or be used to diagnose the hands-free system. Test measurements obtained from within and by the hands-free audio system can also be retrieved from a remote location.
1. A method of remotely accessing a hands-free audio system used with a wireless communication device in a vehicle, the wireless communications device being operatively coupled to a controller for the hands-free audio system and which controls the hands-free audio system, the method comprising: transducing audio signals in the vehicle using the hands-free audio system located in the vehicle to provide a signal representing said audio signals in the vehicle; transmitting the signal representing said audio signals to a remotely-located communications device using a first wireless communications link, the remotely-located communications device being at a first remote location; receiving a data signal at the wireless communications device in the vehicle; and changing an operating parameter of the hands-free audio system responsive to data received at the wireless communications device in the vehicle, from said data signal. 2. The method of claim 1, further comprising: establishing a second wireless communications link to a remotely-located computer and wherein the step of changing an operating parameter of the hands-free audio system comprises changing an operating parameter responsive to a command received at the vehicle from the remotely-located computer using the second wireless communications link. 3. The method of claim 1, wherein the step of receiving a data signal comprises: receiving a series of dual-tone, multi-frequency (DTMF) signals over the first wireless communications link. 4. The method of claim 1, wherein the step of receiving a data signal comprises: receiving a dual-tone, multi-frequency (DTMF) audio signal from a DTMF signal generator located inside the vehicle. 5. The method of claim 1, wherein the step of receiving a data signal comprises: receiving a dual-tone, multi-frequency (DTMF) audio signal from a DTMF signal generator located outside the vehicle. 6. 
The method of claim 1, wherein the steps of transducing, transmitting, receiving a data signal and changing an operating parameter, are performed while the vehicle is moving. 7. The method of claim 1, wherein the step of transducing audio signals comprises: detecting near-end speech that is obtained from inside the vehicle, detecting acoustic noise in the vehicle and detecting far-end speech provided by a loudspeaker in the vehicle. 8. The method of claim 1, wherein the data signal comprises information that causes the controller for the hands-free audio system to change a control algorithm for the hands-free audio system. 9. The method of claim 3, wherein the data signal received at the vehicle comprises information that causes the controller for the hands-free audio system to change a noise attenuation factor. 10. The method of claim 3, wherein the data signal comprises information that causes the controller for the hands-free audio system to change an audio filter cutoff frequency. 11. The method of claim 3, wherein the data signal comprises information that causes the controller for the hands-free audio system to change a delay time provided between a loud speaker in the vehicle and a microphone in the vehicle. 12. The method of claim 2, further comprising: transmitting a command to a controller for the hands-free system via the second wireless communications link, said command causing the controller to collect information from the hands-free system and transmit said collected information via, at least one of, the first and second wireless communications links. 13. 
A method of remotely accessing a hands-free audio system that is used with a wireless communication device in a moving vehicle, the wireless communications device being operatively coupled to a controller for the hands-free audio system, the method comprising: establishing a voice communication link between the wireless communication device in the moving vehicle and a communications device located at a first remote location that is away from the moving vehicle; establishing a data communication link between the wireless communications device in the moving vehicle and a computer at the first remote location; monitoring the quality of voice-frequency signals that are received at the communications device at the first remote location, said voice-frequency signals being carried over the voice communication link; and transmitting a first command to the controller for the hands-free audio system from the first remote location to the moving vehicle via the data communication link, the first command directing the controller for the hands-free audio system to change an operating parameter of the hands-free audio system, which affects the voice-frequency signals. 14. The method of claim 13, wherein the step of monitoring the quality of voice-frequency signals comprises detecting echo. 15. The method of claim 13, wherein the first command comprises information that causes the controller for the hands-free audio system to change a control algorithm for the hands-free audio system. 16. The method of claim 13, wherein the first command comprises information that causes the controller for the hands-free audio system to change a noise attenuation factor. 17. The method of claim 13, wherein the first command comprises information that causes the controller for the hands-free audio system to change an audio filter cutoff frequency. 18. 
The method of claim 13, wherein the first command comprises information that causes the controller for the hands-free audio system to change a delay time provided between a loud speaker in the vehicle and a microphone in the vehicle. 19. The method of claim 13, further comprising: transmitting a command to a controller for the hands-free system via the second wireless communications link, said command causing the controller to collect information from the hands-free system and transmit said collected information via the second wireless communications link. 20. An apparatus for adjusting from a remote location, operating parameters of a hands-free audio system used with a wireless communication device in a vehicle that is moving, the apparatus comprising: a controller coupled to the hands-free audio system and coupled to the wireless communication device; a loud speaker coupled to the wireless communication device and configured to provide sound in the vehicle; a microphone within the vehicle and which is operatively coupled to the controller and coupled to the wireless communications device, the microphone being capable of detecting sound from the loud speaker and capable of detecting noise in the vehicle; a non-transitory memory device coupled to the controller, the non-transitory memory device storing program instructions, which when executed cause the controller to: receive data that was transmitted to the wireless communications device over a wireless data link; change an operating parameter of the hands-free audio system responsive to the data received over the wireless data link, the operating parameter determining at least one of noise attenuation, noise cancellation and a delay added to audio signals that are sent to the loudspeaker; and transmit data from the hands-free audio system responsive to a command received by the controller from a remote location.
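Claims 3 and 9-11 describe decoding a series of DTMF digits into a command that changes one operating parameter (noise attenuation, filter cutoff, or loudspeaker-to-microphone delay). A minimal sketch of such a mapping follows; the digit-to-parameter scheme and parameter names are invented for illustration and are not specified in the claims:

```python
# Illustrative sketch (assumed digit scheme and parameter names) of decoding
# a DTMF digit string into a single operating-parameter change, as in
# claims 3 and 9-11. The first digit selects the parameter; the remaining
# digits encode its new value.

PARAM_BY_DIGIT = {"1": "noise_attenuation_db", "2": "filter_cutoff_hz", "3": "delay_ms"}

def apply_dtmf_command(params: dict, dtmf: str) -> dict:
    """Return a copy of the parameter set with one value changed by the command."""
    selector, value = dtmf[0], int(dtmf[1:])
    updated = dict(params)
    updated[PARAM_BY_DIGIT[selector]] = value
    return updated

params = {"noise_attenuation_db": 12, "filter_cutoff_hz": 3400, "delay_ms": 0}
params = apply_dtmf_command(params, "2300")  # "2" selects filter cutoff, set to 300 Hz
```

The same dispatch could equally be driven by commands arriving over the second, data link of claim 2 rather than by in-band DTMF tones.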
2,600
10,163
10,163
14,696,950
2,689
A method and system are provided for personalized assistance for a driver of a motor vehicle. At least one first measurement value characterizing a physical and/or psychological state of the driver is detected by sensors in a driver state detection. The first measurement value, or a combination of a plurality of such first measurement values, is compared with an assigned predetermined target state or target state range. A driver intervention in which a predetermined signal is generated or altered takes place if, in the comparison, there is determined to be a deviation of the first measurement value or the combination of first measurement values from the assigned predetermined target state or target state range. In advance, a driver-dependent designation of at least one sensory channel of the driver to be addressed in the driver intervention has been made, and the signal is predetermined in accordance with the determined deviation. The signal is furthermore predetermined so as to be suitable for addressing the at least one designated sensory channel.
1. A method for personalized assistance for a driver of a vehicle, the method comprising the acts of: detecting a driver state via a sensor-based acquisition of at least one first measurement value characterizing a physical and/or psychological state of the driver; comparing the acquired first measurement value, or a combination of a plurality of said first measurement values, with an assigned predetermined target state or target state range; if the comparing act establishes that the first measurement value, or combination of first measurement values, deviates from the assigned predetermined target state or target state range, performing a driver intervention in which a predetermined signal is generated or altered, wherein at least one sensory channel of the driver needing to be addressed in the driver intervention is designated in a driver-dependent manner, and the predetermined signal is predetermined in accordance with the established deviation and so as to be suitable for addressing the at least one designated sensory channel. 2. The method according to claim 1, wherein the act of detecting the driver state is carried out while the driver is outside of the vehicle. 3. The method according to claim 1, wherein the sensor-based acquisition of the at least one first measurement value takes place by one or more of the following measuring devices: a) a seat occupancy sensor; b) a smart textile comprising a sensor; c) a smart watch; d) a steering wheel sensor; e) a camera; f) smart glasses comprising an eye-tracking function; and g) a communication terminal comprising a sensor. 4. 
The method according to claim 1, wherein the act of detecting the driver state comprises a driver diagnosis in which a question of which sensory channel or sensory channels would most effectively intervene with the driver is determined from at least one first measurement value, and at least one of these determined sensory channels is designated as the sensory channel of the driver needing to be addressed. 5. The method according to claim 4, wherein: the driver diagnosis comprises acquisition of a reaction of the driver to various sensory stimuli; and the question of which sensory channel or sensory channels would most effectively intervene with the driver is determined with the aid of the measurement values and the acquired reaction of the driver. 6. The method according to claim 5, wherein the acquisition of the reaction of the driver to various sensory stimuli takes place with the aid of sensor-based acquisition of at least one of the following: a) one or more measurement values characterizing a physical and/or psychological state of the driver; b) at least one driver action for vehicle control; and c) vehicle state data. 7. The method according to claim 1, wherein the predetermination of the signal or the designation of the at least one sensory channel comprises a selection, via a user interface, of one or more signals or signal characteristics or of one or more sensory channels. 8. The method according to claim 1, wherein: user profile data that identifies the driver and the associated at least one designated sensory channel is stored, and the designation of the at least one sensory channel for the driver takes place in a subsequent vehicle trip through reading out of the user profile data that has been stored for the driver. 9. 
The method according to claim 1, wherein: the predetermined signal is a combination of different individual signals, which each address different sensory channels; and wherein the individual signals either: (a) are generated or altered at substantially the same time, or (b) are prioritized relative to one another in accordance with the designation of the at least one sensory channel, and generated or altered in a staggered manner with respect to one another in a cascade according to the prioritization. 10. The method according to claim 1, wherein if the driver is located in the vehicle, then the driver state detection takes place multiple times, and the driver intervention takes place dynamically in response to the series of first measurement values acquired therein. 11. The method according to claim 1, wherein at least one first measurement value is also used to adjust user-defined vehicle settings. 12. The method according to claim 1, wherein: the driver state detection comprises reading out calendar data including appointments from an electronic calendar of the driver; a question of whether the appointment can be reached in time is determined by determining the travel time from the current location of the vehicle to an appointment location set forth in the calendar data; and if the appointment cannot be reached in time, then an electronic message is automatically sent to a predetermined circle of recipients. 13. The method according to claim 12, wherein the predetermined circle of recipients comprises appointment participants identified in the calendar data for the appointment. 14. The method according to claim 12, wherein the automatic sending of the electronic message is carried out only after a confirmation by the driver via a user interface. 15. The method according to claim 1, wherein a variety of degrees of the driver intervention can be selected by the driver. 16. The method according to claim 1, wherein the vehicle is a motor vehicle. 17. 
A system for personalized assistance for a driver of a vehicle, the system comprising: at least one interface for receiving measurement values from at least one sensor for acquiring at least one first measurement value that characterizes a physical and/or psychological state of the driver; a comparison unit for comparing a first measurement value that is receivable via the interface, or a combination of a plurality of such first measurement values, with a respectively assigned predetermined target state or target state range; and a signal device for a driver intervention, which is configured in order to generate or alter a predetermined signal if the comparison determines that there is a deviation of the first measurement value or the combination of first measurement values from the assigned predetermined target state or target state range; the system further comprising: a designation unit, by which at least one sensory channel of the driver needing to be addressed in the driver intervention is designated; and a signal determination unit which is configured in order to predetermine the signal so as to be in accordance with the determined deviation and so as to be suitable for addressing the at least one designated sensory channel. 18. The system according to claim 17, wherein the vehicle is a motor vehicle. 19. A vehicle comprising the system according to claim 17.
A method and system are provided for personalized assistance for a driver of a motor vehicle. At least one first measurement value characterizing a physical and/or psychological state of the driver is detected by sensors in a driver state detection. The first measurement value, or a combination of a plurality of such first measurement values, is compared with an assigned predetermined target state or target state range. A driver intervention in which a predetermined signal is generated or altered takes place if, in the comparison, there is determined to be a deviation of the first measurement value or the combination of first measurement values from the assigned predetermined target state or target state range. In advance, a driver-dependent designation of at least one sensory channel of the driver to be addressed in the driver intervention has been made, and the signal is predetermined in accordance with the determined deviation. The signal is furthermore predetermined so as to be suitable for addressing the at least one designated sensory channel.1. 
A method for personalized assistance for a driver of a vehicle, the method comprising the acts of: detecting a driver state via a sensor-based acquisition of at least one first measurement value characterizing a physical and/or psychological state of the driver; comparing the acquired first measurement value, or a combination of a plurality of said first measurement values, with an assigned predetermined target state or target state range; if the comparing act establishes that the first measurement value, or combination of first measurement values, deviates from the assigned predetermined target state or target state range, performing a driver intervention in which a predetermined signal is generated or altered, wherein at least one sensory channel of the driver needing to be addressed in the driver intervention is designated in a driver-dependent manner, and the predetermined signal is predetermined in accordance with the established deviation and so as to be suitable for addressing the at least one designated sensory channel. 2. The method according to claim 1, wherein the act of detecting the driver state is carried out while the driver is outside of the vehicle. 3. The method according to claim 1, wherein the sensor-based acquisition of the at least one first measurement value takes place by one or more of the following measuring devices: a) a seat occupancy sensor; b) a smart textile comprising a sensor; c) a smart watch; d) a steering wheel sensor; e) a camera; f) smart glasses comprising an eye-tracking function; and g) a communication terminal comprising a sensor. 4. 
The method according to claim 1, wherein the act of detecting the driver state comprises a driver diagnosis in which a question of which sensory channel or sensory channels would most effectively intervene with the driver is determined from at least one first measurement value, and at least one of these determined sensory channels is designated as the sensory channel of the driver needing to be addressed. 5. The method according to claim 4, wherein: the driver diagnosis comprises acquisition of a reaction of the driver to various sensory stimuli; and the question of which sensory channel or sensory channels would most effectively intervene with the driver is determined with the aid of the measurement values and the acquired reaction of the driver. 6. The method according to claim 5, wherein the acquisition of the reaction of the driver to various sensory stimuli takes place with the aid of sensor-based acquisition of at least one of the following: a) one or more measurement values characterizing a physical and/or psychological state of the driver; b) at least one driver action for vehicle control; and c) vehicle state data. 7. The method according to claim 1, wherein the predetermination of the signal or the designation of the at least one sensory channel comprises a selection, via a user interface, of one or more signals or signal characteristics or of one or more sensory channels. 8. The method according to claim 1, wherein: user profile data that identifies the driver and the associated at least one designated sensory channel is stored, and the designation of the at least one sensory channel for the driver takes place in a subsequent vehicle trip through reading out of the user profile data that has been stored for the driver. 9. 
The method according to claim 1, wherein: the predetermined signal is a combination of different individual signals, which each address different sensory channels; and wherein the individual signals either: (a) are generated or altered at substantially the same time, or (b) are prioritized relative to one another in accordance with the designation of the at least one sensory channel, and generated or altered in a staggered manner with respect to one another in a cascade according to the prioritization. 10. The method according to claim 1, wherein if the driver is located in the vehicle, then the driver state detection takes place multiple times, and the driver intervention takes place dynamically in response to the series of first measurement values acquired therein. 11. The method according to claim 1, wherein at least one first measurement value is also used to adjust user-defined vehicle settings. 12. The method according to claim 1, wherein: the driver state detection comprises reading out calendar data including appointments from an electronic calendar of the driver; a question of whether the appointment can be reached in time is determined by determining the travel time from the current location of the vehicle to an appointment location set forth in the calendar data; and if the appointment cannot be reached in time, then an electronic message is automatically sent to a predetermined circle of recipients. 13. The method according to claim 12, wherein the predetermined circle of recipients comprise appointment participants identified in the calendar data for the appointment. 14. The method according to claim 12, wherein the automatic sending of the electronic message is carried out only after a confirmation by the driver via a user interface. 15. The method according to claim 1, wherein a variety of degrees of the driver intervention can be selected by the driver. 16. The method according to claim 1, wherein the vehicle is a motor vehicle. 17. 
A system for personalized assistance for a driver of a vehicle, the system comprising: at least one interface for receiving measurement values from at least one sensor for acquiring at least one first measurement value that characterizes a physical and/or psychological state of the driver; a comparison unit for comparing a first measurement value that is receivable via the interface, or a combination of a plurality of such first measurement values, with a respectively assigned predetermined target state or target state range; and a signal device for a driver intervention, which is configured in order to generate or alter a predetermined signal if the comparison determines that there is a deviation of the first measurement value or the combination of first measurement values from the assigned predetermined target state or target state range; the system further comprising: a designation unit, by which at least one sensory channel of the driver needing to be addressed in the driver intervention is designated; and a signal determination unit which is configured in order to predetermine the signal so as to be in accordance with the determined deviation and so as to be suitable for addressing the at least one designated sensory channel. 18. The system according to claim 17, wherein the vehicle is a motor vehicle. 19. A vehicle comprising the system according to claim 17.
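The comparison-and-intervention loop claimed above can be sketched in code. This is a minimal illustration, not the patented implementation; all names, metrics, and threshold values are assumptions introduced for the example.

```python
from dataclasses import dataclass


@dataclass
class TargetRange:
    """An assigned predetermined target state range for one measurement."""
    low: float
    high: float

    def contains(self, value: float) -> bool:
        return self.low <= value <= self.high


def driver_intervention(measurements, target_ranges, designated_channel):
    """Compare each first measurement value with its target range and,
    on a deviation, predetermine a signal for the designated sensory
    channel. Returns None when no intervention is needed."""
    for name, value in measurements.items():
        rng = target_ranges[name]
        if not rng.contains(value):
            # Deviation magnitude relative to the nearest range boundary.
            boundary = rng.high if value > rng.high else rng.low
            return {"channel": designated_channel,
                    "metric": name,
                    "deviation": value - boundary}
    return None  # all values within their target ranges


# Hypothetical usage: an elevated heart rate triggers a signal on the
# driver's designated "auditory" channel.
signal = driver_intervention(
    {"heart_rate": 112.0, "skin_conductance": 4.2},
    {"heart_rate": TargetRange(55.0, 100.0),
     "skin_conductance": TargetRange(1.0, 10.0)},
    designated_channel="auditory",
)
```

The per-driver designation of the sensory channel would, per claim 8, be read from stored user profile data rather than passed in directly.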
2,600
10,164
10,164
15,080,207
2,626
Systems and methods for navigating between images of multiple exams using gestures performed on a touch sensitive input device.
1. (canceled) 2. A method of navigating between data items of multiple data sets, wherein each data set comprises multiple data items, the method comprising: storing, in one or more databases, at least a first data set and a second data set, wherein: the first data set includes a plurality of data items, the second data set includes a plurality of data items, both the first data set and the second data set are associated with a data object, the data object is associated with at least a first attribute and a second attribute, the first data set is associated with a first value of the first attribute of the data object, the second data set is associated with a second value of the first attribute of the data object, each of the plurality of data items of the first data set is associated with respective values of the second attribute of the data object, and each of the plurality of data items of the second data set is associated with respective values of the second attribute of the data object; and displaying, on a display of a computing device having one or more computer processors, a first data item of the first data set that is associated with the first value of the first attribute of the data object, wherein the first data item is associated with a first value of the second attribute of the data object; wherein, while displaying the first data item of the first data set, the computing device is configured to: in response to receiving a first input via a touch sensitive input device of the computing device: select a second data item from the first data set; and display the second data item from the first data set on the display of the computing device in place of the first data item; and in response to receiving a second input, different from the first input, via the touch sensitive input device and indicating a request to display data items of the second data set: select the second data set that is associated with the second value of the first attribute of the data 
object; determine the first value of the second attribute of the data object that is associated with the first data item; identify a second data item of the second data set that is associated with the same first value of the second attribute of the data object; select the second data item from the second data set; and display the second data item of the second data set on the display of the computing device in place of the first data item. 3. The method of claim 2, wherein the data items comprise one or more of images, documents, or product configurations. 4. The method of claim 3, wherein the second attribute of the data object comprises at least one of: image number, anatomical position, temporal indicator, series, exam, position within a cardiac cycle, color, model, or make. 5. The method of claim 2, wherein: the data object comprises a product, and the first attribute and/or second attribute comprises at least one of: color, model, manufacturer, year, or mileage. 6. The method of claim 2, wherein the first input comprises movement of a single finger on a touch sensitive input device in a first direction and the second input comprises movement of two fingers on the touch sensitive input device. 7. The method of claim 5, wherein the second input comprises movement of two fingers on the touch sensitive input device in the first direction. 8. 
A method of navigating between data items of multiple data sets, wherein each data set comprises multiple data items, the method comprising: storing, in one or more databases, at least a first data set and a second data set, wherein: the first data set includes a plurality of data items, the second data set includes a plurality of data items, both the first data set and the second data set are associated with a data object, the data object is associated with at least a first attribute and a second attribute, the first data set is associated with a first value of the first attribute of the data object, the second data set is associated with a second value of the first attribute of the data object, each of the plurality of data items of the first data set is associated with respective values of the second attribute of the data object, and each of the plurality of data items of the second data set is associated with respective values of the second attribute of the data object; displaying, on a display of a computing device having one or more computer processors, a first data item of the first data set that is associated with the first value of the first attribute of the data object, wherein the first data item is associated with a first value of the second attribute of the data object; and in response to receiving an input: selecting, by the computing device, the second data set that is associated with the second value of the first attribute of the data object, wherein the first and second data sets are of a same or similar type; determining, by the computing device, the first value of the second attribute of the data object that is associated with the first data item; identifying, by the computing device, a second data item of the second data set that is associated with the same first value of the second attribute of the data object; selecting, by the computing device, the second data item from the second data set; and displaying the second data item on the display 
of the computing device in place of the first data item. 9. The method of claim 8, wherein the data items comprise one or more of images, documents, or product configurations. 10. The method of claim 8, wherein the second attribute of the data object comprises at least one of: image number, anatomical position, temporal indicator, series, exam, position within a cardiac cycle, color, model, or make. 11. The method of claim 8, wherein: the data object comprises a product, and the first attribute and/or second attribute comprises at least one of: color, model, manufacturer, year, or mileage. 12. The method of claim 8, wherein the input comprises multi-touch actions performed by a user on a touch sensitive input device. 13. The method of claim 12, wherein the touch sensitive input device comprises a touchscreen device or a touchpad. 14. The method of claim 8, wherein the input comprises actions detected based on movement of at least a portion of a user. 15. The method of claim 14, wherein the actions are detected based on images of the at least a portion of the user acquired with an image capture device. 16. The method of claim 8, wherein the input comprises movement of a single finger on a touch sensitive input device.
Systems and methods for navigating between images of multiple exams using gestures performed on a touch sensitive input device.1. (canceled) 2. A method of navigating between data items of multiple data sets, wherein each data set comprises multiple data items, the method comprising: storing, in one or more databases, at least a first data set and a second data set, wherein: the first data set includes a plurality of data items, the second data set includes a plurality of data items, both the first data set and the second data set are associated with a data object, the data object is associated with at least a first attribute and a second attribute, the first data set is associated with a first value of the first attribute of the data object, the second data set is associated with a second value of the first attribute of the data object, each of the plurality of data items of the first data set is associated with respective values of the second attribute of the data object, and each of the plurality of data items of the second data set is associated with respective values of the second attribute of the data object; and displaying, on a display of a computing device having one or more computer processors, a first data item of the first data set that is associated with the first value of the first attribute of the data object, wherein the first data item is associated with a first value of the second attribute of the data object; wherein, while displaying the first data item of the first data set, the computing device is configured to: in response to receiving a first input via a touch sensitive input device of the computing device: select a second data item from the first data set; and display the second data item from the first data set on the display of the computing device in place of the first data item; and in response to receiving a second input, different from the first input, via the touch sensitive input device and indicating a request to display data items 
of the second data set: select the second data set that is associated with the second value of the first attribute of the data object; determine the first value of the second attribute of the data object that is associated with the first data item; identify a second data item of the second data set that is associated with the same first value of the second attribute of the data object; select the second data item from the second data set; and display the second data item of the second data set on the display of the computing device in place of the first data item. 3. The method of claim 2, wherein the data items comprise one or more of images, documents, or product configurations. 4. The method of claim 3, wherein the second attribute of the data object comprises at least one of: image number, anatomical position, temporal indicator, series, exam, position within a cardiac cycle, color, model, or make. 5. The method of claim 2, wherein: the data object comprises a product, and the first attribute and/or second attribute comprises at least one of: color, model, manufacturer, year, or mileage. 6. The method of claim 2, wherein the first input comprises movement of a single finger on a touch sensitive input device in a first direction and the second input comprises movement of two fingers on the touch sensitive input device. 7. The method of claim 5, wherein the second input comprises movement of two fingers on the touch sensitive input device in the first direction. 8. 
A method of navigating between data items of multiple data sets, wherein each data set comprises multiple data items, the method comprising: storing, in one or more databases, at least a first data set and a second data set, wherein: the first data set includes a plurality of data items, the second data set includes a plurality of data items, both the first data set and the second data set are associated with a data object, the data object is associated with at least a first attribute and a second attribute, the first data set is associated with a first value of the first attribute of the data object, the second data set is associated with a second value of the first attribute of the data object, each of the plurality of data items of the first data set is associated with respective values of the second attribute of the data object, and each of the plurality of data items of the second data set is associated with respective values of the second attribute of the data object; displaying, on a display of a computing device having one or more computer processors, a first data item of the first data set that is associated with the first value of the first attribute of the data object, wherein the first data item is associated with a first value of the second attribute of the data object; and in response to receiving an input: selecting, by the computing device, the second data set that is associated with the second value of the first attribute of the data object, wherein the first and second data sets are of a same or similar type; determining, by the computing device, the first value of the second attribute of the data object that is associated with the first data item; identifying, by the computing device, a second data item of the second data set that is associated with the same first value of the second attribute of the data object; selecting, by the computing device, the second data item from the second data set; and displaying the second data item on the display 
of the computing device in place of the first data item. 9. The method of claim 8, wherein the data items comprise one or more of images, documents, or product configurations. 10. The method of claim 8, wherein the second attribute of the data object comprises at least one of: image number, anatomical position, temporal indicator, series, exam, position within a cardiac cycle, color, model, or make. 11. The method of claim 8, wherein: the data object comprises a product, and the first attribute and/or second attribute comprises at least one of: color, model, manufacturer, year, or mileage. 12. The method of claim 8, wherein the input comprises multi-touch actions performed by a user on a touch sensitive input device. 13. The method of claim 12, wherein the touch sensitive input device comprises a touchscreen device or a touchpad. 14. The method of claim 8, wherein the input comprises actions detected based on movement of at least a portion of a user. 15. The method of claim 14, wherein the actions are detected based on images of the at least a portion of the user acquired with an image capture device. 16. The method of claim 8, wherein the input comprises movement of a single finger on a touch sensitive input device.
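The cross-data-set navigation step of claim 8 (select the second data set, find the item sharing the current item's second-attribute value, display it in place of the first) can be sketched as follows. Field names and the imaging scenario are assumptions for illustration only.

```python
def navigate_to_matching_item(current_item, second_data_set, attribute="position"):
    """Identify the item of the second data set associated with the same
    value of the shared attribute as the currently displayed item."""
    target_value = current_item[attribute]
    for item in second_data_set:
        if item[attribute] == target_value:
            return item
    return None  # no counterpart at this attribute value


# Hypothetical medical-imaging usage: two exam series indexed by anatomical
# position; a gesture swaps series while preserving the position.
series_a_item = {"series": "A", "position": 12, "pixels": "..."}
series_b = [{"series": "B", "position": p, "pixels": "..."} for p in range(20)]
match = navigate_to_matching_item(series_a_item, series_b)
```

In the claimed system this lookup runs in response to the second (e.g. two-finger) touch input, while the first input pages within the current data set.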
2,600
10,165
10,165
14,179,996
2,689
A portable electronic device can act as a music player, gaming device and a smart phone combined. The device has multiple panels each with a display. The panels are connected in a manner to permit the device to be opened and closed such as by folding the panels. When folded, the device is compact and able to be held in the palm of a hand, yet such a device can be extended into a larger form factor, such as the size of commonly available tablets. Furthermore, when in proximity to a large display device, such a device can act like a laptop or personal computer through a local connection such as Bluetooth, Wi-Fi, or other communication protocol to the display.
1. An electronic device having at least a first and a second panel, each of said panels having on at least one side an electronic display, said panels further being connected in a manner to enable a compact configuration, and a first extended configuration, the display of each of said panels being visible in the compact configuration and the extended configuration. 2. An electronic device as set forth in claim 1 wherein the device is operable to electronically couple with external equipment via a single user action. 3. An electronic device as set forth in claim 1 wherein the panels are connected by at least a first hinge, which is comprised of an elastic material. 4. An electronic device as set forth in claim 1 wherein the panels are connected by at least a first hinge which is comprised of a magnetically operable hinge. 5. An electronic device as set forth in claim 1 wherein the first and second panel are connected in a manner to permit the second panel to slide and rotate into the extended configuration. 6. An electronic device as set forth in claim 1 wherein at least one panel of said first and said second panel includes a plurality of user input sections. 7. An electronic device as set forth in claim 1 wherein the first and second panel are connected via a first hinge that is operable to permit said second panel to rotate around said hinge to cause said device to change from the compact configuration to the first extended configuration, the display of the first and second panels facing the same direction in the first extended configuration. 8. 
An electronic device as set forth in claim 7 further comprising a third panel and a fourth panel, the third panel connected to the first panel via at least a second hinge and the fourth panel connected to the second panel via at least a third hinge, the second and third hinges operable to permit the third and fourth panels to rotate around the second and third hinges respectively to cause the device to change from the first extended configuration to a second extended configuration. 9. An electronic device as set forth in claim 8 wherein the device is operable to couple wirelessly with an electronic unit that provides a service selectable by the electronic device with a single user action. 10. An electronic device as set forth in claim 9 wherein the external device provides a visual display. 11. An electronic device as set forth in claim 8 wherein at least a first one of said hinges is comprised of an elastic material. 12. An electronic device as set forth in claim 8 wherein at least a first one of said hinges is comprised of a magnetically operable hinge. 13. An electronic device as set forth in claim 8 wherein at least a first one of said hinges is comprised of a cylindrical socket, shaft and spring. 14. An electronic device comprising four panels, the device characterized by: a folded mode where the panels are in a stacked configuration, at least a plurality of the panels having a display on at least one side, at least one display panel facing outward when the device is in said folded mode, and wherein the folded panels are connected via at least a first connector to hold the panels together in said folded mode; and an open mode where each panel is connected to and adjacent to at least one of the other panels. 15. An electronic device as set forth in claim 14 wherein at least one panel of said first and said second panel includes a plurality of user input sections. 16. 
An electronic device as set forth in claim 14 further comprising a coupling mechanism to enable a user of the device to electronically couple the device to an external network or equipment with a single user action. 17. An electronic device as set forth in claim 14 wherein the connector is comprised of an elastic material. 18. An electronic device as set forth in claim 14 wherein the connector is comprised of a magnetically operated hinge. 19. An electronic device as set forth in claim 14 wherein the connector is comprised of a spring mechanism. 20. An electronic device comprising a plurality of panels, the device extendible from a closed mode to an open mode, the device comprising: a closed mode where the panels are in a compact configuration, at least one of the panels having a display on at least one side, said display panel facing outward when the device is in said closed mode, and an open mode where the display is extended to provide a display surface greater than when said device is in said closed mode.
A portable electronic device can act as a music player, gaming device and a smart phone combined. The device has multiple panels each with a display. The panels are connected in a manner to permit the device to be opened and closed such as by folding the panels. When folded, the device is compact and able to be held in the palm of a hand, yet such a device can be extended into a larger form factor, such as the size of commonly available tablets. Furthermore, when in proximity to a large display device, such a device can act like a laptop or personal computer through a local connection such as Bluetooth, Wi-Fi, or other communication protocol to the display.1. An electronic device having at least a first and a second panel, each of said panels having on at least one side an electronic display, said panels further being connected in a manner to enable a compact configuration, and a first extended configuration, the display of each of said panels being visible in the compact configuration and the extended configuration. 2. An electronic device as set forth in claim 1 wherein the device is operable to electronically couple with external equipment via a single user action. 3. An electronic device as set forth in claim 1 wherein the panels are connected by at least a first hinge, which is comprised of an elastic material. 4. An electronic device as set forth in claim 1 wherein the panels are connected by at least a first hinge which is comprised of a magnetically operable hinge. 5. An electronic device as set forth in claim 1 wherein the first and second panel are connected in a manner to permit the second panel to slide and rotate into the extended configuration. 6. An electronic device as set forth in claim 1 wherein at least one panel of said first and said second panel includes a plurality of user input sections. 7. 
An electronic device as set forth in claim 1 wherein the first and second panel are connected via a first hinge that is operable to permit said second panel to rotate around said hinge to cause said device to change from the compact configuration to the first extended configuration, the display of the first and second panels facing the same direction in the first extended configuration. 8. An electronic device as set forth in claim 7 further comprising a third panel and a fourth panel, the third panel connected to the first panel via at least a second hinge and the fourth panel connected to the second panel via at least a third hinge, the second and third hinges operable to permit the third and fourth panels to rotate around the second and third hinges respectively to cause the device to change from the first extended configuration to a second extended configuration. 9. An electronic device as set forth in claim 8 wherein the device is operable to couple wirelessly with an electronic unit that provides a service selectable by the electronic device with a single user action. 10. An electronic device as set forth in claim 9 wherein the external device provides a visual display. 11. An electronic device as set forth in claim 8 wherein at least a first one of said hinges is comprised of an elastic material. 12. An electronic device as set forth in claim 8 wherein at least a first one of said hinges is comprised of a magnetically operable hinge. 13. An electronic device as set forth in claim 8 wherein at least a first one of said hinges is comprised of a cylindrical socket, shaft and spring. 14. 
An electronic device comprising four panels, the device characterized by: a folded mode where the panels are in a stacked configuration, at least a plurality of the panels having a display on at least one side, at least one display panel facing outward when the device is in said folded mode, and wherein the folded panels are connected via at least a first connector to hold the panels together in said folded mode; and an open mode where each panel is connected to and adjacent to at least one of the other panels. 15. An electronic device as set forth in claim 14 wherein at least one panel of said first and said second panel includes a plurality of user input sections. 16. An electronic device as set forth in claim 14 further comprising a coupling mechanism to enable a user of the device to electronically couple the device to an external network or equipment with a single user action. 17. An electronic device as set forth in claim 14 wherein the connector is comprised of an elastic material. 18. An electronic device as set forth in claim 14 wherein the connector is comprised of a magnetically operated hinge. 19. An electronic device as set forth in claim 14 wherein the connector is comprised of a spring mechanism. 20. An electronic device comprising a plurality of panels, the device extendible from a closed mode to an open mode, the device comprising: a closed mode where the panels are in a compact configuration, at least one of the panels having a display on at least one side, said display panel facing outward when the device is in said closed mode, and an open mode where the display is extended to provide a display surface greater than when said device is in said closed mode.
2,600
10,166
10,166
15,114,860
2,625
One or more embodiments of the present disclosure provide a system and method for presenting a user interface on a wearable electronic device. In certain embodiments, input is received from at least one sensor coupled to the wearable electronic device. Once the input from the at least one sensor is received, an orientation of the wearable electronic device is determined with respect to an object to which the wearable electronic device is attached. When the orientation of the wearable electronic device is determined, a user interface is presented on a display of the wearable electronic device. In embodiments, the user interface is displayed in an orientation that is based, at least in part, on the determined orientation of the wearable electronic device.
1. A method for presenting a user interface on a wearable electronic device, the method comprising: receiving input from at least one sensor coupled to the wearable electronic device; determining, based on the input from the at least one sensor, an orientation of the wearable electronic device with respect to an object to which the wearable electronic device is attached; and displaying a user interface on a display of the wearable electronic device, wherein the user interface is displayed in a first orientation based, at least in part, on the determined orientation of the wearable electronic device. 2. The method of claim 1, wherein the sensor is an accelerometer. 3. The method of claim 1, wherein the sensor is a biometric sensor configured to detect a pulse associated with the object to which the wearable electronic device is attached. 4. The method of claim 1, wherein the sensor is configured to determine whether the display of the wearable electronic device is in a field of view of a portion of the object to which the wearable electronic device is attached. 5. The method of claim 1, further comprising receiving additional input, wherein the orientation of the user interface is based, at least in part, on the additional input. 6. The method of claim 5, wherein the additional input is touch input on the display of the wearable electronic device. 7. The method of claim 1, wherein the sensor is a pressure sensor. 8. The method of claim 1, further comprising receiving input from a voice input mechanism, wherein the input from the voice input mechanism is used, in conjunction with the input from the at least one sensor, to determine the orientation of the wearable electronic device. 9. 
A method for presenting a user interface on a wearable electronic device, the method comprising: receiving input from at least one sensor coupled to the wearable electronic device; determining, based on the input from the at least one sensor, whether a display of the wearable electronic device is in a field of view of a wearer of the wearable electronic device; when it is determined that the display of the wearable electronic device is not in a field of view of the wearer of the wearable electronic device, causing the display to enter a standby mode; and when it is determined that the display of the wearable electronic device is in the field of view of the wearer of the wearable electronic device: determining, based on the input from the at least one sensor, an orientation of the wearable electronic device; and displaying a user interface on the display of the wearable electronic device, wherein the user interface is displayed in a first orientation based, at least in part, on the determined orientation of the wearable electronic device. 10. The method of claim 9, wherein determining whether a display of the wearable electronic device is in a field of view of a wearer of the wearable electronic device comprises determining whether the display is at least partially occluded. 11. The method of claim 9, further comprising: detecting a reorientation of the wearable electronic device; and displaying the user interface in a second orientation based, at least in part, on the detected reorientation of the wearable electronic device, wherein the first orientation is different from the second orientation. 12. The method of claim 9, wherein the at least one sensor is a light sensor. 13. The method of claim 9, wherein the at least one sensor is a microphone. 14. The method of claim 9, wherein the at least one sensor is a proximity sensor. 15. The method of claim 9, wherein the at least one sensor is a camera. 16. 
A device comprising: at least one sensor; at least one processor; and a memory coupled to the at least one processor, the memory for storing instructions which, when executed by the at least one processor, perform a method for presenting a user interface on a wearable electronic device, the method comprising: receiving input from the at least one sensor; determining, based on the input from the at least one sensor, whether a display of the wearable electronic device is in a field of view of a wearer of the wearable electronic device; when it is determined that the display of the wearable electronic device is not in a field of view of the wearer of the wearable electronic device, causing the display to enter a standby mode; and when it is determined that the display of the wearable electronic device is in the field of view of the wearer of the wearable electronic device: determining, based on the input from the at least one sensor, an orientation of the wearable electronic device; and displaying a user interface on the display of the wearable electronic device, wherein the user interface is displayed in a first orientation based, at least in part, on the determined orientation of the wearable electronic device. 17. The device of claim 16, wherein determining whether a display of the wearable electronic device is in a field of view of a wearer of the wearable electronic device comprises determining whether the display is at least partially occluded. 18. The device of claim 16, further comprising instructions for: detecting a reorientation of the wearable electronic device; and displaying the user interface in a second orientation based, at least in part, on the detected reorientation of the wearable electronic device, wherein the first orientation is different from the second orientation. 19. The device of claim 16, wherein the at least one sensor is a light sensor. 20. The device of claim 16, wherein the at least one sensor is a microphone.
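The method recited in the claims above (determine field of view, enter standby when occluded, otherwise pick a UI orientation from the device orientation) can be sketched in Python. The sensor model, the 45-degree threshold, and all names here are illustrative assumptions, not part of the claims.

```python
from dataclasses import dataclass

@dataclass
class SensorInput:
    # Illustrative sensor reading: a tilt angle derived from an
    # accelerometer, plus an occlusion flag from a light or
    # proximity sensor (assumed model, not from the claims).
    tilt_degrees: float
    display_occluded: bool

def present_ui(sensor: SensorInput) -> str:
    """Sketch of the claimed method: standby when the display is not
    in the wearer's field of view; otherwise display the UI in an
    orientation based on the determined device orientation."""
    if sensor.display_occluded:
        return "standby"  # display not in wearer's field of view
    # Determine device orientation and choose the UI orientation.
    orientation = "portrait" if abs(sensor.tilt_degrees) < 45 else "landscape"
    return f"display-ui:{orientation}"
```

For example, `present_ui(SensorInput(10.0, False))` yields `display-ui:portrait`, while an occluded display always yields `standby`.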
2,600
10,167
10,167
14,408,514
2,667
Checking device for a label, with a detection and processing unit for the detection of the label and with a layout unit for the creation of a layout of the label, wherein the checking device comprises a data converter unit configured in such a way that it generates test data from data for creating the label in order to control the detection and processing unit. The invention also relates to a data converter unit for use in a checking device, wherein the data converter unit is configured in such a way that it generates test data from data for creating a label for checking the label.
1. A checking device for a label, comprising a detection and processing unit for the detection of the label and a layout unit for the creation of a layout of the label, wherein: the checking device comprises a data converter unit, which is configured in such a way that it generates test data from data for the creation of the label in order to control the detection and processing unit. 2. The checking device according to claim 1, wherein the detection and processing unit is configured as an image processor. 3. The checking device according to claim 1, wherein the test data exist as metadata. 4. The checking device according to claim 1, wherein the data converter unit is integrated in the layout unit and/or in the detection and processing unit. 5. A data converter unit for use in a checking device according to claim 1, wherein the data converter unit is configured in such a way that it generates test data from data for the creation of a label for the checking of the label. 6. The data converter unit according to claim 5, wherein the data converter unit is configured in such a way that the test data are available in the form of metadata.
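A minimal sketch of the claimed data converter: it derives test metadata from the label-creation data so the detection and processing unit knows what to verify and where. All field names and the data layout here are illustrative assumptions.

```python
def generate_test_data(label_data: dict) -> dict:
    """Derive test metadata from label-creation data: each printed
    field becomes an expected value plus the layout position where
    the image processor should look for it."""
    return {
        field["name"]: {
            "expected_text": field["text"],
            "position": field["position"],  # layout position to inspect
        }
        for field in label_data["fields"]
    }
```

The resulting metadata can then drive the image processor of claim 2, comparing the detected text at each position against `expected_text`.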
2,600
10,168
10,168
16,113,900
2,631
Methods and apparatuses for direct sequence detection can receive an input signal over a communication channel. Next, the input signal can be sampled based on a clock signal to obtain a sampled voltage. A set of reference voltages can be generated based on a main cursor, a set of pre-cursors, and a set of post-cursors associated with the communication channel. Each generated reference voltage in the set of reference voltages can correspond to a particular sequence of symbols. A sequence corresponding to the sampled voltage can be selected based on comparing the sampled voltage with the set of reference voltages.
1-18. (canceled) 19. An integrated circuit (IC), comprising: a plurality of comparators, wherein each comparator has a respective reference voltage that is adjustable, wherein each comparator outputs a result signal based on comparing a received input from a communication channel with the comparator's reference voltage, wherein reference voltages for the plurality of comparators are computed based on a main cursor and at least one pre-cursor associated with the communication channel, and wherein each generated reference voltage corresponds to a particular sequence of symbols; and a sequence-selection circuit to select a sequence of symbols based on result signals outputted by the plurality of comparators. 20. The IC of claim 19, wherein each symbol in the sequence of symbols is transmitted over the communication channel at successive time instances, and wherein the sequence of symbols is selected based on received input that is sampled at a single time instance. 21. The IC of claim 19, wherein the sequence-selection circuit comprises: a sequence generation circuit to generate a set of 2^M possible sequences of symbols based on the result signals outputted by the plurality of comparators; and a chain of M multiplexers, where each multiplexer in the chain of M multiplexers selects a progressively smaller subset of possible sequences of symbols from the set of 2^M possible sequences of symbols, and wherein M previous symbols are used as select inputs for the chain of M multiplexers. 22. 
The IC of claim 19, wherein the sequence-selection circuit comprises: a probable-sequence-selection circuit to select a plurality of probable sequences of symbols; an error checking circuit to output a comparison result based on comparing a symbol in at least one of the plurality of probable sequences of symbols with a corresponding symbol that was predicted in a previously detected sequence of symbols; and a correct-sequence-selection circuit to select the sequence of symbols from the plurality of probable sequences of symbols based on the comparison result. 23. The IC of claim 19, wherein the sequence-selection circuit comprises circuitry to remove intra-symbol interference based on bits of detected symbols in multi-level signaling. 24. The IC of claim 19, wherein the sequence-selection circuit comprises: a plurality of fine comparators, wherein each fine comparator has a fine-comparator reference voltage that is adjustable, wherein each fine comparator outputs a fine-comparator result signal based on comparing the received input with the fine comparator's reference voltage, wherein fine-comparator reference voltages for the plurality of fine comparators are uniformly spread between two fine-comparator reference voltages of the plurality of comparators that are closest to the received input; a probability-quantization circuit to output two probability values based on the fine-comparator result signals outputted by the plurality of fine comparators, wherein the two probability values correspond to two sequences of symbols associated with two reference voltages of the plurality of comparators that are closest to the received input; and a circuit to select the sequence of symbols from a set of sequences of symbols based on the two probability values. 25. 
A receiver in a communication system, the receiver comprising: a plurality of comparators, wherein each comparator has a respective reference voltage that is adjustable, wherein each comparator outputs a result signal based on comparing a received input from a communication channel with the comparator's reference voltage, wherein reference voltages for the plurality of comparators are computed based on a main cursor and at least one pre-cursor associated with the communication channel, and wherein each generated reference voltage corresponds to a particular sequence of symbols; and a sequence-selection circuit to select a sequence of symbols based on result signals outputted by the plurality of comparators. 26. The receiver of claim 25, wherein each symbol in the sequence of symbols is transmitted over the communication channel at successive time instances, and wherein the sequence of symbols is selected based on received input that is sampled at a single time instance. 27. The receiver of claim 25, wherein the sequence-selection circuit comprises: a sequence generation circuit to generate a set of 2^M possible sequences of symbols based on the result signals outputted by the plurality of comparators; and a chain of M multiplexers, where each multiplexer in the chain of M multiplexers selects a progressively smaller subset of possible sequences of symbols from the set of 2^M possible sequences of symbols, and wherein M previous symbols are used as select inputs for the chain of M multiplexers. 28. 
The receiver of claim 25, wherein the sequence-selection circuit comprises: a probable-sequence-selection circuit to select a plurality of probable sequences of symbols; an error checking circuit to output a comparison result based on comparing a symbol in at least one of the plurality of probable sequences of symbols with a corresponding symbol that was predicted in a previously detected sequence of symbols; and a correct-sequence-selection circuit to select the sequence of symbols from the plurality of probable sequences of symbols based on the comparison result. 29. The receiver of claim 25, wherein the sequence-selection circuit comprises circuitry to remove intra-symbol interference based on bits of detected symbols in multi-level signaling. 30. The receiver of claim 25, wherein the sequence-selection circuit comprises: a plurality of fine comparators, wherein each fine comparator has a fine-comparator reference voltage that is adjustable, wherein each fine comparator outputs a fine-comparator result signal based on comparing the received input with the fine comparator's reference voltage, wherein fine-comparator reference voltages for the plurality of fine comparators are uniformly spread between two fine-comparator reference voltages of the plurality of comparators that are closest to the received input; a probability-quantization circuit to output two probability values based on the fine-comparator result signals outputted by the plurality of fine comparators, wherein the two probability values correspond to two sequences of symbols associated with two reference voltages of the plurality of comparators that are closest to the received input; and a circuit to select the sequence of symbols from a set of sequences of symbols based on the two probability values. 31. The receiver of claim 25, comprising an input port to receive the received input over the communication channel. 32. 
A method, comprising: generating a first set of reference voltages based on a main cursor and at least one pre-cursor associated with a communication channel, wherein each generated reference voltage in the first set of reference voltages corresponds to a particular sequence of symbols; and selecting, by using a sequence-selection circuit, a sequence of symbols based on comparing a received input from the communication channel with the first set of reference voltages. 33. The method of claim 32, wherein each symbol in the sequence of symbols is transmitted over the communication channel at successive time instances, and wherein the sequence of symbols is selected based on the received input that is sampled at a single time instance. 34. The method of claim 32, wherein said selecting, by using the sequence-selection circuit, the sequence of symbols comprises: generating a set of 2^M possible sequences of symbols based on said comparing the received input from the communication channel with the first set of reference voltages; and selecting a progressively smaller subset of possible sequences of symbols from the set of 2^M possible sequences of symbols based on M previously detected symbols. 35. The method of claim 32, wherein said selecting the sequence of symbols comprises: selecting a plurality of probable sequences of symbols; comparing a symbol in at least one of the plurality of probable sequences of symbols with a corresponding symbol that was predicted in a previously detected sequence of symbols; and selecting the sequence of symbols from the plurality of probable sequences of symbols based on said comparing. 36. The method of claim 32, wherein said selecting the sequence of symbols comprises removing intra-symbol interference based on bits of detected symbols in multi-level signaling. 37. 
The method of claim 32, wherein said selecting the sequence of symbols comprises: generating a second set of reference voltages that are uniformly spread out between two reference voltages in the first set of reference voltages that are closest to the received input; computing two probability values based on comparing the second set of reference voltages with the received input, wherein the two probability values correspond to two reference voltages in the first set of reference voltages that are closest to the received input; and selecting the sequence of symbols from a set of sequences of symbols based on the two probability values. 38. The method of claim 32, comprising receiving an input over the communication channel.
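The detection scheme in these claims can be illustrated with a minimal sketch: a reference voltage is formed for every candidate symbol sequence from the main cursor and one pre-cursor, and the sequence whose reference voltage is nearest the sampled input is selected (the role of the comparator bank plus selection circuit). The cursor coefficients, binary symbol levels, and nearest-reference selection rule below are illustrative assumptions, not the patented circuit.

```python
from itertools import product

def reference_voltages(main_cursor: float, pre_cursor: float,
                       levels=(-1.0, 1.0)) -> dict:
    """Map each two-symbol sequence (current, next) to the voltage it
    would produce at the sampling instant:
    main_cursor * current + pre_cursor * next."""
    return {
        (cur, nxt): main_cursor * cur + pre_cursor * nxt
        for cur, nxt in product(levels, repeat=2)
    }

def detect_sequence(sample: float, refs: dict) -> tuple:
    """Select the sequence whose reference voltage is closest to the
    sampled input voltage."""
    return min(refs, key=lambda seq: abs(refs[seq] - sample))
```

For example, with `main_cursor=1.0` and `pre_cursor=0.25`, a sampled voltage of 1.3 is closest to the reference 1.25 and so decodes the two-symbol sequence `(1.0, 1.0)` from a single sampling instance, matching the single-time-instance detection described in the claims.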
Methods and apparatuses for direct sequence detection can receive an input signal over a communication channel. Next, the input signal can be sampled based on a clock signal to obtain a sampled voltage. A set of reference voltages can be generated based on a main cursor, a set of pre-cursors, and a set of post-cursors associated with the communication channel. Each generated reference voltage in the set of reference voltages can correspond to a particular sequence of symbols. A sequence corresponding to the sampled voltage can be selected based on comparing the sampled voltage with the set of reference voltages.1-18. (canceled) 19. An integrated circuit (IC), comprising: a plurality of comparators, wherein each comparator has a respective reference voltage that is adjustable, wherein each comparator outputs a result signal based on comparing a received input from a communication channel with the comparator's reference voltage, wherein reference voltages for the plurality of comparators are computed based on a main cursor and at least one pre-cursor associated with the communication channel, and wherein each generated reference voltage corresponds to a particular sequence of symbols; and a sequence-selection circuit to select a sequence of symbols based on result signals outputted by the plurality of comparators. 20. The IC of claim 19, wherein each symbol in the sequence of symbols is transmitted over the communication channel at successive time instances, and wherein the sequence of symbols is selected based on received input that is sampled at a single time instance. 21. 
The IC of claim 19, wherein the sequence-selection circuit comprises: a sequence generation circuit to generate a set of 2M possible sequences of symbols based on the result signals outputted by the plurality of comparators; and a chain of M multiplexers, where each multiplexer in the chain of M multiplexers selects a progressively smaller subset of possible sequences of symbols from the set of 2M possible sequences of symbols, and wherein M previous symbols are used as select inputs for the chain of M multiplexers. 22. The IC of claim 19, wherein the sequence-selection circuit comprises: a probable-sequence-selection circuit to select a plurality of probable sequences of symbols; an error checking circuit to output a comparison result based on comparing a symbol in at least one of the plurality of probable sequences of symbols with a corresponding symbol that was predicted in a previously detected sequence of symbols; and a correct-sequence-selection circuit to select the sequence of symbols from the plurality of probable sequences of symbols based on the comparison result. 23. The IC of claim 19, wherein the sequence-selection circuit comprises circuitry to remove intra-symbol interference based on bits of detected symbols in multi-level signaling. 24. 
The IC of claim 19, wherein the sequence-selection circuit comprises: a plurality of fine comparators, wherein each fine comparator has a fine-comparator reference voltage that is adjustable, wherein each fine comparator outputs a fine-comparator result signal based on comparing the received input with the fine comparator's reference voltage, wherein fine-comparator reference voltages for the plurality of fine comparators are uniformly spread between two fine-comparator reference voltages of the plurality of comparators that are closest to the received input; a probability-quantization circuit to output two probability values based on the fine-comparator result signals outputted by the plurality of fine comparators, wherein the two probability values correspond to two sequences of symbols associated with two reference voltages of the plurality of comparators that are closest to the received input; and a circuit to select the sequence of symbols from a set of sequences of symbols based on the two probability values. 25. A receiver in a communication system, the receiver comprising: a plurality of comparators, wherein each comparator has a respective reference voltage that is adjustable, wherein each comparator outputs a result signal based on comparing a received input from a communication channel with the comparator's reference voltage, wherein reference voltages for the plurality of comparators are computed based on a main cursor and at least one pre-cursor associated with the communication channel, and wherein each generated reference voltage corresponds to a particular sequence of symbols; and a sequence-selection circuit to select a sequence of symbols based on result signals outputted by the plurality of comparators. 26. 
The receiver of claim 25, wherein each symbol in the sequence of symbols is transmitted over the communication channel at successive time instances, and wherein the sequence of symbols is selected based on received input that is sampled at a single time instance. 27. The receiver of claim 25, wherein the sequence-selection circuit comprises: a sequence generation circuit to generate a set of 2M possible sequences of symbols based on the result signals outputted by the plurality of comparators; and a chain of M multiplexers, where each multiplexer in the chain of M multiplexers selects a progressively smaller subset of possible sequences of symbols from the set of 2M possible sequences of symbols, and wherein M previous symbols are used as select inputs for the chain of M multiplexers. 28. The receiver of claim 25, wherein the sequence-selection circuit comprises: a probable-sequence-selection circuit to select a plurality of probable sequences of symbols; an error checking circuit to output a comparison result based on comparing a symbol in at least one of the plurality of probable sequences of symbols with a corresponding symbol that was predicted in a previously detected sequence of symbols; and a correct-sequence-selection circuit to select the sequence of symbols from the plurality of probable sequences of symbols based on the comparison result. 29. The receiver of claim 25, wherein the sequence-selection circuit comprises circuitry to remove intra-symbol interference based on bits of detected symbols in multi-level signaling. 30. 
The receiver of claim 25, wherein the sequence-selection circuit comprises: a plurality of fine comparators, wherein each fine comparator has a fine-comparator reference voltage that is adjustable, wherein each fine comparator outputs a fine-comparator result signal based on comparing the received input with the fine comparator's reference voltage, wherein fine-comparator reference voltages for the plurality of fine comparators are uniformly spread between two fine-comparator reference voltages of the plurality of comparators that are closest to the received input; a probability-quantization circuit to output two probability values based on the fine-comparator result signals outputted by the plurality of fine comparators, wherein the two probability values correspond to two sequences of symbols associated with two reference voltages of the plurality of comparators that are closest to the received input; and a circuit to select the sequence of symbols from a set of sequences of symbols based on the two probability values. 31. The receiver of claim 25, comprising an input port to receive the received input over the communication channel. 32. A method, comprising: generating a first set of reference voltages based on a main cursor and at least one pre-cursor associated with a communication channel, wherein each generated reference voltage in the first set of reference voltages corresponds to a particular sequence of symbols; and selecting, by using a sequence-selection circuit, a sequence of symbols based on comparing a received input from the communication channel with the first set of reference voltages. 33. The method of claim 32, wherein each symbol in the sequence of symbols is transmitted over the communication channel at successive time instances, and wherein the sequence of symbols is selected based on the received input that is sampled at a single time instance. 34. 
The method of claim 32, wherein said selecting, by using the sequence-selection circuit, the sequence of symbols comprises: generating a set of 2^M possible sequences of symbols based on said comparing the received input from the communication channel with the first set of reference voltages; and selecting a progressively smaller subset of possible sequences of symbols from the set of 2^M possible sequences of symbols based on M previously detected symbols. 35. The method of claim 32, wherein said selecting the sequence of symbols comprises: selecting a plurality of probable sequences of symbols; comparing a symbol in at least one of the plurality of probable sequences of symbols with a corresponding symbol that was predicted in a previously detected sequence of symbols; and selecting the sequence of symbols from the plurality of probable sequences of symbols based on said comparing. 36. The method of claim 32, wherein said selecting the sequence of symbols comprises removing intra-symbol interference based on bits of detected symbols in multi-level signaling. 37. The method of claim 32, wherein said selecting the sequence of symbols comprises: generating a second set of reference voltages that are uniformly spread out between two reference voltages in the first set of reference voltages that are closest to the received input; computing two probability values based on comparing the second set of reference voltages with the received input, wherein the two probability values correspond to two reference voltages in the first set of reference voltages that are closest to the received input; and selecting the sequence of symbols from a set of sequences of symbols based on the two probability values. 38. The method of claim 32, comprising receiving an input over the communication channel.
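The multiplexer-chain selection recited in the claims above (claims 27 and 34: narrow a set of 2^M candidate symbol sequences using M previously detected symbols as the select inputs of a chain of M multiplexers) can be illustrated with a small sketch. This is a hypothetical software model, not the patented circuit; the function name, the candidate ordering, and the use of binary symbols are all assumptions made for illustration.

```python
# Hypothetical model of the claimed mux chain: each of the M previously
# detected symbols acts as one multiplexer's select input, halving the set
# of 2**M candidate sequences until a single sequence remains.

def select_sequence(candidates, previous_symbols):
    """Narrow 2**M candidate sequences down to one using M prior symbols.

    candidates: list of 2**M symbol sequences (tuples), ordered so that the
        candidate index encodes the assumed M previous symbols (MSB first).
    previous_symbols: list of M previously detected binary symbols.
    """
    subset = candidates
    for bit in previous_symbols:            # one multiplexer per prior symbol
        half = len(subset) // 2
        subset = subset[half:] if bit else subset[:half]
    assert len(subset) == 1                 # M selects resolve 2**M candidates
    return subset[0]
```

For M = 2 there are four candidates, and two prior symbols pick one of them, mirroring how each multiplexer stage in the claim selects a progressively smaller subset.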
2,600
10,169
10,169
15,224,651
2,613
Systems and methods are provided for visualizing the number of events having different values for a field of interest over a selected time range. The events may be derived from machine data obtained from one or more data sources. User input received via a graphical user interface may specify the field of interest, a time range, and a time granularity for displaying counts of the number of events having various values during different time slots within the selected time range. Events including the specified field during the user-selected time range are identified and values for the field are extracted from the identified events. A visualization indicating a relation between a number of the events occurring within each of a plurality of time slots over the selected time range and each of the unique extracted values of the field is provided to the user via the graphical user interface.
1. A method comprising: creating a set of time stamped, searchable events from a set of raw data, each event in the set of time stamped, searchable events includes a portion of the set of raw data from which the time stamped, searchable event was derived, the set of raw data related to security or performance aspects of one or more information technology systems; identifying a set of unique values included in a particular field that is present in one or more time stamped, searchable events in the set of time stamped, searchable events; causing display of a plurality of rows, each row corresponding to one unique value among the set of unique values, each row having one or more indicators displayed along a timeline, each indicator among the one or more indicators indicating a number of time stamped, searchable events in the set of time stamped, searchable events within a certain time period that includes the unique value in the particular field, each indicator of the one or more indicators is positioned along the timeline according to the certain time period; wherein the method is performed by one or more computing devices. 2. The method of claim 1, wherein the raw data is machine data. 3. The method of claim 1, wherein the time stamped, searchable events are derived at least in part from log files generated by one or more servers. 4. The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade. 5. The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade, the color or shade is applied to each intersection according to a linear scale. 6. 
The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade, the color or shade is applied to each intersection according to a logarithmic scale. 7. The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade, the color or shade is applied to each intersection according to an exponential scale. 8. The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade, the color or shade is applied to each intersection according to a linear scale, the color or shade is applied to each intersection according to a rank assigned to that intersection based on a corresponding number of events. 9. The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade, the color or shade is applied to each intersection according to a linear scale, the color or shade is applied to each intersection using a scale based on a maximum event count and a minimum event count determined from (i) intersections within a row including the intersection for which the color or shade is being applied, (ii) intersections within a column including the intersection for which the color or shade is being applied, or (iii) all displayed intersections. 10. The method of claim 1, further comprising: receiving user input that specifies a time granularity; and determining a duration of time covered by each of the plurality of time periods based on the time granularity. 11. 
The method of claim 1, further comprising: predicting what a plot of a number of events having a specified value for the particular field would look like for future time periods based on extrapolating from an actual number of events for previous time periods; causing display of a graphical representation of a plot based on the predicting. 12. The method of claim 1, further comprising: receiving user input indicating a particular time period to be used for sorting the plurality of rows; and sorting the plurality of rows, wherein each row is positioned in ascending or descending order based on a number of events corresponding to the intersection of that row with the particular time period. 13. The method of claim 1, further comprising: receiving user input selecting an intersection of a row and a time period; and causing display of information pertaining to the intersection that includes any of: a corresponding field value, a count value indicating a number of events associated with the intersection, or a time period associated with the intersection. 14. The method of claim 1, further comprising displaying a statistic for each unique value in the set of unique values for the particular field, wherein the statistic for a given unique value includes any combination of: a minimum event count corresponding to intersections in the row corresponding to the given unique value with time periods, a maximum event count corresponding to the intersections, an average of event counts corresponding to the intersections, a total count of events in multiple intersections, or a percentage of the set of time stamped, searchable events that correspond to multiple intersections. 15. The method of claim 1, further comprising: reordering the plurality of rows based on a drag and drop gesture received from a user input device. 16. 
A non-transitory computer readable storage medium, storing instructions that, when executed by one or more processors, cause performance of: creating a set of time stamped, searchable events from a set of raw data, each event in the set of time stamped, searchable events includes a portion of the set of raw data from which the time stamped, searchable event was derived, the set of raw data related to security or performance aspects of one or more information technology systems; identifying a set of unique values included in a particular field that is present in one or more time stamped, searchable events in the set of time stamped, searchable events; causing display of a plurality of rows, each row corresponding to one unique value among the set of unique values, each row having one or more indicators displayed along a timeline, each indicator among the one or more indicators indicating a number of time stamped, searchable events in the set of time stamped, searchable events within a certain time period that includes the unique value in the particular field, each indicator of the one or more indicators is positioned along the timeline according to the certain time period. 17. The non-transitory computer readable storage medium of claim 16, further comprising: predicting what a plot of a number of events having a specified value for the particular field would look like for future time periods based on extrapolating from an actual number of events for previous time periods; causing display of a graphical representation of a plot based on the predicting. 18. 
A system comprising: a memory having processor-readable instructions stored therein; and a processor configured to access the memory and execute the processor-readable instructions, which when executed by the processor, configures the processor to perform a plurality of functions, including functions to: creating a set of time stamped, searchable events from a set of raw data, each event in the set of time stamped, searchable events includes a portion of the set of raw data from which the time stamped, searchable event was derived, the set of raw data related to security or performance aspects of one or more information technology systems; identifying a set of unique values included in a particular field that is present in one or more time stamped, searchable events in the set of time stamped, searchable events; causing display of a plurality of rows, each row corresponding to one unique value among the set of unique values, each row having one or more indicators displayed along a timeline, each indicator among the one or more indicators indicating a number of time stamped, searchable events in the set of time stamped, searchable events within a certain time period that includes the unique value in the particular field, each indicator of the one or more indicators is positioned along the timeline according to the certain time period. 19. The system of claim 18, wherein the processor is further configured to perform functions to: predicting what a plot of a number of events having a specified value for the particular field would look like for future time periods based on extrapolating from an actual number of events for previous time periods; 20. 
The system of claim 18, further comprising displaying a statistic for each unique value in the set of unique values for the particular field, wherein the statistic for a given unique value includes any combination of: a minimum event count corresponding to intersections in the row corresponding to the given unique value with time periods, a maximum event count corresponding to the intersections, an average of event counts corresponding to the intersections, a total count of events in multiple intersections, or a percentage of the set of time stamped, searchable events that correspond to multiple intersections.
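The core data structure behind the claimed visualization (one row per unique field value, one indicator per time period, each indicator reflecting an event count) can be sketched briefly. This is a hypothetical illustration, not the patented implementation; the function name, the event representation as dicts with a numeric `time` key, and the bucketing arithmetic are assumptions made for the example.

```python
from collections import Counter, defaultdict

# Hypothetical sketch of the claimed heatmap data: bucket time stamped events
# by a user-specified time granularity, and count, per unique value of the
# chosen field, how many events fall in each bucket. Each (value, bucket)
# cell corresponds to one indicator positioned along the timeline.

def bucket_counts(events, field, granularity):
    """events: iterable of dicts, each with a numeric 'time' (seconds) plus
    arbitrary fields. Only events containing `field` are counted.
    Returns {field_value: {bucket_start_time: event_count}}."""
    rows = defaultdict(Counter)
    for e in events:
        if field in e:
            bucket = (e["time"] // granularity) * granularity
            rows[e[field]][bucket] += 1
    return rows
```

A renderer could then map each count to a color or shade on a linear, logarithmic, or exponential scale, as the dependent claims describe.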
Systems and methods are provided for visualizing the number of events having different values for a field of interest over a selected time range. The events may be derived from machine data obtained from one or more data sources. User input received via a graphical user interface may specify the field of interest, a time range, and a time granularity for displaying counts of the number of events having various values during different time slots within the selected time range. Events including the specified field during the user-selected time range are identified and values for the field are extracted from the identified events. A visualization indicating a relation between a number of the events occurring within each of a plurality of time slots over the selected time range and each of the unique extracted values of the field is provided to the user via the graphical user interface.1. A method comprising: creating a set of time stamped, searchable events from a set of raw data, each event in the set of time stamped, searchable events includes a portion of the set of raw data from which the time stamped, searchable event was derived, the set of raw data related to security or performance aspects of one or more information technology systems; identifying a set of unique values included in a particular field that is present in one or more time stamped, searchable events in the set of time stamped, searchable events; causing display of a plurality of rows, each row corresponding to one unique value among the set of unique values, each row having one or more indicators displayed along a timeline, each indicator among the one or more indicators indicating a number of time stamped, searchable events in the set of time stamped, searchable events within a certain time period that includes the unique value in the particular field, each indicator of the one or more indicators is positioned along the timeline according to the certain time period; wherein the method is 
performed by one or more computing devices. 2. The method of claim 1, wherein the raw data is machine data. 3. The method of claim 1, wherein the time stamped, searchable events are derived at least in part from log files generated by one or more servers. 4. The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade. 5. The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade, the color or shade is applied to each intersection according to a linear scale. 6. The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade, the color or shade is applied to each intersection according to a logarithmic scale. 7. The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade, the color or shade is applied to each intersection according to an exponential scale. 8. The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade, the color or shade is applied to each intersection according to a linear scale, the color or shade is applied to each intersection according to a rank assigned to that intersection based on a corresponding number of events. 9. 
The method of claim 1, wherein each indicator among the one or more indicators is an absolute or relative indication of the number of time stamped, searchable events and is displayed using a color or shade, the color or shade is applied to each intersection according to a linear scale, the color or shade is applied to each intersection using a scale based on a maximum event count and a minimum event count determined from (i) intersections within a row including the intersection for which the color or shade is being applied, (ii) intersections within a column including the intersection for which the color or shade is being applied, or (iii) all displayed intersections. 10. The method of claim 1, further comprising: receiving user input that specifies a time granularity; and determining a duration of time covered by each of the plurality of time periods based on the time granularity. 11. The method of claim 1, further comprising: predicting what a plot of a number of events having a specified value for the particular field would look like for future time periods based on extrapolating from an actual number of events for previous time periods; causing display of a graphical representation of a plot based on the predicting. 12. The method of claim 1, further comprising: receiving user input indicating a particular time period to be used for sorting the plurality of rows; and sorting the plurality of rows, wherein each row is positioned in ascending or descending order based on a number of events corresponding to the intersection of that row with the particular time period. 13. The method of claim 1, further comprising: receiving user input selecting an intersection of a row and a time period; and causing display of information pertaining to the intersection that includes any of: a corresponding field value, a count value indicating a number of events associated with the intersection, or a time period associated with the intersection. 14. 
The method of claim 1, further comprising displaying a statistic for each unique value in the set of unique values for the particular field, wherein the statistic for a given unique value includes any combination of: a minimum event count corresponding to intersections in the row corresponding to the given unique value with time periods, a maximum event count corresponding to the intersections, an average of event counts corresponding to the intersections, a total count of events in multiple intersections, or a percentage of the set of time stamped, searchable events that correspond to multiple intersections. 15. The method of claim 1, further comprising: reordering the plurality of rows based on a drag and drop gesture received from a user input device. 16. A non-transitory computer readable storage medium, storing instructions that, when executed by one or more processors, cause performance of: creating a set of time stamped, searchable events from a set of raw data, each event in the set of time stamped, searchable events includes a portion of the set of raw data from which the time stamped, searchable event was derived, the set of raw data related to security or performance aspects of one or more information technology systems; identifying a set of unique values included in a particular field that is present in one or more time stamped, searchable events in the set of time stamped, searchable events; causing display of a plurality of rows, each row corresponding to one unique value among the set of unique values, each row having one or more indicators displayed along a timeline, each indicator among the one or more indicators indicating a number of time stamped, searchable events in the set of time stamped, searchable events within a certain time period that includes the unique value in the particular field, each indicator of the one or more indicators is positioned along the timeline according to the certain time period. 17. 
The non-transitory computer readable storage medium of claim 16, further comprising: predicting what a plot of a number of events having a specified value for the particular field would look like for future time periods based on extrapolating from an actual number of events for previous time periods; causing display of a graphical representation of a plot based on the predicting. 18. A system comprising: a memory having processor-readable instructions stored therein; and a processor configured to access the memory and execute the processor-readable instructions, which when executed by the processor, configures the processor to perform a plurality of functions, including functions to: creating a set of time stamped, searchable events from a set of raw data, each event in the set of time stamped, searchable events includes a portion of the set of raw data from which the time stamped, searchable event was derived, the set of raw data related to security or performance aspects of one or more information technology systems; identifying a set of unique values included in a particular field that is present in one or more time stamped, searchable events in the set of time stamped, searchable events; causing display of a plurality of rows, each row corresponding to one unique value among the set of unique values, each row having one or more indicators displayed along a timeline, each indicator among the one or more indicators indicating a number of time stamped, searchable events in the set of time stamped, searchable events within a certain time period that includes the unique value in the particular field, each indicator of the one or more indicators is positioned along the timeline according to the certain time period. 19. 
The system of claim 18, wherein the processor is further configured to perform functions to: predicting what a plot of a number of events having a specified value for the particular field would look like for future time periods based on extrapolating from an actual number of events for previous time periods; 20. The system of claim 18, further comprising displaying a statistic for each unique value in the set of unique values for the particular field, wherein the statistic for a given unique value includes any combination of: a minimum event count corresponding to intersections in the row corresponding to the given unique value with time periods, a maximum event count corresponding to the intersections, an average of event counts corresponding to the intersections, a total count of events in multiple intersections, or a percentage of the set of time stamped, searchable events that correspond to multiple intersections.
2,600
10,170
10,170
15,729,417
2,689
A method for coordinating reader transmissions, according to one embodiment, includes: at a first reader, receiving from a second reader a request to transmit to a RFID tag; determining whether the first reader is transmitting; and sending a denial of the request from the first reader to the second reader in response to determining that the first reader is transmitting. A method, according to another embodiment, includes: from a first reader, sending to a plurality of readers a request to transmit to a RFID tag; waiting for responses from the plurality of readers; not transmitting to the RFID tag in response to the first reader receiving a denial of the request from any of the readers; and transmitting to the RFID tag in response to the first reader not receiving a denial of the request from any of the readers. Additional systems, methods and computer program products are presented.
1. A method for coordinating reader transmissions, comprising: at a first reader, receiving from a second reader a request to transmit to a radio frequency identification tag; in response to receiving the request, determining whether the first reader is transmitting; and sending a denial of the request from the first reader to the second reader in response to determining that the first reader is transmitting. 2. The method of claim 1, wherein the readers communicate directly with each other via at least one of radio frequency signal and a network. 3. The method of claim 1, wherein the request to transmit is directed to the first reader. 4. The method of claim 1, wherein the first reader sends a completion notice to the second reader upon completion of the transmitting. 5. The method of claim 1, wherein the second reader transmits to the radio frequency identification tag in response to not receiving a denial of the request after a predetermined amount of time has elapsed since the second reader transmitted the request to transmit. 6. The method of claim 1, wherein the first and second readers receive a request to transmit from a third reader, wherein a denial of the third reader's request is sent from the first and second readers when the first reader is transmitting and the second reader is waiting to transmit. 7. The method of claim 6, wherein the third reader waits until the second reader completes transmission before attempting to communicate with a radio frequency identification tag. 8. 
The method of claim 1, wherein the first and second readers receive a request to transmit from a third reader, wherein the request from the third reader includes a request for priority, wherein the first reader aborts transmitting when the first reader is transmitting upon receiving the request from the third reader, wherein the second reader waits to transmit to the radio frequency identification tag until after the third reader communicates with a radio frequency identification tag. 9. The method of claim 1, wherein the first and second readers receive a request to transmit from a third reader, wherein the request from the third reader includes a request for priority, wherein the first reader sends a denial of the third reader's request when the first reader is transmitting upon receiving the request from the third reader, wherein the second reader waits to transmit to the radio frequency identification tag until after the third reader communicates with a radio frequency identification tag. 10. A method for coordinating reader transmissions, comprising: from a first reader, sending to a plurality of readers a request to transmit to a radio frequency identification tag, the request being directed to the readers; waiting for responses from the plurality of readers; not transmitting to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers; and transmitting to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers. 11. The method of claim 10, wherein the readers communicate directly with each other via radio frequency signal. 12. The method of claim 10, wherein the readers communicate with each other via a network. 13. The method of claim 10, comprising: awaiting receipt of a completion notice from the reader sending the denial prior to transmitting in response to receiving a denial of the request. 14. 
The method of claim 10, wherein the first reader transmits to the radio frequency identification tag in response to not receiving a denial of the request after a predetermined amount of time has elapsed since the first reader transmitted the request to transmit. 15. The method of claim 10, comprising: receiving a request to transmit from another reader, wherein a denial of the other reader's request is sent from the first reader when the first reader is transmitting or the first reader is waiting to transmit. 16. The method of claim 15, wherein the other reader waits until the first reader completes transmission before attempting to communicate with a radio frequency identification tag. 17. The method of claim 10, comprising: receiving a request to transmit from another reader, wherein the request from the other reader includes a request for priority, wherein the first reader aborts transmitting when the first reader is transmitting upon receiving the request from the other reader, wherein the first reader waits to retransmit to the radio frequency identification tag until after the other reader communicates with a radio frequency identification tag. 18. The method of claim 10, comprising: receiving a request to transmit from another reader, wherein the request from the other reader includes a request for priority, wherein the first reader sends a denial of the other reader's request when the first reader is transmitting upon receiving the request from the other reader, wherein the remainder of the plurality of readers wait to transmit to the radio frequency identification tag until after the other reader communicates with a radio frequency identification tag. 19. 
A method for coordinating reader transmissions, comprising: for a first of a plurality of readers, determining whether the first reader and any other of the readers interfere with each other; and storing a result of the determination; wherein the first reader sends a request to transmit to those readers determined to interfere with the first reader or vice versa prior to transmitting to a radio frequency identification tag, the request to transmit being directed to those readers determined to interfere with the first reader or vice versa; wherein the first reader does not transmit to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers the request was sent to; and wherein the first reader transmits to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers the request was sent to. 20. The method of claim 19, wherein the determination is performed in response to activating at least one of the readers. 21. The method of claim 20, wherein previous determinations are stored for other readers, wherein the determination involves only the at least one of the readers activating in relation to the other readers. 22. The method of claim 19, wherein the first reader stores the result of the determination. 23. The method of claim 19, wherein the method is performed for each of the readers. 24. The method of claim 19, wherein the readers communicate with each other via a network, each reader being assigned a network address for allowing the first reader to directly send the request to transmit to those readers determined to interfere with the first reader or vice versa. 25. The method of claim 19, wherein only those readers determined to interfere with the first reader or vice versa receive the request from the first reader. 26. 
A system, comprising: a processing circuit; memory coupled to the processing circuit; and an antenna coupled to the processing circuit, wherein the processing circuit is configured to: at a first reader, receive from a second reader a request to transmit to a radio frequency identification tag; and send a denial of the request from the first reader to the second reader when the first reader is transmitting upon receiving the request from the second reader. 27. A computer program product comprising a non-transitory computer readable medium having computer code thereon, which when executed by a reader causes the reader to: receive, by the reader, a request to transmit to a radio frequency identification tag from a remote reader; and send, by the reader, a denial of the request to the remote reader when the reader is transmitting upon receiving the request from the remote reader. 28. A system, comprising: a processing circuit; memory coupled to the processing circuit; and an antenna coupled to the processing circuit, wherein the processing circuit is configured to: from a first reader, send to a plurality of readers a request to transmit to a radio frequency identification tag, the request being directed to the readers; wait for responses from the plurality of readers; not transmit to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers; and transmit to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers. 29. 
A computer program product comprising a non-transitory computer readable medium having computer code thereon, which when executed by a reader causes the reader to: send to a plurality of readers a request to transmit to a radio frequency identification tag, the request being directed to the readers; wait for responses from the plurality of readers; not transmit to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers; and transmit to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers. 30. A system, comprising: a processing circuit; memory coupled to the processing circuit; and an antenna coupled to the processing circuit, wherein the processing circuit is configured to: determine whether a first of a plurality of readers and any other of the readers interfere with each other; store a result of the determination; store a request to transmit from the first reader to those readers determined to interfere with the first reader or vice versa prior to transmitting to a radio frequency identification tag, the request to transmit being directed to those readers determined to interfere with the first reader or vice versa; not transmit to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers receiving the request; and transmit to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers receiving the request. 31. 
A computer program product comprising a non-transitory computer readable medium having computer code thereon, which when executed by a reader causes the reader to: determine whether a first of a plurality of readers and any other of the readers interfere with each other; and store a result of the determination; send a request to transmit from the first reader to those readers determined to interfere with the first reader or vice versa prior to transmitting to a radio frequency identification tag, the request to transmit being directed to those readers determined to interfere with the first reader or vice versa; not transmit to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers receiving the request; and transmit to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers receiving the request.
A method for coordinating reader transmissions, according to one embodiment, includes: at a first reader, receiving from a second reader a request to transmit to an RFID tag; determining whether the first reader is transmitting; and sending a denial of the request from the first reader to the second reader in response to determining that the first reader is transmitting. A method, according to another embodiment, includes: from a first reader, sending to a plurality of readers a request to transmit to an RFID tag; waiting for responses from the plurality of readers; not transmitting to the RFID tag in response to the first reader receiving a denial of the request from any of the readers; and transmitting to the RFID tag in response to the first reader not receiving a denial of the request from any of the readers. Additional systems, methods and computer program products are presented. 1. A method for coordinating reader transmissions, comprising: at a first reader, receiving from a second reader a request to transmit to a radio frequency identification tag; in response to receiving the request, determining whether the first reader is transmitting; and sending a denial of the request from the first reader to the second reader in response to determining that the first reader is transmitting. 2. The method of claim 1, wherein the readers communicate directly with each other via at least one of radio frequency signal and a network. 3. The method of claim 1, wherein the request to transmit is directed to the first reader. 4. The method of claim 1, wherein the first reader sends a completion notice to the second reader upon completion of the transmitting. 5. The method of claim 1, wherein the second reader transmits to the radio frequency identification tag in response to not receiving a denial of the request after a predetermined amount of time has elapsed since the second reader transmitted the request to transmit. 6. 
The method of claim 1, wherein the first and second readers receive a request to transmit from a third reader, wherein a denial of the third reader's request is sent from the first and second readers when the first reader is transmitting and the second reader is waiting to transmit. 7. The method of claim 6, wherein the third reader waits until the second reader completes transmission before attempting to communicate with a radio frequency identification tag. 8. The method of claim 1, wherein the first and second readers receive a request to transmit from a third reader, wherein the request from the third reader includes a request for priority, wherein the first reader aborts transmitting when the first reader is transmitting upon receiving the request from the third reader, wherein the second reader waits to transmit to the radio frequency identification tag until after the third reader communicates with a radio frequency identification tag. 9. The method of claim 1, wherein the first and second readers receive a request to transmit from a third reader, wherein the request from the third reader includes a request for priority, wherein the first reader sends a denial of the third reader's request when the first reader is transmitting upon receiving the request from the third reader, wherein the second reader waits to transmit to the radio frequency identification tag until after the third reader communicates with a radio frequency identification tag. 10. 
A method for coordinating reader transmissions, comprising: from a first reader, sending to a plurality of readers a request to transmit to a radio frequency identification tag, the request being directed to the readers; waiting for responses from the plurality of readers; not transmitting to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers; and transmitting to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers. 11. The method of claim 10, wherein the readers communicate directly with each other via radio frequency signal. 12. The method of claim 10, wherein the readers communicate with each other via a network. 13. The method of claim 10, comprising: awaiting receipt of a completion notice from the reader sending the denial prior to transmitting in response to receiving a denial of the request. 14. The method of claim 10, wherein the first reader transmits to the radio frequency identification tag in response to not receiving a denial of the request after a predetermined amount of time has elapsed since the first reader transmitted the request to transmit. 15. The method of claim 10, comprising: receiving a request to transmit from another reader, wherein a denial of the other reader's request is sent from the first reader when the first reader is transmitting or the first reader is waiting to transmit. 16. The method of claim 15, wherein the other reader waits until the first reader completes transmission before attempting to communicate with a radio frequency identification tag. 17. 
The method of claim 10, comprising: receiving a request to transmit from another reader, wherein the request from the other reader includes a request for priority, wherein the first reader aborts transmitting when the first reader is transmitting upon receiving the request from the other reader, wherein the first reader waits to retransmit to the radio frequency identification tag until after the other reader communicates with a radio frequency identification tag. 18. The method of claim 10, comprising: receiving a request to transmit from another reader, wherein the request from the other reader includes a request for priority, wherein the first reader sends a denial of the other reader's request when the first reader is transmitting upon receiving the request from the other reader, wherein the remainder of the plurality of readers wait to transmit to the radio frequency identification tag until after the other reader communicates with a radio frequency identification tag. 19. A method for coordinating reader transmissions, comprising: for a first of a plurality of readers, determining whether the first reader and any other of the readers interfere with each other; and storing a result of the determination; wherein the first reader sends a request to transmit to those readers determined to interfere with the first reader or vice versa prior to transmitting to a radio frequency identification tag, the request to transmit being directed to those readers determined to interfere with the first reader or vice versa; wherein the first reader does not transmit to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers the request was sent to; and wherein the first reader transmits to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers the request was sent to. 20. 
The method of claim 19, wherein the determination is performed in response to activating at least one of the readers. 21. The method of claim 20, wherein previous determinations are stored for other readers, wherein the determination involves only the at least one of the readers activating in relation to the other readers. 22. The method of claim 19, wherein the first reader stores the result of the determination. 23. The method of claim 19, wherein the method is performed for each of the readers. 24. The method of claim 19, wherein the readers communicate with each other via a network, each reader being assigned a network address for allowing the first reader to directly send the request to transmit to those readers determined to interfere with the first reader or vice versa. 25. The method of claim 19, wherein only those readers determined to interfere with the first reader or vice versa receive the request from the first reader. 26. A system, comprising: a processing circuit; memory coupled to the processing circuit; and an antenna coupled to the processing circuit, wherein the processing circuit is configured to: at a first reader, receive from a second reader a request to transmit to a radio frequency identification tag; and send a denial of the request from the first reader to the second reader when the first reader is transmitting upon receiving the request from the second reader. 27. A computer program product comprising a non-transitory computer readable medium having computer code thereon, which when executed by a reader causes the reader to: receive, by the reader, a request to transmit to a radio frequency identification tag from a remote reader; and send, by the reader, a denial of the request to the remote reader when the reader is transmitting upon receiving the request from the remote reader. 28. 
A system, comprising: a processing circuit; memory coupled to the processing circuit; and an antenna coupled to the processing circuit, wherein the processing circuit is configured to: from a first reader, send to a plurality of readers a request to transmit to a radio frequency identification tag, the request being directed to the readers; wait for responses from the plurality of readers; not transmit to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers; and transmit to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers. 29. A computer program product comprising a non-transitory computer readable medium having computer code thereon, which when executed by a reader causes the reader to: send to a plurality of readers a request to transmit to a radio frequency identification tag, the request being directed to the readers; wait for responses from the plurality of readers; not transmit to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers; and transmit to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers. 30. 
A system, comprising: a processing circuit; memory coupled to the processing circuit; and an antenna coupled to the processing circuit, wherein the processing circuit is configured to: determine whether a first of a plurality of readers and any other of the readers interfere with each other; store a result of the determination; store a request to transmit from the first reader to those readers determined to interfere with the first reader or vice versa prior to transmitting to a radio frequency identification tag, the request to transmit being directed to those readers determined to interfere with the first reader or vice versa; not transmit to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers receiving the request; and transmit to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers receiving the request. 31. A computer program product comprising a non-transitory computer readable medium having computer code thereon, which when executed by a reader causes the reader to: determine whether a first of a plurality of readers and any other of the readers interfere with each other; and store a result of the determination; send a request to transmit from the first reader to those readers determined to interfere with the first reader or vice versa prior to transmitting to a radio frequency identification tag, the request to transmit being directed to those readers determined to interfere with the first reader or vice versa; not transmit to the radio frequency identification tag in response to the first reader receiving a denial of the request from any of the readers receiving the request; and transmit to the radio frequency identification tag in response to the first reader not receiving a denial of the request from any of the readers receiving the request.
2,600
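The coordination scheme recited in the claims of the record above (send a request to transmit to interfering readers, deny while busy, transmit only if no denial arrives) can be sketched as a small in-memory model. This is an illustrative sketch only: the class and method names are hypothetical, and real readers would exchange these messages over an RF signal or a network rather than direct method calls.

```python
# Hypothetical sketch of the claimed reader-coordination protocol.
class Reader:
    def __init__(self, name):
        self.name = name
        self.transmitting = False
        self.neighbors = []          # readers determined to interfere with this one

    def handle_request(self, requester):
        """Deny (return True) when this reader is busy transmitting upon receiving the request."""
        return self.transmitting

    def try_transmit(self):
        """Send a request to every interfering neighbor; transmit to the tag only if none denies."""
        denied = any(n.handle_request(self) for n in self.neighbors)
        if denied:
            return False             # wait and retry later (e.g. after a completion notice)
        self.transmitting = True     # no denial received: safe to talk to the tag
        return True

    def finish(self):
        self.transmitting = False    # a completion notice could be sent to neighbors here

a, b = Reader("A"), Reader("B")
a.neighbors, b.neighbors = [b], [a]  # A and B interfere with each other

assert a.try_transmit() is True      # B is idle, so A's request is not denied
assert b.try_transmit() is False     # A is transmitting, so B's request is denied
a.finish()
assert b.try_transmit() is True      # A is done, so B may now transmit
```

The denial-while-busy rule is the core of the claims: a reader never needs global knowledge, only answers from the neighbors it was determined to interfere with.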
10,171
10,171
13,703,057
2,647
A method for selecting one or more resources for use from among a set of resources comprises obtaining (201) a first sub-set of the resources. The resources belonging to the first sub-set are the ones which have, according to occupancy information gathered over a long period of time, highest probabilities of matching requirements related to estimated usage time and/or needed capacity. The method further comprises selecting (202) a second sub-set from among the first sub-set of the resources on the basis of second occupancy information gathered over a short period of time. The resources belonging to the second sub-set are the ones from among the first sub-set which, according to the second information, have highest probabilities of matching requirements related to estimated usage time and/or needed capacity. The subsequent use of the long and short term occupancy information increases the probability of optimal selection. The resources can be, for example, radio channels from which one radio channel is to be selected.
1. A device for selecting one or more resources for use from among a set of resources, the device comprising: a processing circuitry arranged to obtain a first sub-set of the set of resources, the resources belonging to the first sub-set being the ones from among the set of resources which have, according to first information gathered over a first period of time about occupancies of resources belonging to the set of resources, highest probabilities of matching requirements related to estimated usage time and/or needed capacity, wherein the processing circuitry is further arranged to select a second sub-set from among the first sub-set of resources on the basis of second information gathered over a second period of time about the occupancies of the resources belonging to the first sub-set of resources, the second period of time being shorter than the first period of time and the resources belonging to the second sub-set of resources being the ones from among the first sub-set of resources which, according to the second information, have highest probabilities of matching the requirements related to the estimated usage time and/or the needed capacity. 2. The device according to claim 1, wherein the device is a radio communication device and the processing circuitry is arranged to select one or more radio channels for data transmission from among a set of radio channels, the set of radio channels representing the set of resources and the selected one or more radio channels representing the second sub-set of resources. 3. 
The device according to claim 2, wherein the processing circuitry is arranged to control the radio communication device to send a query to a server device to get a first sub-set of radio channels from the server device as a response to a need for the data transmission, the first sub-set of radio channels representing the first sub-set of resources, and to sense free radio channels from among the first sub-set, and the processing circuitry is arranged to select one radio channel for the data transmission from among the free radio channels on the basis of the second information. 4. The device according to claim 2, wherein the processing circuitry is arranged to control the radio communication device to send a query to a server device to get the first information from the server device as a response to a need for the data transmission, the processing circuitry is arranged to select a first sub-set of radio channels from among the set of radio channels on the basis of the first information, the first sub-set of radio channels representing the first sub-set of resources, the processing circuitry is arranged to control the radio communication device to sense free radio channels from among the first sub-set, and the processing circuitry is arranged to select one radio channel for the data transmission from among the free radio channels on the basis of the second information. 5. The device according to claim 3, wherein the second information is arranged to indicate a measured degree of occupancy of each radio channel belonging to the first sub-set of radio channels, and the processing circuitry is arranged to select the one having the lowest measured degree of occupancy from among the first sub-set of radio channels. 6. 
The device according to claim 3, wherein the second information is arranged to indicate statistical properties of use of each radio channel belonging to the first sub-set of radio channels, and the processing circuitry is arranged to select the one, from among the first sub-set of radio channels, which has the lowest probability of being used during a coming pre-determined time period. 7. The device according to claim 3, wherein the processing circuitry is arranged to update the second information with information obtained by sensing the free radio channels. 8. The device according to claim 3, wherein the processing circuitry is arranged to control the radio communication device to transmit data for a pre-determined time period after selection of the one radio channel for the data transmission, and the processing circuitry is arranged to re-perform the sensing of the free radio channels and the selection of the one radio channel for the data transmission after the pre-determined time period as a response to a situation in which there is still the need for the data transmission. 9. 
A method for selecting one or more resources for use from among a set of resources, the method comprising: obtaining a first sub-set of the set of resources, the resources belonging to the first sub-set being the ones from among the set of resources which have, according to first information gathered over a first period of time about occupancies of resources belonging to the set of resources, highest probabilities of matching requirements related to estimated usage time and/or needed capacity, wherein the method further comprises selecting a second sub-set from among the first sub-set of resources on the basis of second information gathered over a second period of time about the occupancies of the resources belonging to the first sub-set of resources, the second period of time being shorter than the first period of time and the resources belonging to the second sub-set of resources being the ones from among the first sub-set of resources which, according to the second information, have highest probabilities of matching the requirements related to the estimated usage time and/or the needed capacity. 10. The method according to claim 9, wherein the selecting the second sub-set of resources is selecting one or more radio channels for data transmission from among a set of radio channels, the set of radio channels representing the set of resources and the selected one or more radio channels representing the second sub-set of resources. 11. 
The method according to claim 10, wherein the method comprises: controlling a radio communication device to send a query to a server device to get a first sub-set of radio channels from the server device as a response to a need for the data transmission, the first sub-set of radio channels representing the first sub-set of resources, controlling the radio communication device to sense free radio channels from among the first sub-set, and selecting one radio channel for the data transmission from among the free radio channels on the basis of the second information. 12. The method according to claim 10, wherein the method comprises: controlling a radio communication device to send a query to a server device to get the first information from the server device as a response to a need for the data transmission, selecting a first sub-set of radio channels from among the set of radio channels on the basis of the first information, the first sub-set of radio channels representing the first sub-set of resources, controlling the radio communication device to sense free radio channels from among the first sub-set, and selecting one radio channel for the data transmission from among the free radio channels on the basis of the second information. 13. The method according to claim 11, wherein the second information indicates a measured degree of occupancy of each radio channel belonging to the first sub-set of radio channels, and the one having the lowest measured degree of occupancy is selected from among the first sub-set of radio channels for the data transmission. 14. The method according to claim 11, wherein the second information indicates statistical properties of use of each radio channel belonging to the first sub-set of radio channels, and the one which has the lowest probability of being used during a coming pre-determined time period is selected from among the first sub-set of radio channels for the data transmission. 15. 
The method according to claim 11, wherein the second information is updated with information obtained by sensing the free radio channels. 16. The method according to claim 11, wherein the method comprises: controlling the radio communication device to transmit data for a pre-determined time period after the selection of the one radio channel for the data transmission, and re-performing the sensing of the free radio channels and the selection of the one radio channel for the data transmission after the pre-determined time period as a response to a situation in which there is still the need for the data transmission. 17. A non-volatile computer readable medium encoded with a computer program for selecting one or more resources for use from among a set of resources, the computer program comprising computer executable instructions for controlling a programmable processor to obtain a first sub-set of the set of resources, the resources belonging to the first sub-set being the ones from among the set of resources which have, according to first information gathered over a first period of time about occupancies of resources belonging to the set of resources, highest probabilities of matching requirements related to estimated usage time and/or needed capacity, wherein the computer program comprises computer executable instructions for controlling the programmable processor to select a second sub-set from among the first sub-set of resources on the basis of second information gathered over a second period of time about the occupancies of the resources belonging to the first sub-set of resources, the second period of time being shorter than the first period of time and the resources belonging to the second sub-set of resources being the ones from among the first sub-set of resources which, according to the second information, have highest probabilities of matching the requirements related to the estimated usage time and/or the needed capacity. 18. 
The non-volatile computer readable medium according to claim 17, wherein the computer executable instructions for controlling the programmable processor to select the second sub-set are computer executable instructions for controlling the programmable processor to select one or more radio channels for data transmission from among a set of radio channels, the set of radio channels representing the set of resources and the selected one or more radio channels representing the second sub-set of resources. 19. (canceled) 20. A system comprising: a first database storing first information gathered over a first period of time about occupancies of resources belonging to a set of resources, and one or more processing circuitries, wherein one of the one or more processing circuitries is arranged to select a first sub-set of the set of resources on the basis of the first information, the resources belonging to the first sub-set being the ones from among the set of resources which have, according to the first information, highest probabilities of matching requirements related to estimated usage time and/or needed capacity, wherein the system further comprises a second database storing second information gathered over a second period of time about the occupancies of the resources belonging to the first sub-set of resources, the second period of time being shorter than the first period of time, and one of the one or more processing circuitries is arranged to select a second sub-set from among the first sub-set of resources on the basis of the second information, the resources belonging to the second sub-set of resources being the ones from among the first sub-set of resources which, according to the second information, have highest probabilities of matching the requirements related to the estimated usage time and/or the needed capacity.
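The two-stage selection these claims describe, where a first sub-set is chosen from occupancy statistics gathered over a long period and a single channel is then chosen from that sub-set using short-term measurements, can be sketched as follows. The function name, the dict-based occupancy representation, and the sample figures are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch of two-stage channel selection using long- and short-term occupancy data.
def select_channel(long_term_occupancy, short_term_occupancy, subset_size=3):
    """Each argument maps channel -> occupancy in [0, 1]; lower occupancy means a
    higher probability of matching the usage-time/capacity requirements."""
    # Stage 1: first sub-set = channels with the lowest long-term occupancy.
    first_subset = sorted(long_term_occupancy, key=long_term_occupancy.get)[:subset_size]
    # Stage 2: from the first sub-set, pick the channel with the lowest short-term occupancy.
    return min(first_subset, key=lambda ch: short_term_occupancy[ch])

long_term = {"ch1": 0.9, "ch2": 0.2, "ch3": 0.4, "ch4": 0.1, "ch5": 0.8}
short_term = {"ch1": 0.1, "ch2": 0.6, "ch3": 0.05, "ch4": 0.7, "ch5": 0.2}

# Long-term data narrows the set to ch4, ch2, ch3; short-term data then picks ch3.
assert select_channel(long_term, short_term) == "ch3"
```

Note that ch1 has the lowest short-term occupancy overall but is never considered, because the long-term statistics already excluded it from the first sub-set; this two-stage filtering is what the claims say increases the probability of an optimal selection.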
A method for selecting one or more resources for use from among a set of resources comprises obtaining (201) a first sub-set of the resources. The resources belonging to the first sub-set are the ones which have, according to occupancy information gathered over a long period of time, highest probabilities of matching requirements related to estimated usage time and/or needed capacity. The method further comprises selecting (202) a second sub-set from among the first sub-set of the resources on the basis of second occupancy information gathered over a short period of time. The resources belonging to the second sub-set are the ones from among the first sub-set which, according to the second information, have highest probabilities of matching requirements related to estimated usage time and/or needed capacity. The subsequent use of the long and short term occupancy information increases the probability of optimal selection. The resources can be, for example, radio channels from which one radio channel is to be selected. 1. 
A device for selecting one or more resources for use from among a set of resources, the device comprising: a processing circuitry arranged to obtain a first sub-set of the set of resources, the resources belonging to the first sub-set being the ones from among the set of resources which have, according to first information gathered over a first period of time about occupancies of resources belonging to the set of resources, highest probabilities of matching requirements related to estimated usage time and/or needed capacity, wherein the processing circuitry is further arranged to select a second sub-set from among the first sub-set of resources on the basis of second information gathered over a second period of time about the occupancies of the resources belonging to the first sub-set of resources, the second period of time being shorter than the first period of time and the resources belonging to the second sub-set of resources being the ones from among the first sub-set of resources which, according to the second information, have highest probabilities of matching the requirements related to the estimated usage time and/or the needed capacity. 2. The device according to claim 1, wherein the device is a radio communication device and the processing circuitry is arranged to select one or more radio channels for data transmission from among a set of radio channels, the set of radio channels representing the set of resources and the selected one or more radio channels representing the second sub-set of resources. 3. 
The device according to claim 2, wherein the processing circuitry is arranged to control the radio communication device to send a query to a server device to get a first sub-set of radio channels from the server device as a response to a need for the data transmission, the first sub-set of radio channels representing the first sub-set of resources, and to sense free radio channels from among the first sub-set, and the processing circuitry is arranged to select one radio channel for the data transmission from among the free radio channels on the basis of the second information. 4. The device according to claim 2, wherein the processing circuitry is arranged to control the radio communication device to send a query to a server device to get the first information from the server device as a response to a need for the data transmission, the processing circuitry is arranged to select a first sub-set of radio channels from among the set of radio channels on the basis of the first information, the first sub-set of radio channels representing the first sub-set of resources, the processing circuitry is arranged to control the radio communication device to sense free radio channels from among the first sub-set, and the processing circuitry is arranged to select one radio channel for the data transmission from among the free radio channels on the basis of the second information. 5. The device according to claim 3, wherein the second information is arranged to indicate a measured degree of occupancy of each radio channel belonging to the first sub-set of radio channels, and the processing circuitry is arranged to select the one having the lowest measured degree of occupancy from among the first sub-set of radio channels. 6. 
The device according to claim 3, wherein the second information is arranged to indicate statistical properties of use of each radio channel belonging to the first sub-set of radio channels, and the processing circuitry is arranged to select the one, from among the first sub-set of radio channels, which has the lowest probability of being used during a coming pre-determined time period. 7. The device according to claim 3, wherein the processing circuitry is arranged to update the second information with information obtained by the sensing the free radio channels. 8. The device according to claim 3, wherein the processing circuitry is arranged to control the radio communication device to transmit data for a pre-determined time period after selection of the one radio channel for the data transmission, and the processing circuitry is arranged to re-perform the sensing of the free radio channels and the selection of the one radio channel for the data transmission after the pre-determined time period as a response to a situation in which there is still the need for the data transmission. 9. 
A method for selecting one or more resources for use from among a set of resources, the method comprising: obtaining a first sub-set of the set of resources, the resources belonging to the first sub-set being the ones from among the set of resources which have, according to first information gathered over a first period of time about occupancies of resources belonging to the set of resources, highest probabilities of matching requirements related to estimated usage time and/or needed capacity, wherein the method further comprises selecting a second sub-set from among the first sub-set of resources on the basis of second information gathered over a second period of time about the occupancies of the resources belonging to the first sub-set of resources, the second period of time being shorter than the first period of time and the resources belonging to the second sub-set of resources being the ones from among the first sub-set of resources which, according to the second information, have highest probabilities of matching the requirements related to the estimated usage time and/or the needed capacity. 10. The method according to claim 9, wherein the selecting the second sub-set of resources is selecting one or more radio channels for data transmission from among a set of radio channels, the set of radio channels representing the set of resources and the selected one or more radio channels representing the second sub-set of resources. 11. 
The method according to claim 10, wherein the method comprises: controlling a radio communication device to send a query to a server device to get a first sub-set of radio channels from the server device as a response to a need for the data transmission, the first sub-set of radio channels representing the first sub-set of resources, controlling the radio communication device to sense free radio channels from among the first sub-set, and selecting one radio channel for the data transmission from among the free radio channels on the basis of the second information. 12. The method according to claim 10, wherein the method comprises: controlling a radio communication device to send a query to a server device to get the first information from the server device as a response to a need for the data transmission, selecting a first sub-set of radio channels from among the set of radio channels on the basis of the first information, the first sub-set of radio channels representing the first sub-set of resources, controlling the radio communication device to sense free radio channels from among the first sub-set, and selecting one radio channel for the data transmission from among the free radio channels on the basis of the second information. 13. The method according to claim 11, wherein the second information indicates a measured degree of occupancy of each radio channel belonging to the first sub-set of radio channels, and the one having the lowest measured degree of occupancy is selected from among the first sub-set of radio channels for the data transmission. 14. The method according to claim 11, wherein the second information indicates statistical properties of use of each radio channel belonging to the first sub-set of radio channels, and the one which has the lowest probability of being used during a coming pre-determined time period is selected from among the first sub-set of radio channels for the data transmission. 15. 
The method according to claim 11, wherein the second information is updated with information obtained by the sensing the free radio channels. 16. The method according to claim 11, wherein the method comprises: controlling the radio communication device to transmit data for a pre-determined time period after the selection of the one radio channel for the data transmission, and re-performing the sensing of the free radio channels and the selection of the one radio channel for the data transmission after the pre-determined time period as a response to a situation in which there is still the need for the data transmission. 17. A non-volatile computer readable medium encoded with a computer program for selecting one or more resources for use from among a set of resources, the computer program comprising computer executable instructions for controlling a programmable processor to obtain a first sub-set of the set of resources, the resources belonging to the first sub-set being the ones from among the set of resources which have, according to first information gathered over a first period of time about occupancies of resources belonging to the set of resources, highest probabilities of matching requirements related to estimated usage time and/or needed capacity, wherein the computer program comprises computer executable instructions for controlling the programmable processor to select a second sub-set from among the first sub-set of resources on the basis of second information gathered over a second period of time about the occupancies of the resources belonging to the first sub-set of resources, the second period of time being shorter than the first period of time and the resources belonging to the second sub-set of resources being the ones from among the first sub-set of resources which, according to the second information, have highest probabilities of matching the requirements related to the estimated usage time and/or the needed capacity. 18. 
The non-volatile computer readable medium according to claim 17, wherein the computer executable instructions for controlling the programmable processor to select the second sub-set are computer executable instructions for controlling the programmable processor to select one or more radio channels for data transmission from among a set of radio channels, the set of radio channels representing the set of resources and the selected one or more radio channels representing the second sub-set of resources. 19. (canceled) 20. A system comprising: a first database storing first information gathered over a first period of time about occupancies of resources belonging to a set of resources, and one or more processing circuitries, wherein one of the one or more processing circuitries is arranged to select a first sub-set of the set of resources on the basis of the first information, the resources belonging to the first sub-set being the ones from among the set of resources which have, according to the first information, highest probabilities of matching requirements related to estimated usage time and/or needed capacity, wherein the system further comprises a second database storing second information gathered over a second period of time about the occupancies of the resources belonging to the first sub-set of resources, the second period of time being shorter than the first period of time, and one of the one or more processing circuitries is arranged to select a second sub-set from among the first sub-set of resources on the basis of the second information, the resources belonging to the second sub-set of resources being the ones from among the first sub-set of resources which, according to the second information, have highest probabilities of matching the requirements related to the estimated usage time and/or the needed capacity.
2,600
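The two-stage selection these claims describe — prune the channel set with long-term occupancy statistics, then pick from the survivors using occupancy measured over a shorter, recent period — can be sketched in a few lines. A minimal sketch with illustrative occupancy figures and helper names of my own; the claims do not prescribe a concrete algorithm:

```python
# Hypothetical two-tier channel selection (claims 1 and 5): channels with
# lower occupancy are treated as more likely to match the requirements.

def select_first_subset(long_term_occupancy, k):
    """First tier: keep the k channels with the lowest occupancy over the
    long first period of time (the first sub-set of resources)."""
    ranked = sorted(long_term_occupancy, key=long_term_occupancy.get)
    return ranked[:k]

def select_channel(first_subset, short_term_occupancy):
    """Second tier: from the pre-filtered channels, pick the one whose
    recently measured degree of occupancy is lowest (claim 5)."""
    return min(first_subset, key=lambda ch: short_term_occupancy[ch])

long_term = {"ch1": 0.9, "ch2": 0.2, "ch3": 0.4, "ch4": 0.7}  # first period
short_term = {"ch2": 0.5, "ch3": 0.1}                          # second, shorter period

first = select_first_subset(long_term, k=2)   # -> ["ch2", "ch3"]
chosen = select_channel(first, short_term)    # -> "ch3"
```

Note how the two tiers can disagree: `ch2` looks best over the long period, but the recent measurement hands the transmission to `ch3`.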
10,172
10,172
14,935,919
2,672
An electronic device having a display includes a base connected to a part of a main body of the electronic device, the display configured to receive a touch input and integrally formed with at least one support, and at least one vibrator arranged on the display to vibrate the display in accordance with the touch input, wherein the display is apart from the base in a portion excluding the support.
1. An electronic apparatus comprising: a main body; and a user interface including: a base connectable to the main body of the electronic apparatus; a display configured to receive a touch input; a vibrator provided on the display and configured to induce vibrations on the display based on the received touch input; and a support connecting the base to the display so that the display is structurally isolated from the base except for being connected by the support, to thereby reduce a transfer of the vibrations on the display to the base. 2. The electronic apparatus as claimed in claim 1, wherein the display is elastically supported on the base through the support. 3. The electronic apparatus as claimed in claim 1, wherein the support supports the display so that the display vibrates in a direction orthogonal to the base. 4. The electronic apparatus as claimed in claim 3, wherein a rear surface of the display is arranged to be apart from a front surface of the base. 5. The electronic apparatus as claimed in claim 4, wherein the display is connected to the base through a fastener to maintain a gap distance from the base. 6. The electronic apparatus as claimed in claim 1, wherein the support supports the display so that the display vibrates in a direction parallel to the base. 7. The electronic apparatus as claimed in claim 6, wherein the support is symmetrically formed on opposite sides of the display. 8. The electronic apparatus as claimed in claim 7, further comprising at least one guide member having one side fixed to the base and configured to guide the display in the parallel direction, wherein a slot, into which a part of the display is slidably inserted, is formed on the guide member. 9. The electronic apparatus as claimed in claim 6, wherein a part of the support is separably coupled to the base. 10. 
The electronic apparatus as claimed in claim 9, wherein the support comprises: a support projection extending from the display; and a coupling projection formed to extend from a front end of the support projection and slidably coupled to a coupling hole formed on the base. 11. The electronic apparatus as claimed in claim 9, wherein the support comprises: a coupling projection extending from the display; an arm formed at a front end of the coupling projection; and a fixing projection formed at a front end of the arm and coupled to a fixing hole formed on the base. 12. The electronic apparatus as claimed in claim 6, wherein the base has an edge portion that surrounds a side portion of the display, and the side portion of the display is arranged to be apart from the edge portion of the base. 13. The electronic apparatus as claimed in claim 1, wherein a frequency of the vibrator coincides with a natural frequency of the support. 14. The electronic apparatus as claimed in claim 1, wherein the display comprises: a touch screen configured to receive the touch input; and a frame configured to support a rear surface of the touch screen, wherein the support is formed to project from the frame. 15. The electronic apparatus as claimed in claim 14, wherein the frame and the support are integrally injection-molded. 16. The electronic apparatus as claimed in claim 1, further comprising a buffering member arranged between the base and the display. 17. The electronic apparatus as claimed in claim 1, wherein the electronic apparatus is at least one of an image forming apparatus, medical equipment, and industrial equipment. 18. 
An image forming apparatus comprising: a main body; and a user interface including: a base connectable to the main body of the image forming apparatus; a display configured to receive a touch input; a vibrator fixed to the display and configured to induce vibrations on the display based on the received touch input; and a plurality of supports separably connecting the base to the display so that the display is structurally isolated from the base except for being connected by the plurality of supports, to thereby reduce a transfer of the vibrations on the display to the base. 19. The image forming apparatus as claimed in claim 18, wherein the display is elastically supported on the base through the plurality of supports. 20. The image forming apparatus as claimed in claim 19, wherein the plurality of supports are made of an elastic material.
An electronic device having a display includes a base connected to a part of a main body of the electronic device, the display configured to receive a touch input and integrally formed with at least one support, and at least one vibrator arranged on the display to vibrate the display in accordance with the touch input, wherein the display is apart from the base in a portion excluding the support.1. An electronic apparatus comprising: a main body; and a user interface including: a base connectable to the main body of the electronic apparatus; a display configured to receive a touch input; a vibrator provided on the display and configured to induce vibrations on the display based on the received touch input; and a support connecting the base to the display so that the display is structurally isolated from the base except for being connected by the support, to thereby reduce a transfer of the vibrations on the display to the base. 2. The electronic apparatus as claimed in claim 1, wherein the display is elastically supported on the base through the support. 3. The electronic apparatus as claimed in claim 1, wherein the support supports the display so that the display vibrates in a direction orthogonal to the base. 4. The electronic apparatus as claimed in claim 3, wherein a rear surface of the display is arranged to be apart from a front surface of the base. 5. The electronic apparatus as claimed in claim 4, wherein the display is connected to the base through a fastener to maintain a gap distance from the base. 6. The electronic apparatus as claimed in claim 1, wherein the support supports the display so that the display vibrates in a direction parallel to the base. 7. The electronic apparatus as claimed in claim 6, wherein the support is symmetrically formed on opposite sides of the display. 8. 
The electronic apparatus as claimed in claim 7, further comprising at least one guide member having one side fixed to the base and configured to guide the display in the parallel direction, wherein a slot, into which a part of the display is slidably inserted, is formed on the guide member. 9. The electronic apparatus as claimed in claim 6, wherein a part of the support is separably coupled to the base. 10. The electronic apparatus as claimed in claim 9, wherein the support comprises: a support projection extending from the display; and a coupling projection formed to extend from a front end of the support projection and slidably coupled to a coupling hole formed on the base. 11. The electronic apparatus as claimed in claim 9, wherein the support comprises: a coupling projection extending from the display; an arm formed at a front end of the coupling projection; and a fixing projection formed at a front end of the arm and coupled to a fixing hole formed on the base. 12. The electronic apparatus as claimed in claim 6, wherein the base has an edge portion that surrounds a side portion of the display, and the side portion of the display is arranged to be apart from the edge portion of the base. 13. The electronic apparatus as claimed in claim 1, wherein a frequency of the vibrator coincides with a natural frequency of the support. 14. The electronic apparatus as claimed in claim 1, wherein the display comprises: a touch screen configured to receive the touch input; and a frame configured to support a rear surface of the touch screen, wherein the support is formed to project from the frame. 15. The electronic apparatus as claimed in claim 14, wherein the frame and the support are integrally injection-molded. 16. The electronic apparatus as claimed in claim 1, further comprising a buffering member arranged between the base and the display. 17. 
The electronic apparatus as claimed in claim 1, wherein the electronic apparatus is at least one of an image forming apparatus, medical equipment, and industrial equipment. 18. An image forming apparatus comprising: a main body; and a user interface including: a base connectable to the main body of the image forming apparatus; a display configured to receive a touch input; a vibrator fixed to the display and configured to induce vibrations on the display based on the received touch input; and a plurality of supports separably connecting the base to the display so that the display is structurally isolated from the base except for being connected by the plurality of supports, to thereby reduce a transfer of the vibrations on the display to the base. 19. The image forming apparatus as claimed in claim 18, wherein the display is elastically supported on the base through the plurality of supports. 20. The image forming apparatus as claimed in claim 19, wherein the plurality of supports are made of an elastic material.
2,600
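Claim 13 above ties the vibrator's drive frequency to the natural frequency of the support, which maximises display deflection for a given vibrator. A back-of-the-envelope check using a spring-mass idealisation; the stiffness and mass values are made-up assumptions, not from the patent:

```python
# f_n = (1 / (2*pi)) * sqrt(k / m) for the display suspended on its
# elastic supports, treated as a single spring-mass system.
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

k = 4.0e4   # N/m, combined stiffness of the supports (assumed)
m = 0.25    # kg, mass of the display module (assumed)
f_n = natural_frequency_hz(k, m)
print(f"tune the vibrator near {f_n:.0f} Hz")
```

Driving at resonance lets a small vibrator produce a perceptible haptic click, while the compliant supports keep that vibration from leaking into the base.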
10,173
10,173
15,842,057
2,643
Systems and methods for improved geolocation in a low power wide area network are disclosed. One example method may include receiving an instruction to determine a geolocation of an end node in a low power wide area network. An instruction may be transmitted to the end node for the end node to transmit a high-energy geolocation signal at a power of about 0.5 Watt to about 1 Watt. The end node may transmit the high-energy geolocation signal and a plurality of gateways of the low power wide area network may receive the high-energy geolocation signal. A plurality of receipt times may be identified. Each receipt time may be indicative of the time at which the high-energy geolocation signal was received by the respective gateway of the plurality of gateways. Based at least in part on the plurality of receipt times, a geolocation of the end node may be determined.
1. A method comprising: causing an end node in a low power wide area network to transmit a high-energy geolocation signal at a power between about 0.5 Watts and about 1 Watt; identifying a plurality of receipt times, each receipt time of the plurality of receipt times indicative of a time at which the high-energy geolocation signal was received by a respective gateway of a plurality of gateways of the low power wide area network; and based at least in part on the plurality of receipt times, determining a geolocation of the end node. 2. The method of claim 1, wherein the high-energy geolocation signal comprises a transmission time at which the end node transmitted the high-energy geolocation signal, and the determining the geolocation of the end node is further based on the transmission time. 3. The method of claim 1, wherein the causing the end node to transmit the high-energy geolocation signal comprises causing the end node to transmit the high-energy geolocation signal at a specified time. 4. The method of claim 1, wherein the causing the end node to transmit the high-energy geolocation signal comprises causing the end node to transmit a pre-notification communication indicating a time at which the end node will transmit the high-energy geolocation signal. 5. The method of claim 4, wherein the pre-notification communication further comprises a channel on which the high-energy geolocation signal will be transmitted by the end node. 6. 
A method comprising: causing transmission, to an end node of a low power wide area network, of one or more parameters of a geolocation signal to be transmitted by the end node; receiving, from each gateway of a plurality of gateways of the low power wide area network, data indicative of a time at which a respective gateway of the plurality of gateways received the geolocation signal transmitted, according to the one or more parameters, from the end node; and based at least in part on the data indicative of the times at which the respective gateways each received the geolocation signal from the end node, determining a geolocation of the end node. 7. The method of claim 6, wherein the one or more parameters comprise a power at which the geolocation signal is to be transmitted. 8. The method of claim 7, wherein the power is based on at least one of: an estimated distance between the end node and at least one of the plurality of gateways, and a presence, or lack thereof, of an obstruction between the end node and the at least one of the plurality of gateways. 9. The method of claim 6, wherein the one or more parameters comprise a first channel on which the end node is to transmit the geolocation signal, and wherein the first channel is based on interference on a different second channel. 10. The method of claim 6, wherein the one or more parameters comprise a time at which the end node is to transmit the geolocation signal, and wherein the time is based on a periodic interference in an environment in which the end node and plurality of gateways operate. 11. The method of claim 6, wherein the one or more parameters comprise an attribute of a waveform of the geolocation signal. 12. The method of claim 11, wherein the attribute of the waveform of the geolocation signal comprises two or more maxima and minima. 13. 
A method comprising: causing transmission, to an end node of a low power wide area network, of a set of instructions to determine a channel on which a geolocation signal is to be transmitted by the end node; causing the end node to transmit the geolocation signal on a channel determined by execution of the set of instructions by the end node; identifying a plurality of receipt times, each receipt time of the plurality of receipt times indicative of a time at which the geolocation signal was received, on the channel, by a respective gateway of a plurality of gateways of the low power wide area network; and based at least in part on the plurality of receipt times, determining a geolocation of the end node. 14. The method of claim 13, wherein the plurality of gateways comprise a gateway-channel cluster in which each gateway operates on the same channel at a given time, and wherein the channel on which each gateway in the gateway-channel cluster operates corresponds to the channel on which the geolocation signal will be transmitted by the end node. 15. The method of claim 13, further comprising: causing transmission, to the end node, of an instruction for the end node to transmit the geolocation signal at a pre-determined time, wherein the geolocation signal is transmitted by the end node at the pre-determined time. 16. The method of claim 13, wherein the geolocation signal is transmitted by the end node at a power of between about 0.5 Watts and about 1 Watt. 17. The method of claim 14, further comprising: causing transmission, to at least one gateway of the plurality of gateways, an instruction to tune a transceiver of the at least one gateway to the channel, wherein the geolocation signal is received by the at least one gateway via the respective transceiver tuned to the channel. 18. 
The method of claim 17, wherein the instruction to tune the transceiver of the at least one gateway to the channel further comprises a time to tune the transceiver of the at least one gateway to the channel, and wherein the transceiver of the at least one gateway is tuned to the channel at the time. 19. The method of claim 13, further comprising: causing the end node to transmit a pre-notification communication indicating a time at which the end node will transmit the geolocation signal. 20. The method of claim 19, wherein the pre-notification communication further comprises the channel.
Systems and methods for improved geolocation in a low power wide area network are disclosed. One example method may include receiving an instruction to determine a geolocation of an end node in a low power wide area network. An instruction may be transmitted to the end node for the end node to transmit a high-energy geolocation signal at a power of about 0.5 Watt to about 1 Watt. The end node may transmit the high-energy geolocation signal and a plurality of gateways of the low power wide area network may receive the high-energy geolocation signal. A plurality of receipt times may be identified. Each receipt time may be indicative of the time at which the high-energy geolocation signal was received by the respective gateway of the plurality of gateways. Based at least in part on the plurality of receipt times, a geolocation of the end node may be determined.1. A method comprising: causing an end node in a low power wide area network to transmit a high-energy geolocation signal at a power between about 0.5 Watts and about 1 Watt; identifying a plurality of receipt times, each receipt time of the plurality of receipt times indicative of a time at which the high-energy geolocation signal was received by a respective gateway of a plurality of gateways of the low power wide area network; and based at least in part on the plurality of receipt times, determining a geolocation of the end node. 2. The method of claim 1, wherein the high-energy geolocation signal comprises a transmission time at which the end node transmitted the high-energy geolocation signal, and the determining the geolocation of the end node is further based on the transmission time. 3. The method of claim 1, wherein the causing the end node to transmit the high-energy geolocation signal comprises causing the end node to transmit the high-energy geolocation signal at a specified time. 4. 
The method of claim 1, wherein the causing the end node to transmit the high-energy geolocation signal comprises causing the end node to transmit a pre-notification communication indicating a time at which the end node will transmit the high-energy geolocation signal. 5. The method of claim 4, wherein the pre-notification communication further comprises a channel on which the high-energy geolocation signal will be transmitted by the end node. 6. A method comprising: causing transmission, to an end node of a low power wide area network, of one or more parameters of a geolocation signal to be transmitted by the end node; receiving, from each gateway of a plurality of gateways of the low power wide area network, data indicative of a time at which a respective gateway of the plurality of gateways received the geolocation signal transmitted, according to the one or more parameters, from the end node; and based at least in part on the data indicative of the times at which the respective gateways each received the geolocation signal from the end node, determining a geolocation of the end node. 7. The method of claim 6, wherein the one or more parameters comprise a power at which the geolocation signal is to be transmitted. 8. The method of claim 7, wherein the power is based on at least one of: an estimated distance between the end node and at least one of the plurality of gateways, and a presence, or lack thereof, of an obstruction between the end node and the at least one of the plurality of gateways. 9. The method of claim 6, wherein the one or more parameters comprise a first channel on which the end node is to transmit the geolocation signal, and wherein the first channel is based on interference on a different second channel. 10. 
The method of claim 6, wherein the one or more parameters comprise a time at which the end node is to transmit the geolocation signal, and wherein the time is based on a periodic interference in an environment in which the end node and plurality of gateways operate. 11. The method of claim 6, wherein the one or more parameters comprise an attribute of a waveform of the geolocation signal. 12. The method of claim 11, wherein the attribute of the waveform of the geolocation signal comprises two or more maxima and minima. 13. A method comprising: causing transmission, to an end node of a low power wide area network, of a set of instructions to determine a channel on which a geolocation signal is to be transmitted by the end node; causing the end node to transmit the geolocation signal on a channel determined by execution of the set of instructions by the end node; identifying a plurality of receipt times, each receipt time of the plurality of receipt times indicative of a time at which the geolocation signal was received, on the channel, by a respective gateway of a plurality of gateways of the low power wide area network; and based at least in part on the plurality of receipt times, determining a geolocation of the end node. 14. The method of claim 13, wherein the plurality of gateways comprise a gateway-channel cluster in which each gateway operates on the same channel at a given time, and wherein the channel on which each gateway in the gateway-channel cluster operates corresponds to the channel on which the geolocation signal will be transmitted by the end node. 15. The method of claim 13, further comprising: causing transmission, to the end node, of an instruction for the end node to transmit the geolocation signal at a pre-determined time, wherein the geolocation signal is transmitted by the end node at the pre-determined time. 16. 
The method of claim 13, wherein the geolocation signal is transmitted by the end node at a power of between about 0.5 Watts and about 1 Watt. 17. The method of claim 14, further comprising: causing transmission, to at least one gateway of the plurality of gateways, an instruction to tune a transceiver of the at least one gateway to the channel, wherein the geolocation signal is received by the at least one gateway via the respective transceiver tuned to the channel. 18. The method of claim 17, wherein the instruction to tune the transceiver of the at least one gateway to the channel further comprises a time to tune the transceiver of the at least one gateway to the channel, and wherein the transceiver of the at least one gateway is tuned to the channel at the time. 19. The method of claim 13, further comprising: causing the end node to transmit a pre-notification communication indicating a time at which the end node will transmit the geolocation signal. 20. The method of claim 19, wherein the pre-notification communication further comprises the channel.
2,600
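The geolocation step these claims leave abstract — turning the gateways' receipt times into a position — is classic time-difference-of-arrival multilateration. A hedged sketch solved by a coarse grid search; the gateway coordinates, reference time, and grid extent are illustrative assumptions, and the claims do not prescribe a solver:

```python
# TDOA position estimate from per-gateway receipt times of one
# geolocation signal (claims 1 and 13). All coordinates in metres.

C = 299_792_458.0  # speed of light, m/s

def dist(x1, y1, x2, y2):
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def tdoa_residual(x, y, gateways, receipt_times):
    """Sum of squared pairwise TDOA errors for a candidate position,
    using the first gateway as the reference."""
    (rx, ry), rt = gateways[0], receipt_times[0]
    err = 0.0
    for (gx, gy), t in zip(gateways[1:], receipt_times[1:]):
        measured = t - rt
        predicted = (dist(x, y, gx, gy) - dist(x, y, rx, ry)) / C
        err += (measured - predicted) ** 2
    return err

def locate(gateways, receipt_times, span=1000, step=10):
    """Pick the grid point whose predicted TDOAs best match the receipt times."""
    return min(((x, y) for x in range(0, span, step)
                       for y in range(0, span, step)),
               key=lambda p: tdoa_residual(*p, gateways, receipt_times))

gateways = [(0, 0), (800, 0), (0, 800)]  # gateway positions (assumed)
true_pos = (300, 400)                    # end node, for this demo only
times = [1.0 + dist(*true_pos, gx, gy) / C for gx, gy in gateways]
print(locate(gateways, times))  # -> (300, 400)
```

Only receipt-time differences enter the residual, which is why claim 1 needs no synchronised transmission time; claim 2's variant, where the signal carries its transmission time, would allow absolute time-of-flight ranging instead.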
10,174
10,174
14,964,286
2,621
One embodiment provides a method, including: identifying, using a processor, a location of a user input device relative to an input surface; detecting, using a sensor, that the user input device has moved a predetermined distance from the input surface; receiving, using at least one other sensor, movement data of the user input device; and modifying, based on the movement data, the identified location of the user input device relative to the input surface. Other aspects are described and claimed.
1. A method, comprising: identifying, using a processor, a location of a user input device relative to an input surface; detecting, using a sensor, that the user input device has moved a predetermined distance from the input surface; receiving, using at least one other sensor, movement data of the user input device; and modifying, based on the movement data, the identified location of the user input device relative to the input surface. 2. The method of claim 1, wherein the input surface comprises a touch sensitive surface. 3. The method of claim 2, wherein the touch surface comprises at least one of: passive and active. 4. The method of claim 1, wherein the user input device comprises a stylus. 5. The method of claim 1, wherein the at least one other sensor is selected from the group consisting of: an accelerometer, a gravity sensor, a gyroscope, a rotational vector sensor, an orientation sensor, an infrared sensor, an optical sensor, and a magnetometer. 6. The method of claim 1, further comprising identifying, using the at least one other sensor, an orientation of the user input device. 7. The method of claim 6, wherein the receiving user input is disabled if the orientation of the user input device exceeds a predetermined threshold. 8. The method of claim 1, further comprising detecting, using the sensor, that the user input device has reentered the predetermined distance from the input surface. 9. The method of claim 8, further comprising responsive to the user input device reentering the predetermined distance, calibrating the at least one other sensor based on the location determined by the sensor. 10. The method of claim 1, wherein the user input device and the input surface communicate via a wireless communication protocol. 11. 
A system, comprising: an input surface; a processor operatively coupled to the input surface; and a memory device that stores instructions executable by a processor to: identify a location of a user input device relative to the input surface; detect that the user input device has moved a predetermined distance from the input surface; receive movement data of the user input device; and modify, based on the movement data, the identified location of the user input device relative to the input surface. 12. The system of claim 11, wherein the input surface comprises a touch sensitive surface; and wherein the touch surface comprises at least one of: passive and active. 13. The system of claim 11, wherein the user input device comprises a stylus. 14. The system of claim 11, wherein the at least one other sensor is selected from the group consisting of: an accelerometer, a gravity sensor, a gyroscope, a rotational vector sensor, an orientation sensor, an infrared sensor, an optical sensor, and a magnetometer. 15. The system of claim 11, further comprising identifying, using the at least one other sensor, an orientation of the user input device. 16. The system of claim 15, wherein the receiving user input is disabled if the orientation of the user input device exceeds a predetermined threshold. 17. The system of claim 11, further comprising detecting, using the sensor, that the user input device has reentered the predetermined distance from the input surface. 18. The system of claim 17, further comprising responsive to the user input device reentering the predetermined distance, calibrating the at least one other sensor based on the location determined by the sensor. 19. The system of claim 11, wherein the user input device and the input surface communicate via a wireless communication protocol. 20. 
A product, comprising: a storage device having code stored therewith, the code being executable by a processor and comprising: code that identifies a location of a user input device relative to an input surface; code that detects, using a sensor, that the user input device has moved a predetermined distance from the input surface; code that receives movement data of the user input device; and code that modifies, based on the movement data, the identified location of the user input device relative to the input surface.
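The tracking method claimed above (surface-based location, dead reckoning from a second sensor once the device leaves a predetermined distance, recalibration on reentry) can be sketched as follows. This is an illustrative interpretation only: the class, callback names, and the hover-range threshold are assumptions, not anything recited in the patent.

```python
# Hypothetical sketch of the claimed stylus-tracking method: the input
# surface reports position while the stylus is within hover range; once it
# moves beyond a threshold distance, IMU movement deltas are integrated to
# keep estimating its location; the estimate is recalibrated when the
# stylus reenters hover range. Names and thresholds are illustrative only.

HOVER_RANGE_MM = 10.0  # assumed hover-detection limit of the surface


class StylusTracker:
    def __init__(self):
        self.location = (0.0, 0.0)  # last known (x, y) on the surface
        self.in_hover = True

    def on_surface_reading(self, x, y, height_mm):
        """Called when the surface's own sensor sees the stylus."""
        if height_mm <= HOVER_RANGE_MM:
            # Reentry: recalibrate the estimate to the surface's reading.
            self.location = (x, y)
            self.in_hover = True
        else:
            self.in_hover = False

    def on_movement_data(self, dx, dy):
        """Called with movement deltas while out of hover range."""
        if not self.in_hover:
            x, y = self.location
            self.location = (x + dx, y + dy)


tracker = StylusTracker()
tracker.on_surface_reading(5.0, 5.0, 2.0)   # within hover range: calibrate
tracker.on_surface_reading(5.0, 5.0, 15.0)  # lifted beyond the threshold
tracker.on_movement_data(1.5, -0.5)         # dead-reckon from sensor data
print(tracker.location)                     # → (6.5, 4.5)
```

The claim's calibration step (claim 9) corresponds here to overwriting the integrated estimate with the surface reading on reentry, which bounds the drift accumulated while dead reckoning.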
2,600
10,175
10,175
15,231,137
2,612
A user may create an avatar and/or animated sequence illustrating a particular object or living being performing a certain activity, using images of portions of the object or living being extracted from a still image or set of still images of the object or living being.
1. A method for producing an image or video representing an end-user in a contemporaneous context selected based on a mood of the end-user, the method comprising: monitoring one or more sources of news and social traffic to determine currently trending topics; gathering information about certain topics of the currently trending topics, the information comprising one or more images and any textual content attached to or associated with the one or more images; classifying the gathered information, to determine one or more characteristics; presenting to an end-user, one or more questions related to the certain topics of the currently trending topics; estimating a mood or an emotion of the end-user, using responses of the end-user to the one or more questions; retrieving one or more images related to particular topics found to be currently trending topics, according to the estimated mood or emotion of the end-user and the particular topics; and creating one or more combined images comprising an avatar representing the end-user and one or more background images selected from the retrieved one or more images according to the one or more characteristics, the particular topics, and the estimated mood or emotion of the end-user, for display. 2-21. (canceled)
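The pipeline in the claim above (estimate mood from question responses, then select background images matching both trending topic and mood) can be sketched as below. The scoring rule, tags, and image names are all hypothetical placeholders, not the patent's actual classification scheme.

```python
# Illustrative sketch of the claimed pipeline: estimate the end-user's mood
# from question responses, then pick background images for trending topics
# that match both topic and estimated mood. Data and scoring are invented.

def estimate_mood(responses):
    """Crude mood estimate: average of numeric answers, mapped to a label."""
    score = sum(responses) / len(responses)
    return "happy" if score >= 0.5 else "sad"


def select_backgrounds(image_index, trending_topics, mood):
    """Pick images tagged with a trending topic and the estimated mood."""
    return [img for img, tags in image_index.items()
            if tags["topic"] in trending_topics and tags["mood"] == mood]


image_index = {
    "beach.jpg":  {"topic": "summer",   "mood": "happy"},
    "rain.jpg":   {"topic": "weather",  "mood": "sad"},
    "parade.jpg": {"topic": "festival", "mood": "happy"},
}
trending = {"summer", "festival"}

mood = estimate_mood([0.8, 0.6, 0.9])  # responses to the presented questions
backgrounds = select_backgrounds(image_index, trending, mood)
print(mood, sorted(backgrounds))       # → happy ['beach.jpg', 'parade.jpg']
```

The final claimed step, compositing the avatar over the selected backgrounds, would then operate on `backgrounds`; it is omitted here since it is an image-processing concern rather than a selection one.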
2,600
10,176
10,176
14,760,501
2,692
In accordance with an example embodiment of the present invention, an apparatus is disclosed with: a touch surface having one or more key tops configured to identify one or more keys to a user; a key sensing circuitry configured to detect a key press of any one or more of the key tops; a network of electromagnetic touch detectors configured to continually detect touching of the touch surface; and an elastic layer between the key sensing circuitry and the key tops configured to relay pressing forces from the key tops to the key sensing circuitry.
1. An apparatus, comprising: a touch surface comprising one or more key tops configured to identify one or more keys to a user; a key sensing circuitry configured to detect a key press of any one or more of the key tops; a network of electromagnetic touch detectors configured to continually detect touching of the touch surface; and an elastic layer between the key sensing circuitry and the key tops configured to relay pressing forces from the key tops to the key sensing circuitry. 2. The apparatus of claim 1, wherein: the elastic layer comprises rubber. 3. The apparatus of claim 1, wherein: the elastic layer comprises thermoplastic polyurethane. 4. The apparatus of claim 1, wherein: the network of electromagnetic detectors is formed by selective activation plating. 5. The apparatus of claim 1, wherein: the network of electromagnetic detectors is formed by super energy beam induced deposition. 6. The apparatus of claim 1, wherein: the elastic layer comprises two opposite sides that are: a first side facing towards the key sensing circuitry; and a second side opposite to the first side; the network of electromagnetic detectors is integrated to the second side of the elastic layer. 7. The apparatus of claim 1, further comprising a touch sensing layer comprising the network of electromagnetic detectors. 8. The apparatus of claim 7, wherein the touch sensing layer is plastic film. 9. The apparatus of claim 7, further comprising an exterior layer having a first side configured to form the touch surface; wherein: the touch sensing layer is resiliently biased with the elastic layer against the exterior layer such that, on pressing one of the one or more key tops, movement of the touch sensing layer is greater with respect to the pressed key top. 10. 
The apparatus of claim 1, further comprising an exterior layer having a first side configured to form the touch surface; wherein the exterior layer further comprises a second side opposite to the first side; and the network of electromagnetic touch detectors is formed on the second side of the exterior layer. 11. The apparatus of claim 1, wherein: the touch surface has a first region and a second region non-overlapping the first region; and the key tops are solely comprised by the first region. 12. The apparatus of claim 1, wherein: the electromagnetic touch detectors are capacitive touch sensors. 13. A device comprising: a display; and an apparatus comprising: a touch surface comprising one or more key tops configured to identify one or more keys to a user; a key sensing circuitry configured to detect a key press of any one or more of the key tops; a network of electromagnetic touch detectors configured to continually detect touching of the touch surface; and an elastic layer between the key sensing circuitry and the key tops configured to relay pressing forces from the key tops to the key sensing circuitry. 14. The device of claim 13, wherein: the device is a mobile telephone. 15. The device of claim 13, wherein: the device is a laptop computer and the apparatus is configured to form a touch pad. 16. A method comprising: forming a touch surface comprising one or more key tops configured to identify one or more keys to a user; forming a key sensing circuitry configured to detect a key press of any one or more of the key tops; forming a network of electromagnetic touch detectors configured to continually detect touching of the touch surface; and forming an elastic layer between the key sensing circuitry and the key tops configured to relay pressing forces from the key tops to the key sensing circuitry. 17. The method of claim 16, wherein the network of electromagnetic detectors is formed by selective activation plating. 18. 
The method of claim 16, wherein the network of electromagnetic detectors is formed by super energy beam induced deposition. 19. The method of claim 16, wherein the network of electromagnetic detectors is formed onto a rear surface of an exterior layer that forms the key tops. 20. The method of claim 16, wherein the network of electromagnetic detectors is formed onto the elastic layer.
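The apparatus above has two independent sensing paths: the electromagnetic touch network continually detects touching, while the key sensing circuitry detects presses relayed through the elastic layer. A firmware layer combining the two might look like the following sketch; the function and event names are hypothetical, not from the patent.

```python
# Illustrative merge of the two sensing paths in the claimed apparatus:
# the touch-detector network gives continuous touch state, the key sensing
# circuitry gives discrete press state. Names are invented for this sketch.

def classify_input(touch_active, key_pressed):
    """Merge touch-network and key-circuitry readings into one event."""
    if key_pressed:
        return "key-press"  # pressing force reached the sensing circuitry
    if touch_active:
        return "touch"      # finger resting on the touch surface, no press
    return "idle"


print(classify_input(touch_active=True, key_pressed=False))  # → touch
print(classify_input(touch_active=True, key_pressed=True))   # → key-press
```

Distinguishing "touch" from "key-press" is what lets such a device support both touchpad-style gestures and conventional typing on the same surface.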
2,600
10,177
10,177
14,651,260
2,616
At least one aspect of the present disclosure describes a system for facilitating generation of animated content. The system includes a rule management module and a content generation module. The rule management module is configured to receive a plurality of content generation rules. The content generation module is communicatively coupled to the rule management module and configured to generate an animated content configuration in accordance with the plurality of content generation rules. The animated content configuration specifies at least two content elements to be rendered on a display and their relationships to one another, and a transition, which modifies an attribute of a content element over a period of time.
1. A computer-implemented system for facilitating automatic generation of animated content to be rendered on an electronically addressable display, the system comprising: a rule management module configured to receive a plurality of rules on content generation, the plurality of rules comprising a rule on probability factor, the rule on probability factor specifying the probability of content configurations including a configuration element that has a particular attribute value; and a content generation module communicatively coupled to the rule management module and configured to generate an animated content configuration in accordance with the rule on probability factor, the animated content configuration specifying at least two content elements to be rendered on a display and their relationships to one another on the display, and a transition, which modifies a displayed attribute of one of the two content elements over a period of time. 2. The computer-implemented system of claim 1, wherein the configuration element comprises at least one of a content element, a relationship, a transition, a size adjustment, and a position adjustment. 3. The computer-implemented system of claim 1, wherein the attribute of one of the two content elements comprises size, opacity, degree of curvature, color, brightness, hue, and contrast. 4. The computer-implemented system of claim 1, wherein the image transformation on one of the plurality of content elements comprises a change over time on a relationship among the one of the plurality of content elements and another content element. 5. The computer-implemented system of claim 1, wherein the initial configuration further comprises a duration of the piece of animated content, a starting time of the one of the plurality of content elements, and an end time of the one of the plurality of content elements. 6. 
The computer-implemented system of claim 1, further comprising: an assembling module adapted to assemble the animated content configuration to a piece of animated content by arranging the plurality of content elements according to the one or more relationships and compiling an animation of the one of the plurality of content elements according to the transition. 7. The computer-implemented system of claim 6, wherein the assembling module is further adapted to store the piece of animated content to a multimedia digital file. 8. The computer-implemented system of claim 7, further comprising: a computer coupled to an electronically addressable display, and wherein the computer is programmed to cause the multimedia digital file to be rendered on the electronically addressable display. 9. The computer-implemented system of claim 1, wherein the plurality of rules further comprises at least one of rules on content elements, rules on size adjustments, rules on position adjustments, rules on relationships, rules on transitions, and rules on visual perception. 10. The computer-implemented system of claim 6, further comprising: a visual attention model (VAM) evaluator adapted to apply a VAM on the assembled piece of animated content to generate a VAM output and determine if the assembled piece of animated content satisfies the plurality of rules based on the VAM output. 11. 
A method for animated content generation, comprising: receiving a plurality of rules on content generation, the plurality of rules comprising a rule on probability factor specifying probability of content configurations including a configuration element that has a particular attribute value; and generating, by a processor, an animated content configuration in accordance with the rule on probability factor, wherein the animated content configuration comprises an initial configuration and a transition, wherein the initial configuration comprises a plurality of content elements and one or more relationships among the plurality of content elements, wherein the transition comprises an image transformation on one of the plurality of content elements, and wherein the animated content configuration is operable to be assembled to a piece of animated content. 12. The method of claim 11, wherein the configuration element comprises at least one of a content element, a relationship, a transition, a size adjustment, and a position adjustment. 13. The method of claim 11, wherein the image transformation on one of the plurality of content elements comprises a change over time on an attribute of the one of the plurality of content elements, and the attribute comprises size, opacity, degree of curvature, color, brightness, hue, and contrast. 14. The method of claim 11, further comprising: assembling, by a processor, the animated content configuration to a piece of animated content by arranging the plurality of content elements according to the one or more relationships and compiling an animation of the one of the plurality of content elements according to the transition. 15. The method of claim 11, wherein the plurality of rules further comprises at least one of rules on content elements, rules on relationships, rules on position adjustments, rules on size adjustments, and rules on visual perception.
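The "rule on probability factor" recited above assigns a probability that a generated configuration includes an element with a particular attribute value. One way to read that, sketched below under invented rule shapes and attribute names, is a generator that samples each rule independently when building a configuration.

```python
import random

# Hedged sketch of a probability-factor rule: each rule gives an attribute
# value, the probability a configuration carries it, and a fallback value.
# The generator samples configurations accordingly. Rule shapes, attribute
# names, and transitions here are assumptions for illustration only.

def generate_configuration(rules, rng):
    """Build one animated-content configuration by sampling each rule."""
    config = {"elements": ["headline", "logo"],
              "relationship": "headline-above-logo"}
    transition = {}
    for attribute, (value, probability, fallback) in rules.items():
        # Apply the rule's value with the stated probability factor.
        transition[attribute] = value if rng.random() < probability else fallback
    config["transition"] = transition
    return config


rules = {
    "opacity": ("fade-in", 0.9, "constant"),  # ~90% of configs fade in
    "size":    ("grow",    0.2, "constant"),  # ~20% of configs grow
}
rng = random.Random(42)  # seeded so the sketch is reproducible
configs = [generate_configuration(rules, rng) for _ in range(1000)]
fade_share = sum(c["transition"]["opacity"] == "fade-in"
                 for c in configs) / len(configs)
print(round(fade_share, 2))  # close to the 0.9 probability factor
```

Over many generated configurations the empirical share of each attribute value converges on its probability factor, which is the property such a rule is presumably meant to enforce.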
2,600
10,178
10,178
14,813,704
2,689
A method according to an exemplary aspect of the present disclosure includes, among other things, controlling a vehicle system to refine a travel range estimation of an electrified vehicle if a desired destination cannot be reached under current driving conditions. The controlling step includes warning a driver about a travel range based on the driver's driving habits, coaching the driver to modify the driving habits, and adjusting operation of at least one vehicle subsystem.
1. A method, comprising: controlling a vehicle system to refine a travel range estimation of an electrified vehicle if a desired destination cannot be reached under current driving conditions, the controlling step including warning a driver about a travel range based on the driver's driving habits, coaching the driver to modify the driving habits, and adjusting operation of at least one vehicle subsystem. 2. The method as recited in claim 1, wherein the vehicle system includes a control module configured to execute the controlling step. 3. The method as recited in claim 1, wherein the refined travel range estimation is based on a combination of battery information, driver information and telematics information. 4. The method as recited in claim 3, wherein the battery information includes at least battery usable energy, battery state of charge, battery power capabilities, and battery thermal states. 5. The method as recited in claim 3, wherein the driver information includes at least the driving habits and driver preferences of the driver. 6. The method as recited in claim 3, wherein the telematics information includes at least traffic conditions, weather conditions, and road conditions. 7. The method as recited in claim 1, wherein the warning step and the coaching step include issuing a visual or audible output to the driver. 8. The method as recited in claim 1, wherein the adjusting step includes reducing an auxiliary power usage associated with the at least one vehicle subsystem. 9. The method as recited in claim 1, wherein the adjusting step includes automatically lowering the travel speed of the electrified vehicle. 10. The method as recited in claim 1, wherein the adjusting step includes automatically maximizing regenerative braking. 11. The method as recited in claim 1, comprising, after calculating the refined travel range estimation, determining whether the desired destination is reachable based on the refined travel range estimation. 12. 
The method as recited in claim 11, comprising rerouting the electrified vehicle to a nearby charging station if the desired destination is not reachable based on the refined travel range estimation. 13. The method as recited in claim 1, wherein the coaching step is performed only if manually turned ON. 14. The method as recited in claim 1, comprising selecting the desired destination prior to performing the controlling step. 15. The method as recited in claim 1, comprising collecting battery information, driver information and telematics information prior to performing the controlling step. 16. A vehicle system, comprising: a high voltage battery pack; a vehicle subsystem selectively powered by said high voltage battery pack; and a control system configured to warn a driver about a travel range based on the driver's driving habits and adjust operation of said vehicle subsystem if a desired destination cannot be reached under current driving conditions. 17. The vehicle system as recited in claim 16, wherein said control system is configured to coach the driver to modify the driving habits. 18. The vehicle system as recited in claim 16, wherein said control system includes a battery management system. 19. The vehicle system as recited in claim 16, wherein said control system is configured to receive battery information from said high voltage battery pack and navigation information from a navigation system. 20. The vehicle system as recited in claim 16, wherein said control system is configured to receive driver information and telematics information.
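The range-refinement logic claimed above (combine battery, driver and telematics information into a refined estimate, then warn, coach and adjust when the destination is out of reach) can be sketched as follows. This is a minimal illustrative sketch only, not the patent's implementation: every class name, the scaling factors, and the simple energy model are assumptions introduced for illustration.

```python
# Hypothetical sketch of the claimed range-refinement method; all names,
# factors, and the linear energy model are assumptions, not the patent's
# actual implementation.
from dataclasses import dataclass

@dataclass
class BatteryInfo:            # claim 4: usable energy, state of charge, etc.
    usable_energy_kwh: float
    state_of_charge: float

@dataclass
class DriverInfo:             # claim 5: driving habits and preferences
    aggression_factor: float  # >1.0 models energy use above nominal

@dataclass
class TelematicsInfo:         # claim 6: traffic, weather, road conditions
    condition_factor: float   # >1.0 models conditions that raise consumption

NOMINAL_KWH_PER_KM = 0.15     # assumed nominal consumption rate

def refined_range_km(batt: BatteryInfo, drv: DriverInfo,
                     tel: TelematicsInfo) -> float:
    """Combine battery, driver and telematics information (claim 3)."""
    per_km = NOMINAL_KWH_PER_KM * drv.aggression_factor * tel.condition_factor
    return batt.usable_energy_kwh / per_km

def control_step(range_km: float, dist_to_destination_km: float) -> list:
    """If the destination is unreachable, warn, coach and adjust (claim 1)."""
    actions = []
    if dist_to_destination_km > range_km:
        actions.append("warn driver about travel range")
        actions.append("coach driver to modify driving habits")
        actions.append("reduce auxiliary power usage")        # claim 8
        actions.append("lower travel speed")                  # claim 9
        actions.append("maximize regenerative braking")       # claim 10
        actions.append("reroute to nearby charging station")  # claim 12
    return actions
```

With 30 kWh usable and both factors at their nominal values this yields a 200 km estimate; an aggressive driver (factor 1.2) shrinks it to roughly 167 km, which is the kind of habit-dependent refinement the claims describe.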
TechCenter: 2,600
Unnamed: 0: 10,179
level_0: 10,179
ApplicationNumber: 15,474,275
ArtUnit: 2,667
An apparatus is provided for automated object tracking in a video feed. The apparatus receives and sequentially processes a plurality of frames of the video feed to track objects. In particular, a plurality of objects in a frame are detected and assigned to a respective track fragment. A kinematic, visual, temporal or machine learning-based feature of an object is then identified and stored in metadata associated with the track fragment. A track fragment for the object is identified in earlier frames based on a comparison of the feature and a corresponding feature in metadata associated with the earlier frames. The track fragments for the object in the frame and the object in the earlier frames are linked to form a track of the object. The apparatus then outputs the video feed with the track of the object as an overlay thereon.
1. A method for automated object tracking in a video feed, the method comprising: receiving a video feed including a plurality of frames; sequentially processing each frame of the plurality of frames, including at least: detecting a plurality of objects in the frame, and for each object of the plurality of objects, assigning the object to a track fragment for the object in the frame, wherein the plurality of objects are detected and assigned using computer vision, machine learning, and a catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects; and identifying a kinematic, visual, temporal or machine learning-based feature of the object, and storing the kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment to which the object is assigned; and further for at least some of the plurality of frames, identifying a track fragment for the object in one or more earlier frames based on a comparison of the kinematic, visual, temporal or machine learning-based feature of the object and a corresponding kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment for the object in the one or more earlier frames; and linking the track fragment for the object in the frame and the track fragment for the object in the one or more earlier frames to form a longer track fragment that is a track of the object; and for each object of the plurality of objects, transforming the track of the object to a common frame of reference to generate a common reference frame having the tracks of the plurality of objects mapped thereto; and outputting the video feed with the common reference frame and the mapped tracks of the plurality of objects as an overlay thereon. 2. 
The method of claim 1, wherein identifying the track fragment for the object in the one or more earlier frames includes identifying the track fragment in an instance in which a statistical variance between the kinematic, visual, temporal or machine learning-based feature and the corresponding kinematic, visual, temporal or machine learning-based feature is below a predetermined threshold. 3. The method of claim 1 further comprising maintaining a database of active track fragments including the track fragment for an object detected in the frame or in an earlier frame within a threshold number of frames, and a database of suspended track fragments including the track fragment for an object not detected in the frame or in an earlier frame within the threshold number of frames, and wherein identifying the track fragment for the object in the one or more earlier frames includes searching the database of active track fragments or the database of suspended track fragments to identify the track fragment for the object within the track fragments maintained therein. 4. The method of claim 3, wherein in an instance in which the track fragment for the object in the one or more earlier frames is identified in the database of suspended track fragments, identifying the track fragment for the object in the one or more earlier frames further includes moving the track fragment for the object in the one or more earlier frames from the database of suspended track fragments to the database of active track fragments, the track fragment for an object not detected in the frame or in an earlier frame within a second threshold number of frames being deleted from the database of suspended track fragments. 5. 
The method of claim 1, wherein sequentially processing each frame of the plurality of frames further includes assigning a unique identifier to the object in a first instance in which the object is detected, wherein identifying the kinematic, visual, temporal or machine learning-based feature of the object includes associating the kinematic, visual, temporal or machine learning-based feature with the unique identifier in the metadata associated with the track fragment to which the object is assigned, and wherein outputting the video feed includes generating corresponding workflow analytics for an object in the video feed, the workflow analytics being associated with the unique identifier of the object. 6. The method of claim 1, wherein the video feed includes a plurality of video feeds, and wherein receiving the video feed and sequentially processing each frame of the plurality of video frames includes receiving the video feed and sequentially processing each frame of the plurality of video frames for each of at least a first video feed and a second video feed, and in response to at least one object being detected in a frame of the first video feed and a frame of the second video feed, the method further comprising linking the track fragment for the object in the frame of the first video feed and the track fragment for the object in the frame of the second video feed. 7. The method of claim 1, wherein at least one frame of the plurality of frames includes an occlusion, and detecting the plurality of objects in the frame includes detecting at least one occluded object of the plurality of objects, and assigning the occluded object to a track fragment using computer vision, machine learning, and the catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects. 8. 
An apparatus for automated object tracking in a video feed, the apparatus comprising a processor and a memory storing executable instructions that, in response to execution by the processor, cause the apparatus to at least: receive a video feed including a plurality of frames; sequentially process each frame of the plurality of frames, including at least: detecting a plurality of objects in the frame, and for each object of the plurality of objects, assigning the object to a track fragment for the object in the frame, wherein the plurality of objects are detected and assigned using computer vision, machine learning, and a catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects; and identifying a kinematic, visual, temporal or machine learning-based feature of the object, and storing the kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment to which the object is assigned; and further for at least some of the plurality of frames, identifying a track fragment for the object in one or more earlier frames based on a comparison of the kinematic, visual, temporal or machine learning-based feature of the object and a corresponding kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment for the object in the one or more earlier frames; and linking the track fragment for the object in the frame and the track fragment for the object in the one or more earlier frames to form a longer track fragment that is a track of the object; and for each object of the plurality of objects, transform the track of the object to a common frame of reference to generate a common reference frame having the tracks of the plurality of objects mapped thereto; and output the video feed with the common reference frame and the mapped tracks of the plurality of objects as an overlay thereon. 9. 
The apparatus of claim 8, wherein the apparatus identifying the track fragment for the object in the one or more earlier frames includes identifying the track fragment in an instance in which a statistical variance between the kinematic, visual, temporal or machine learning-based feature and the corresponding kinematic, visual, temporal or machine learning-based feature is below a predetermined threshold. 10. The apparatus of claim 8, wherein the memory stores executable instructions that, in response to execution by the processor, cause the apparatus to further maintain a database of active track fragments including the track fragment for any object detected in the frame or in an earlier frame within a threshold number of frames, and a database of suspended track fragments including the track fragment for any object not detected in the frame or in an earlier frame within the threshold number of frames, and wherein the apparatus identifying the track fragment for the object in the one or more earlier frames includes searching the database of active track fragments or the database of suspended track fragments to identify the track fragment for the object within the track fragments maintained therein. 11. The apparatus of claim 10, wherein in an instance in which the track fragment for the object in the one or more earlier frames is identified in the database of suspended track fragments, the apparatus identifying the track fragment for the object in the one or more earlier frames further includes moving the track fragment for the object in the one or more earlier frames from the database of suspended track fragments to the database of active track fragments, the track fragment for any object not detected in the frame or in an earlier frame within a second threshold number of frames being deleted from the database of suspended track fragments. 12. 
The apparatus of claim 8, wherein the apparatus being caused to sequentially process each frame of the plurality of frames further includes assigning a unique identifier to the object in a first instance in which the object is detected, wherein the apparatus identifying the kinematic, visual, temporal or machine learning-based feature of the object includes associating the kinematic, visual, temporal or machine learning-based feature with the unique identifier in the metadata associated with the track fragment to which the object is assigned, and wherein the apparatus being caused to output the video feed includes generating corresponding workflow analytics for an object in the video feed, the workflow analytics being associated with the unique identifier of the object. 13. The apparatus of claim 8, wherein the video feed includes a plurality of video feeds, and wherein the apparatus being caused to receive the video feed and sequentially process each frame of the plurality of video frames includes receiving the video feed and sequentially process each frame of the plurality of video frames for each of at least a first video feed and a second video feed, and in response to at least one object being detected in a frame of the first video feed and a frame of the second video feed, the apparatus further linking the track fragment for the object in the frame of the first video feed and the track fragment for the object in the frame of the second video feed. 14. The apparatus of claim 8, wherein at least one frame of the plurality of frames includes an occlusion, and the apparatus detecting the plurality of objects in the frame includes detecting at least one occluded object of the plurality of objects, and assigning the occluded object to a track fragment using computer vision, machine learning, and the catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects. 15. 
A computer-readable storage medium for automated object tracking in a video feed, the computer-readable storage medium having computer-readable program code stored therein that, in response to execution by a processor, causes an apparatus to at least: receive a video feed including a plurality of frames; sequentially process each frame of the plurality of frames, including at least: detecting a plurality of objects in the frame, and for each object of the plurality of objects, assigning the object to a track fragment for the object in the frame, wherein the plurality of objects are detected and assigned using computer vision, machine learning, and a catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects; and identifying a kinematic, visual, temporal or machine learning-based feature of the object, and storing the kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment to which the object is assigned; and further for at least some of the plurality of frames, identifying a track fragment for the object in one or more earlier frames based on a comparison of the kinematic, visual, temporal or machine learning-based feature of the object and a corresponding kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment for the object in the one or more earlier frames; and linking the track fragment for the object in the frame and the track fragment for the object in the one or more earlier frames to form a longer track fragment that is a track of the object; and for each object of the plurality of objects, transform the track of the object to a common frame of reference to generate a common reference frame having the tracks of the plurality of objects mapped thereto; and output the video feed with the common reference frame and the mapped tracks of the plurality of objects as an overlay thereon. 16. 
The computer-readable storage medium of claim 15, wherein the apparatus identifying the track fragment for the object in the one or more earlier frames includes identifying the track fragment in an instance in which a statistical variance between the kinematic, visual, temporal or machine learning-based feature and the corresponding kinematic, visual, temporal or machine learning-based feature is below a predetermined threshold. 17. The computer-readable storage medium of claim 15 having computer-readable program code stored therein that, in response to execution by a processor, causes the apparatus to further maintain a database of active track fragments including the track fragment for an object detected in the frame or in an earlier frame within a threshold number of frames, and a database of suspended track fragments including the track fragment for an object not detected in the frame or in an earlier frame within the threshold number of frames, and wherein the apparatus identifying the track fragment for the object in the one or more earlier frames includes searching the database of active track fragments or the database of suspended track fragments to identify the track fragment for the object within the track fragments maintained therein. 18. The computer-readable storage medium of claim 17, wherein in an instance in which the track fragment for the object in the one or more earlier frames is identified in the database of suspended track fragments, the apparatus identifying the track fragment for the object in the one or more earlier frames further includes moving the track fragment for the object in the one or more earlier frames from the database of suspended track fragments to the database of active track fragments, the track fragment for any object not detected in the frame or in an earlier frame within a second threshold number of frames being deleted from the database of suspended track fragments. 19. 
The computer-readable storage medium of claim 15, wherein the apparatus being caused to sequentially process each frame of the plurality of frames further includes assigning a unique identifier to the object in a first instance in which the object is detected, wherein the identifying the kinematic, visual, temporal or machine learning-based feature of the object includes associating the kinematic, visual, temporal or machine learning-based feature with the unique identifier in the metadata associated with the track fragment to which the object is assigned, and wherein the apparatus being caused to output the video feed includes generating corresponding workflow analytics for the object, the workflow analytics being associated with the unique identifier. 20. The computer-readable storage medium of claim 15, wherein the video feed includes a plurality of video feeds, and wherein the apparatus being caused to receive the video feed and sequentially process each frame of the plurality of video frames includes receiving the video feed and sequentially processing each frame of the plurality of video frames for each of at least a first video feed and a second video feed, and in response to at least one object being detected in a frame of the first video feed and a frame of the second video feed, the apparatus further linking the track fragment for the object in the frame of the first video feed and the track fragment for the object in the frame of the second video feed. 21. The computer-readable storage medium of claim 20, wherein at least one frame of the plurality of frames includes an occlusion, and the apparatus detecting the plurality of objects in the frame includes being caused to detect at least one occluded object of the plurality of objects, and assigning the occluded object to a track fragment using computer vision, machine learning, and the catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects.
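The track-fragment bookkeeping described in the claims (link a detection to an existing fragment when its feature variance is below a threshold, keep an active database, move stale fragments to a suspended database, and delete fragments unseen for a second threshold) can be sketched as follows. This is an assumed illustrative sketch, not the patent's implementation: the scalar feature, the thresholds, and all names are hypothetical, and a real system would match multi-dimensional kinematic, visual, temporal or learned features.

```python
# Hypothetical sketch of the claimed active/suspended track-fragment databases
# (claims 2-4); the scalar feature and thresholds are illustrative assumptions.

ACTIVE_LIMIT = 5       # frames unseen before an active fragment is suspended (claim 3)
SUSPEND_LIMIT = 30     # frames unseen before a suspended fragment is deleted (claim 4)
MATCH_THRESHOLD = 1.0  # max feature variance to link fragments (claim 2)

class TrackDatabase:
    def __init__(self):
        self.active = {}      # fragment id -> (feature, last_seen_frame)
        self.suspended = {}
        self._next_id = 0

    def observe(self, feature: float, frame: int) -> int:
        """Link a detection to an existing fragment, or start a new one."""
        # Search active fragments first, then suspended ones (claim 3).
        for db in (self.active, self.suspended):
            for fid, (feat, _seen) in list(db.items()):
                if abs(feat - feature) < MATCH_THRESHOLD:   # claim 2
                    db.pop(fid)
                    # Reactivate (if suspended) and update the fragment (claim 4).
                    self.active[fid] = (feature, frame)
                    return fid
        fid = self._next_id                                 # new track fragment
        self._next_id += 1
        self.active[fid] = (feature, frame)
        return fid

    def age(self, frame: int) -> None:
        """Suspend stale active fragments; delete very stale suspended ones."""
        for fid, (_feat, seen) in list(self.active.items()):
            if frame - seen > ACTIVE_LIMIT:
                self.suspended[fid] = self.active.pop(fid)
        for fid, (_feat, seen) in list(self.suspended.items()):
            if frame - seen > SUSPEND_LIMIT:
                del self.suspended[fid]
```

Under this sketch, two detections whose features differ by less than the threshold share one fragment id, an object lost behind an occlusion migrates to the suspended database, and a later matching detection reactivates the same id, which is exactly the fragment-linking behavior the claims recite.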
An apparatus is provided for automated object tracking in a video feed. The apparatus receives and sequentially processes a plurality of frames of the video feed to track objects. In particular, a plurality of objects in a frame are detected and assigned to a respective track fragment. A kinematic, visual, temporal or machine learning-based feature of an object is then identified and stored in metadata associated with the track fragment. A track fragment for the object is identified in earlier frames based on a comparison of the feature and a corresponding feature in metadata associated with the earlier frames. The track fragments for the object in the frame and the object in the earlier frames are linked to form a track of the object. The apparatus then outputs the video feed with the track of the object as an overlay thereon.1. A method for automated object tracking in a video feed, the method comprising: receiving a video feed including a plurality of frames; sequentially processing each frame of the plurality of frames, including at least: detecting a plurality of objects in the frame, and for each object of the plurality of objects, assigning the object to a track fragment for the object in the frame, wherein the plurality of objects are detected and assigned using computer vision, machine learning, and a catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects; and identifying a kinematic, visual, temporal or machine learning-based feature of the object, and storing the kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment to which the object is assigned; and further for at least some of the plurality of frames, identifying a track fragment for the object in one or more earlier frames based on a comparison of the kinematic, visual, temporal or machine learning-based feature of the object and a corresponding kinematic, visual, temporal or machine learning-based 
feature in metadata associated with the track fragment for the object in the one or more earlier frames; and linking the track fragment for the object in the frame and the track fragment for the object in the one or more earlier frames to form a longer track fragment that is a track of the object; and for each object of the plurality of objects, transforming the track of the object to a common frame of reference to generate a common reference frame having the tracks of the plurality of objects mapped thereto; and outputting the video feed with the common reference frame and the mapped tracks of the plurality of objects as an overlay thereon. 2. The method of claim 1, wherein identifying the track fragment for the object in the one or more earlier frames includes identifying the track fragment in an instance in which a statistical variance between the kinematic, visual, temporal or machine learning-based feature and the corresponding kinematic, visual, temporal or machine learning-based feature is below a predetermined threshold. 3. The method of claim 1 further comprising maintaining a database of active track fragments including the track fragment for an object detected in the frame or in an earlier frame within a threshold number of frames, and a database of suspended track fragments including the track fragment for an object not detected in the frame or in an earlier frame within the threshold number of frames, and wherein identifying the track fragment for the object in the one or more earlier frames includes searching the database of active track fragments or the database of suspended track fragments to identify the track fragment for the object within the track fragments maintained therein. 4. 
The method of claim 3, wherein in an instance in which the track fragment for the object in the one or more earlier frames is identified in the database of suspended track fragments, identifying the track fragment for the object in the one or more earlier frames further includes moving the track fragment for the object in the one or more earlier frames from the database of suspended track fragments to the database of active track fragments, the track fragment for an object not detected in the frame or in an earlier frame within a second threshold number of frames being deleted from the database of suspended track fragments. 5. The method of claim 1, wherein sequentially processing each frame of the plurality of frames further includes assigning a unique identifier to the object in a first instance in which the object is detected, wherein identifying the kinematic, visual, temporal or machine learning-based feature of the object includes associating the kinematic, visual, temporal or machine learning-based feature with the unique identifier in the metadata associated with the track fragment to which the object is assigned, and wherein outputting the video feed includes generating corresponding workflow analytics for an object in the video feed, the workflow analytics being associated with the unique identifier of the object. 6. 
The method of claim 1, wherein the video feed includes a plurality of video feeds, and wherein receiving the video feed and sequentially processing each frame of the plurality of video frames includes receiving the video feed and sequentially processing each frame of the plurality of video frames for each of at least a first video feed and a second video feed, and in response to at least one object being detected in a frame of the first video feed and a frame of the second video feed, the method further comprising linking the track fragment for the object in the frame of the first video feed and the track fragment for the object in the frame of the second video feed. 7. The method of claim 1, wherein at least one frame of the plurality of frames includes an occlusion, and detecting the plurality of objects in the frame includes detecting at least one occluded object of the plurality of objects, and assigning the occluded object to a track fragment using computer vision, machine learning, and the catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects. 8. 
An apparatus for automated object tracking in a video feed, the apparatus comprising a processor and a memory storing executable instructions that, in response to execution by the processor, cause the apparatus to at least: receive a video feed including a plurality of frames; sequentially process each frame of the plurality of frames, including at least: detecting a plurality of objects in the frame, and for each object of the plurality of objects, assigning the object to a track fragment for the object in the frame, wherein the plurality of objects are detected and assigned using computer vision, machine learning, and a catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects; and identifying a kinematic, visual, temporal or machine learning-based feature of the object, and store the kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment to which the object is assigned; and further for at least some of the plurality of frames, identifying a track fragment for the object in one or more earlier frames based on a comparison of the kinematic, visual, temporal or machine learning-based feature of the object and a corresponding kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment for the object in the one or more earlier frames; and linking the track fragment for the object in the frame and the track fragment for the object in the one or more earlier frames to form a longer track fragment that is a track of the object; and for each object of the plurality of objects, transform the track of the object to a common frame of reference to generate a common reference frame having the tracks of the plurality of objects mapped thereto; and output the video feed with the common reference frame and the mapped tracks of the plurality of objects as an overlay thereon. 9. 
The apparatus of claim 8, wherein the apparatus identifying the track fragment for the object in the one or more earlier frames includes identifying the track fragment in an instance in which a statistical variance between the kinematic, visual, temporal or machine learning-based feature and the corresponding kinematic, visual, temporal or machine learning-based feature is below a predetermined threshold. 10. The apparatus of claim 8, wherein the memory stores executable instructions that, in response to execution by the processor, cause the apparatus to further maintain a database of active track fragments including the track fragment for any object detected in the frame or in an earlier frame within a threshold number of frames, and a database of suspended track fragments including the track fragment for any object not detected in the frame or in an earlier frame within the threshold number of frames, and wherein the apparatus identifying the track fragment for the object in the one or more earlier frames includes searching the database of active track fragments or the database of suspended track fragments to identify the track fragment for the object within the track fragments maintained therein. 11. The apparatus of claim 10, wherein in an instance in which the track fragment for the object in the one or more earlier frames is identified in the database of suspended track fragments, the apparatus identifying the track fragment for the object in the one or more earlier frames further includes moving the track fragment for the object in the one or more earlier frames from the database of suspended track fragments to the database of active track fragments, the track fragment for any object not detected in the frame or in an earlier frame within a second threshold number of frames being deleted from the database of suspended track fragments. 12. 
The apparatus of claim 8, wherein the apparatus being caused to sequentially process each frame of the plurality of frames further includes assigning a unique identifier to the object in a first instance in which the object is detected, wherein the apparatus identifying the kinematic, visual, temporal or machine learning-based feature of the object includes associating the kinematic, visual, temporal or machine learning-based feature with the unique identifier in the metadata associated with the track fragment to which the object is assigned, and wherein the apparatus being caused to output the video feed includes generating corresponding workflow analytics for an object in the video feed, the workflow analytics being associated with the unique identifier of the object. 13. The apparatus of claim 8, wherein the video feed includes a plurality of video feeds, and wherein the apparatus being caused to receive the video feed and sequentially process each frame of the plurality of video frames includes receiving the video feed and sequentially process each frame of the plurality of video frames for each of at least a first video feed and a second video feed, and in response to at least one object being detected in a frame of the first video feed and a frame of the second video feed, the apparatus further linking the track fragment for the object in the frame of the first video feed and the track fragment for the object in the frame of the second video feed. 14. The apparatus of claim 8, wherein at least one frame of the plurality of frames includes an occlusion, and the apparatus detecting the plurality of objects in the frame includes detecting at least one occluded object of the plurality of objects, and assigning the occluded object to a track fragment using computer vision, machine learning, and the catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects. 15. 
A computer-readable storage medium for automated object tracking in a video feed, the computer-readable storage medium having computer-readable program code stored therein that, in response to execution by a processor, causes an apparatus to at least: receive a video feed including a plurality of frames; sequentially process each frame of the plurality of frames, including at least: detecting a plurality of objects in the frame, and for each object of the plurality of objects, assigning the object to a track fragment for the object in the frame, wherein the plurality of objects are detected and assigned using computer vision, machine learning, and a catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects; and identifying a kinematic, visual, temporal or machine learning-based feature of the object, and storing the kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment to which the object is assigned; and further for at least some of the plurality of frames, identifying a track fragment for the object in one or more earlier frames based on a comparison of the kinematic, visual, temporal or machine learning-based feature of the object and a corresponding kinematic, visual, temporal or machine learning-based feature in metadata associated with the track fragment for the object in the one or more earlier frames; and linking the track fragment for the object in the frame and the track fragment for the object in the one or more earlier frames to form a longer track fragment that is a track of the object; and for each object of the plurality of objects, transform the track of the object to a common frame of reference to generate a common reference frame having the tracks of the plurality of objects mapped thereto; and output the video feed with the common reference frame and the mapped tracks of the plurality of objects as an overlay thereon. 16. 
The computer-readable storage medium of claim 15, wherein the apparatus identifying the track fragment for the object in the one or more earlier frames includes identifying the track fragment in an instance in which a statistical variance between the kinematic, visual, temporal or machine learning-based feature and the corresponding kinematic, visual, temporal or machine learning-based feature is below a predetermined threshold. 17. The computer-readable storage medium of claim 15 having computer-readable program code stored therein that, in response to execution by a processor, causes the apparatus to further maintain a database of active track fragments including the track fragment for an object detected in the frame or in an earlier frame within a threshold number of frames, and a database of suspended track fragments including the track fragment for an object not detected in the frame or in an earlier frame within the threshold number of frames, and wherein the apparatus identifying the track fragment for the object in the one or more earlier frames includes searching the database of active track fragments or the database of suspended track fragments to identify the track fragment for the object within the track fragments maintained therein. 18. The computer-readable storage medium of claim 17, wherein in an instance in which the track fragment for the object in the one or more earlier frames is identified in the database of suspended track fragments, the apparatus identifying the track fragment for the object in the one or more earlier frames further includes moving the track fragment for the object in the one or more earlier frames from the database of suspended track fragments to the database of active track fragments, the track fragment for any object not detected in the frame or in an earlier frame within a second threshold number of frames being deleted from the database of suspended track fragments. 19. 
The computer-readable storage medium of claim 15, wherein the apparatus being caused to sequentially process each frame of the plurality of frames further includes assigning a unique identifier to the object in a first instance in which the object is detected, wherein the identifying the kinematic, visual, temporal or machine learning-based feature of the object includes associating the kinematic, visual, temporal or machine learning-based feature with the unique identifier in the metadata associated with the track fragment to which the object is assigned, and wherein the apparatus being caused to output the video feed includes generating corresponding workflow analytics for the object, the workflow analytics being associated with the unique identifier. 20. The computer-readable storage medium of claim 15, wherein the video feed includes a plurality of video feeds, and wherein the apparatus being caused to receive the video feed and sequentially process each frame of the plurality of video frames includes receiving the video feed and sequentially processing each frame of the plurality of video frames for each of at least a first video feed and a second video feed, and in response to at least one object being detected in a frame of the first video feed and a frame of the second video feed, the apparatus further linking the track fragment for the object in the frame of the first video feed and the track fragment for the object in the frame of the second video feed. 21. The computer-readable storage medium of claim 20, wherein at least one frame of the plurality of frames includes an occlusion, and the apparatus detecting the plurality of objects in the frame includes being caused to detect at least one occluded object of the plurality of objects, and assigning the occluded object to a track fragment using computer vision, machine learning, and the catalog of kinematic, visual, temporal or machine learning-based features of identifiable objects.
2,600
10,180
10,180
15,947,196
2,683
A system includes a processor configured to determine an applicable pre-defined vehicle system state set, defining preferred vehicle system states when a driver is away from a vehicle. The processor is also configured to determine a driver exit-event while a vehicle state in the state set varies from a preferred setting and notify a driver mobile device of the variance, responsive to the exit event.
1. A system comprising: a processor configured to: determine an applicable pre-defined vehicle system state set, defining preferred vehicle system states when a driver is away from a vehicle; determine a driver exit-event while a vehicle state in the state set varies from a preferred setting; and notify a driver mobile device of the variance, responsive to the exit event. 2. The system of claim 1, wherein the vehicle state includes window state. 3. The system of claim 2, wherein the preferred setting defines a window position. 4. The system of claim 1, wherein the vehicle state includes an onboard charger state. 5. The system of claim 4, wherein the preferred setting defines a charging-use state. 6. The system of claim 1, wherein the vehicle state includes a vehicle light state. 7. The system of claim 6, wherein the preferred setting defines a light power state. 8. The system of claim 6, wherein the vehicle lights include hazard lights. 9. The system of claim 6, wherein the vehicle lights include headlights. 10. The system of claim 6, wherein the vehicle lights include interior lights. 11. The system of claim 1, wherein the vehicle state includes a vehicle access state. 12. The system of claim 11, wherein the preferred setting defines a vehicle access condition. 13. The system of claim 12, wherein the condition includes a lock state. 14. The system of claim 12, wherein the condition includes a position state. 15. The system of claim 1, wherein the vehicle state includes a vehicle level state. 16. The system of claim 15, wherein the preferred setting includes a maximum off-level measurement. 17. The system of claim 16, wherein the vehicle state further includes an emergency brake state and wherein the preferred setting includes an emergency brake engagement setting accommodating the vehicle level state. 18. 
The system of claim 1, wherein the vehicle state includes an overall passive power draw and wherein the preferred setting includes a maximum power draw accommodating at least a current battery charge level. 19. A system comprising: a processor configured to: determine that a vehicle system state does not match a preferred state setting, responsive to determining that a driver has exited a vehicle; send a state notification message, including the present system state, to a driver mobile device; receive a state modification response from the mobile device; and adjust the vehicle system state in accordance with the state modification response. 20. A system comprising: a processor configured to: determine that a vehicle system state does not match a preferred state setting, responsive to determining that a driver has exited a vehicle; and revert the vehicle system state to the preferred state setting automatically, responsive to the determination that the vehicle system state does not match the preferred state setting.
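The core operation recited across these claims is a comparison of current vehicle system states against a pre-defined preferred set on a driver exit-event, with each variance pushed to the driver's mobile device (claim 19) or reverted automatically (claim 20). A minimal sketch, in which the state names and the preferred set are assumptions for illustration:

```python
# Illustrative exit-event check: compare current vehicle system states
# against the pre-defined preferred state set and collect variances
# to notify the driver's mobile device. State names are assumptions.

PREFERRED = {
    "windows": "closed",
    "headlights": "off",
    "doors": "locked",
}

def check_on_exit(current_states, preferred=PREFERRED):
    """Return the variances, as state -> (current, preferred) pairs,
    that would be pushed to the driver's mobile device on exit."""
    variances = {}
    for name, want in preferred.items():
        have = current_states.get(name)
        if have != want:
            variances[name] = (have, want)
    return variances
```

Claim 19's flow would then send these variances in a notification, await a state-modification response, and apply it; claim 20's variant would instead write the preferred values back to the vehicle systems directly.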
2,600
10,181
10,181
15,368,011
2,627
In various embodiments, a user frustration monitoring and control system may be part of or otherwise be operably coupled to a device to detect and respond to user frustration with the device or with other components in communication with the device. A user frustration monitoring and control system may detect when, out of frustration, a user jars or hits a set-top box or other device in communication with the set-top box, such as a remote-control device or monitor, and perform an action to address the detected user frustration. This action may be, for example, performing a diagnostic action and/or communicating a helpful message or instructions to the user.
1. A system for detecting user frustration with an electronic device, the system comprising: at least one processor; at least one memory coupled to the at least one processor; a motion sensor affixed to the electronic device and coupled to the at least one processor and the at least one memory, wherein the at least one memory has computer-executable instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to: receive an electronic signal from the motion sensor that is indicative of the electronic device being jarred by a user; in response to receiving the electronic signal from the motion sensor that is indicative of the electronic device being jarred by a user, determine whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration; and in response to the determination whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration, perform an electronic action to address the user frustration based on a determination that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration. 2. 
The system of claim 1, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to determine whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration by at least causing the at least one processor to: obtain a measurement of an amount of dynamic acceleration of the electronic device based on the electronic signal received from the motion sensor affixed to the electronic device; and determine whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration based on the obtained measurement of the amount of dynamic acceleration of the electronic device. 3. The system of claim 2, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to determine whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration by at least causing the at least one processor to: compare the measurement of the amount of dynamic acceleration to a threshold amount of dynamic acceleration; determine whether the measurement of the amount of dynamic acceleration exceeds the threshold amount of dynamic acceleration; in response to a determination that the measurement of the amount of dynamic acceleration exceeds the threshold amount of dynamic acceleration, determine that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration; and in response to a determination that the measurement of the amount of dynamic acceleration does not exceed the threshold amount of dynamic acceleration, determine that the electronic signal received from the motion sensor that is indicative 
of the electronic device being jarred by a user is not indicative of user frustration. 4. The system of claim 2, wherein the computer-executable instructions, when executed by the at least one processor, further cause the at least one processor to: obtain a measurement of the direction of dynamic acceleration of the electronic device based on the electronic signal received from the motion sensor affixed to the electronic device, wherein the determination of whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration is also based on the obtained measurement of the direction of dynamic acceleration of the electronic device. 5. The system of claim 2, wherein the computer-executable instructions, when executed by the at least one processor, further cause the at least one processor to: obtain a measurement of a length of time of a period of dynamic acceleration of the electronic device based on the electronic signal received from the motion sensor affixed to the electronic device, wherein the determination of whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration is also based on the obtained measurement of the length of time of the period of dynamic acceleration of the electronic device. 6. 
The system of claim 2, wherein the computer-executable instructions, when executed by the at least one processor, further cause the at least one processor to: determine whether the electronic device is in a mode to accept user input, wherein the determination of whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration is also based on the determination whether the electronic device is in a mode to accept user input, such that if a determination was made by the at least one processor that the electronic device is not in a mode to accept user input, the at least one processor determines that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is not indicative of user frustration. 7. The system of claim 1, wherein the motion sensor is an accelerometer affixed to the electronic device and the electronic signal from the motion sensor that is indicative of the electronic device being jarred by a user is a continuous voltage that is proportional to acceleration of the accelerometer affixed to the electronic device. 8. The system of claim 1, wherein the motion sensor is an accelerometer affixed to the electronic device and has a digital interface that is either an Inter-Integrated Circuit serial computer bus (I2C) or a Serial Peripheral Interface (SPI) bus. 9. The system of claim 1, wherein the motion sensor affixed to the electronic device is located inside the electronic device. 10. The system of claim 1, wherein the motion sensor affixed to the electronic device is affixed to an outside of a housing of the electronic device. 11. 
The system of claim 1, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to automatically perform the electronic action to address the user frustration based on the determination whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration, by at least causing the at least one processor to: cause the electronic device to perform a diagnostic action in response to a determination by the at least one processor that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration. 12. The system of claim 11, wherein the diagnostic action includes one or more of: placing an automated service call to a manufacturer or service provider for the electronic device; contacting a remote server of a manufacturer or service provider for the electronic device; sending diagnostic data to a remote server of a manufacturer or service provider for the electronic device; monitoring audio or video signals input to or output from the electronic device; performing one or more tests regarding audio or video signals input to or output from the electronic device; checking configuration of the electronic device; testing operations of the electronic device; detecting a failure or fail condition of the electronic device; capturing video, audio, electronic, infrared (IR) or radio frequency (RF) information input to or output from the electronic device; varying AC voltages to the electronic device; detecting loss or fade of signal input to or output from the electronic device; electronically checking data regarding current weather or natural disaster issues that may affect signal quality or performance of the electronic device; communicating with a remote monitoring system to determine existence of a systemic problem related to a 
plurality of electronic devices within a particular geographical region; and determining current operating states, conditions, messages or functionalities of the electronic device. 13. The system of claim 11, wherein the computer-executable instructions, when executed by the at least one processor, further cause the at least one processor to: cause the electronic device to perform a corrective action based on the diagnostic action performed in response to the determination by the at least one processor that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration. 14. The system of claim 11, wherein the computer-executable instructions, when executed by the at least one processor, further cause the at least one processor to: cause the electronic device to electronically provide a notification to the user based on the diagnostic action performed in response to the determination by the at least one processor that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration. 15. 
The system of claim 14, wherein the notification to the user includes one or more of: data representing troubleshooting options for the user regarding the electronic device; help menu options for the user regarding the electronic device; contact information for technical support regarding the electronic device; directions regarding technical support for the electronic device; a link to data regarding technical support for the electronic device; results of the diagnostic action performed in response to the determination by the at least one processor that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration and instructions to the user on how to take a corrective action based on the diagnostic action performed in response to the determination by the at least one processor that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration. 16. The system of claim 14, wherein the diagnostic action includes the computer-executable instructions, when executed, causing the at least one processor to communicate with a remote monitoring system to determine whether a cause of the user frustration is a systemic problem related to a plurality of electronic devices within a particular geographical region. 17. 
The system of claim 1 further comprising: a microphone coupled to the at least one processor and the at least one memory, wherein the computer-executable instructions, when executed by the at least one processor, further cause the at least one processor to: receive an electronic signal from the microphone that is indicative of the electronic device being physically hit by a user; and determine whether the electronic signal from the microphone includes audio characteristics of a sound indicative of the electronic device being physically hit by a user, wherein the determination whether the electronic signal received from the motion sensor is indicative of user frustration is based on the electronic signal received from the motion sensor and a determination that the electronic signal from the microphone includes audio characteristics of a sound indicative of the electronic device being physically hit by the user. 18. The system of claim 1 wherein the electronic device is a receiving device, a set-top box, a remote-control device, a media player, a television, a monitor, a user input device, a keyboard or a steering wheel. 19. 
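Claim 17 combines two signals: the motion-sensor determination and a concurrent microphone determination that the sound has the characteristics of a physical hit. A sketch of that combined test, where the threshold values and the use of a single normalized audio level as the "impact" characteristic are illustrative assumptions:

```python
# Illustrative combined motion + audio frustration test (claim 17):
# frustration is inferred only when the motion signal crosses its
# threshold AND the concurrent microphone level looks like an impact.

def frustration_with_audio(accel, audio_level,
                           accel_threshold=2.5, audio_threshold=0.7):
    """accel: peak dynamic-acceleration magnitude for the event.
    audio_level: normalized peak microphone level at the same time."""
    jarred = accel > accel_threshold
    hit_sound = audio_level > audio_threshold
    return jarred and hit_sound
```

Requiring both signals suppresses false positives from, say, a table bump that moves the device without the sharp transient of a hand striking it.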
A method in a system for detecting user frustration with an electronic device, the method comprising: determining, by at least one computer processor, whether a signal received from a motion sensor of the electronic device is indicative of user frustration based on an amount of dynamic acceleration of the electronic device indicated by the signal received from the motion sensor of the electronic device; and in response to the determination by the at least one computer processor whether the signal received from the motion sensor of the electronic device is indicative of user frustration based on an amount of dynamic acceleration of the electronic device indicated by the signal received from the motion sensor of the electronic device, communicating, by the at least one computer processor, information to the user regarding the determination whether the signal received from a motion sensor of the electronic device is indicative of user frustration. 20. The method of claim 19 wherein the determination whether the signal received from the motion sensor of the electronic device is indicative of user frustration is also based on a measured level of an audio signal received from a microphone of the electronic device at a same time the signal from the motion sensor of the electronic device is received. 21. The method of claim 20 wherein the signal received from the motion sensor of the electronic device and the audio signal received from the microphone of the electronic device, based on which the determination is made whether the signal received from the motion sensor of the electronic device is indicative of user frustration, is received by a monitoring server located remotely from the electronic device and is in communication with the electronic device over a communications network. 22. 
A non-transitory computer-readable storage medium having computer executable instructions thereon, that when executed by a computer processor, cause the following method for detecting user frustration with an electronic device to be performed: electronically monitoring one or more motion sensors of the electronic device; and based on the electronically monitoring one or more motion sensors of the electronic device: determining whether a measurement of an amount of dynamic acceleration of the electronic device exceeds a first threshold; and determining whether a measurement of a length of time of a period of the dynamic acceleration of the electronic device exceeds a second threshold; and in response to a determination that both the measurement of the amount of dynamic acceleration of the electronic device exceeds the first threshold and that the measurement of the length of time of the period of the dynamic acceleration of the electronic device exceeds the second threshold, output a signal associated with occurrence of user frustration. 23. The non-transitory computer-readable storage medium of claim 22 wherein the electronically monitoring the one or more motion sensors of the electronic device occurs in response to the electronic device outputting a graphical user interface. 24. The non-transitory computer-readable storage medium of claim 22 wherein the electronically monitoring of the one or more motion sensors of the electronic device occurs in response to the electronic device outputting video or audio. 25. The non-transitory computer-readable storage medium of claim 22 wherein the electronic device is a remote-control of a receiving device and the signal associated with occurrence of user frustration is communicated from the remote-control to the receiving device to cause the receiving device to take an action to address the user frustration.
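Claim 22's two-threshold test, in which frustration is signaled only when both the magnitude of dynamic acceleration and the length of the acceleration period exceed their thresholds, can be sketched as below. The sample representation, units, and threshold values are assumptions for illustration:

```python
# Illustrative two-threshold frustration test (claim 22): the event
# must exceed both a magnitude threshold and a duration threshold.

def is_frustration(samples, accel_threshold=2.5, min_duration=3):
    """samples: sequence of dynamic-acceleration magnitudes, one per
    sensor tick. Returns True if at least `min_duration` consecutive
    samples all exceed `accel_threshold`."""
    run = 0
    for a in samples:
        run = run + 1 if a > accel_threshold else 0
        if run >= min_duration:
            return True
    return False
```

The duration requirement distinguishes a sustained shake or repeated hits from a single brief jolt, which either threshold alone would misclassify.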
The system of claim 2, wherein the computer-executable instructions, when executed by the at least one processor, further cause the at least one processor to: determine whether the electronic device is in a mode to accept user input, wherein the determination of whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration is also based on the determination whether the electronic device is in a mode to accept user input, such that if a determination was made by the at least one processor that the electronic device is not in a mode to accept user input, the at least one processor determines that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is not indicative of user frustration. 7. The system of claim 1, wherein the motion sensor is an accelerometer affixed to the electronic device and the electronic signal from the motion sensor that is indicative of the electronic device being jarred by a user is a continuous voltage that is proportional to acceleration of the accelerometer affixed to the electronic device. 8. The system of claim 1, wherein the motion sensor is an accelerometer affixed to the electronic device and has a digital interface that is either an Inter-Integrated Circuit serial computer bus (I2C) or a Serial Peripheral Interface (SPI) bus. 9. The system of claim 1, wherein the motion sensor affixed to the electronic device is located inside the electronic device. 10. The system of claim 1, wherein the motion sensor affixed to the electronic device is affixed to an outside of a housing of the electronic device. 11. 
The system of claim 1, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to automatically perform the electronic action to address the user frustration based on the determination whether the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration, by at least causing the at least one processor to: cause the electronic device to perform a diagnostic action in response to a determination by the at least one processor that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration. 12. The system of claim 11, wherein the diagnostic action includes one or more of: placing an automated service call to a manufacturer or service provider for the electronic device; contacting a remote server of a manufacturer or service provider for the electronic device; sending diagnostic data to a remote server of a manufacturer or service provider for the electronic device; monitoring audio or video signals input to or output from the electronic device; performing one or more tests regarding audio or video signals input to or output from the electronic device; checking configuration of the electronic device; testing operations of the electronic device; detecting a failure or fail condition of the electronic device; capturing video, audio, electronic, infrared (IR) or radio frequency (RF) information input to or output from the electronic device; varying AC voltages to the electronic device; detecting loss or fade of signal input to or output from the electronic device; electronically checking data regarding current weather or natural disaster issues that may affect signal quality or performance of the electronic device; communicating with a remote monitoring system to determine existence of a systemic problem related to a 
plurality of electronic devices within a particular geographical region; and determining current operating states, conditions, messages or functionalities of the electronic device. 13. The system of claim 11, wherein the computer-executable instructions, when executed by the at least one processor, further cause the at least one processor to: cause the electronic device to perform a corrective action based on the diagnostic action performed in response to the determination by the at least one processor that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration. 14. The system of claim 11, wherein the computer-executable instructions, when executed by the at least one processor, further cause the at least one processor to: cause the electronic device to electronically provide a notification to the user based on the diagnostic action performed in response to the determination by the at least one processor that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration. 15. 
The system of claim 14, wherein the notification to the user includes one or more of: data representing troubleshooting options for the user regarding the electronic device; help menu options for the user regarding the electronic device; contact information for technical support regarding the electronic device; directions regarding technical support for the electronic device; a link to data regarding technical support for the electronic device; results of the diagnostic action performed in response to the determination by the at least one processor that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration and instructions to the user on how to take a corrective action based on the diagnostic action performed in response to the determination by the at least one processor that the electronic signal received from the motion sensor that is indicative of the electronic device being jarred by a user is indicative of user frustration. 16. The system of claim 14, wherein the diagnostic action includes the computer-executable instructions, when executed, causing the at least one processor to communicate with a remote monitoring system to determine whether a cause of the user frustration is a systemic problem related to a plurality of electronic devices within a particular geographical region. 17. 
The system of claim 1 further comprising: a microphone coupled to the at least one processor and the at least one memory, wherein the computer-executable instructions, when executed by the at least one processor, further cause the at least one processor to: receive an electronic signal from the microphone that is indicative of the electronic device being physically hit by a user; and determine whether the electronic signal from the microphone includes audio characteristics of a sound indicative of the electronic device being physically hit by a user, wherein the determination whether the electronic signal received from the motion sensor is indicative of user frustration is based on the electronic signal received from the motion sensor and a determination that the electronic signal from the microphone includes audio characteristics of a sound indicative of the electronic device being physically hit by the user. 18. The system of claim 1 wherein the electronic device is a receiving device, a set-top box, a remote-control device, a media player, a television, a monitor, a user input device, a keyboard or a steering wheel. 19. 
A method in a system for detecting user frustration with an electronic device, the method comprising: determining, by at least one computer processor, whether a signal received from a motion sensor of the electronic device is indicative of user frustration based on an amount of dynamic acceleration of the electronic device indicated by the signal received from the motion sensor of the electronic device; and in response to the determination by the at least one computer processor whether the signal received from the motion sensor of the electronic device is indicative of user frustration based on an amount of dynamic acceleration of the electronic device indicated by the signal received from the motion sensor of the electronic device, communicating, by the at least one computer processor, information to the user regarding the determination whether the signal received from a motion sensor of the electronic device is indicative of user frustration. 20. The method of claim 19 wherein the determination whether the signal received from the motion sensor of the electronic device is indicative of user frustration is also based on a measured level of an audio signal received from a microphone of the electronic device at a same time the signal from the motion sensor of the electronic device is received. 21. The method of claim 20 wherein the signal received from the motion sensor of the electronic device and the audio signal received from the microphone of the electronic device, based on which the determination is made whether the signal received from the motion sensor of the electronic device is indicative of user frustration, is received by a monitoring server located remotely from the electronic device and is in communication with the electronic device over a communications network. 22. 
A non-transitory computer-readable storage medium having computer executable instructions thereon, that when executed by a computer processor, cause the following method for detecting user frustration with an electronic device to be performed: electronically monitoring one or more motion sensors of the electronic device; and based on the electronically monitoring one or more motion sensors of the electronic device: determining whether a measurement of an amount of dynamic acceleration of the electronic device exceeds a first threshold; and determining whether a measurement of a length of time of a period of the dynamic acceleration of the electronic device exceeds a second threshold; and in response to a determination that both the measurement of the amount of dynamic acceleration of the electronic device exceeds the first threshold and that the measurement of the length of time of the period of the dynamic acceleration of the electronic device exceeds the second threshold, output a signal associated with occurrence of user frustration. 23. The non-transitory computer-readable storage medium of claim 22 wherein the electronically monitoring the one or more motion sensors of the electronic device occurs in response to the electronic device outputting a graphical user interface. 24. The non-transitory computer-readable storage medium of claim 22 wherein the electronically monitoring of the one or more motion sensors of the electronic device occurs in response to the electronic device outputting video or audio. 25. The non-transitory computer-readable storage medium of claim 22 wherein the electronic device is a remote-control of a receiving device and the signal associated with occurrence of user frustration is communicated from the remote-control to the receiving device to cause the receiving device to take an action to address the user frustration.
2,600
10,182
10,182
15,154,409
2,672
Methods and systems are described for storing content that match topics of interest selected by a user or an automated process. Audio information associated with the content can be extracted, parsed, and grouped into topics. Incoming content with audio information that matches the topics of interest selected can be stored and made available to the user for later playback.
1. A method comprising: receiving content comprising a text component; determining, based on parsing the text component, one or more topics associated with the content; determining that at least one topic matches a topic of interest to a user; determining, based on the at least one topic, at least a portion of the content relevant to the topic of interest; causing the portion of the content relevant to the topic of interest to be stored; and sending, to a user device associated with the user, an indication that the portion of the content relevant to the topic of interest is stored. 2. The method of claim 1, further comprising buffering the content, wherein causing the portion of the content relevant to the topic of interest to be stored comprises causing the buffered content to be stored as part of the portion of the content. 3. The method of claim 1, further comprising receiving, from the user device, input and determining the topic of interest based on the input. 4. The method of claim 3, wherein determining the topic of interest based on the input comprises suggesting the topic of interest based on the input and user information and receiving, from the user device, a selection of the topic of interest. 5. The method of claim 1, wherein a portion of the text component relevant to the matched topic is associated with a time range of the content, wherein determining, based on the at least one topic, at least the portion of the content relevant to the topic of interest comprises determining the portion of the content based on an association of the at least one topic with the time range. 6. The method of claim 1, further comprising determining a first time in the stored content associated with the matched topic, wherein determining, based on the at least one topic, at least the portion of the content relevant to the topic of interest is based on the first time. 7. 
The method of claim 6, wherein causing the portion of the content relevant to the topic of interest to be stored comprises causing the storing of the portion of the content with a start time of the portion of the content being a predetermined time prior to the first time. 8. The method of claim 6, wherein causing the portion of the content relevant to the topic of interest to be stored comprises causing the stored portion of the content to have an end time at a predetermined time after the first time. 9. The method of claim 6, further comprising: monitoring the text component of the content for additional references to the matched topic; updating the first time for each instance of additional references to the matched topic occurring in the content; and causing the stored portion of the content to end at a predetermined time after the updated first time. 10. The method of claim 1, wherein the text component comprises closed captions, and wherein determining, based on parsing the text component, the one or more topics associated with the content comprises parsing the closed captions. 11. The method of claim 1, wherein the content comprises a content stream associated with a linear content channel. 12. 
A system, comprising: a first one or more computing devices configured for, receiving content comprising a text component, and determining, based on parsing the text component, one or more topics associated with the content; a second one or more computing devices configured for, receiving the one or more topics from the first one or more computing devices, determining that at least one topic matches a topic of interest to a user, determining, based on the at least one topic, at least a portion of the content relevant to the topic of interest, and sending an instruction to store the portion of the content relevant to the topic of interest; and a storage device configured for, receiving the instruction from the second one or more computing devices, and storing the portion of the content in response to receiving the instruction. 13. The system of claim 12, wherein the storage device is further configured for buffering the content, wherein storing the portion of the content comprises causing the buffered content to be stored as part of the portion of the content. 14. The system of claim 12, wherein the first one or more computing devices or the second one or more computing devices are configured for receiving, from a user device, input and determining the topic of interest based on the input. 15. The system of claim 14, wherein the determining the topic of interest based on the input comprises: suggesting the topic of interest based on the input and user information; and receiving, from the user device, a selection of the topic of interest. 16. The system of claim 12, wherein a portion of the text component relevant to the matched topic is associated with a time range of the content, wherein determining, based on the at least one topic, at least the portion of the content relevant to the topic of interest comprises determining the portion of the content based on an association of the at least one topic with the time range. 17. 
The system of claim 12, wherein the second one or more computing devices are further configured for determining a first time in the stored content associated with the matched topic, wherein determining, based on the at least one topic, at least the portion of the content relevant to the topic of interest is based on the first time. 18. The system of claim 17, wherein storing the portion of the content comprises causing the storing the portion of the content with a start time of the portion of the content being a predetermined time prior to the first time. 19. The system of claim 12, wherein the text component comprises closed captions, and wherein determining, based on parsing the text component, the one or more topics associated with the content comprises parsing the closed captions. 20. A method comprising: receiving, at a first time, a message comprising a topic and a reference to content; determining that the topic matches a topic of interest to at least one user; determining, based on the topic matching the topic of interest, at least a portion of the content, beginning at a predetermined time prior to the first time, relevant to the topic of interest; causing the portion of the content to be stored; and sending, to at least one user device associated with the at least one user, an indication that the portion of the content is stored.
Methods and systems are described for storing content that match topics of interest selected by a user or an automated process. Audio information associated with the content can be extracted, parsed, and grouped into topics. Incoming content with audio information that matches the topics of interest selected can be stored and made available to the user for later playback.1. A method comprising: receiving content comprising a text component; determining, based on parsing the text component, one or more topics associated with the content; determining that at least one topic matches a topic of interest to a user; determining, based on the at least one topic, at least a portion of the content relevant to the topic of interest; causing the portion of the content relevant to the topic of interest to be stored; and sending, to a user device associated with the user, an indication that the portion of the content relevant to the topic of interest is stored. 2. The method of claim 1, further comprising buffering the content, wherein causing the portion of the content relevant to the topic of interest to be stored comprises causing the buffered content to be stored as part of the portion of the content. 3. The method of claim 1, further comprising receiving, from the user device, input and determining the topic of interest based on the input. 4. The method of claim 3, wherein determining the topic of interest based on the input comprises suggesting the topic of interest based on the input and user information and receiving, from the user device, a selection of the topic of interest. 5. The method of claim 1, wherein a portion of the text component relevant to the matched topic is associated with a time range of the content, wherein determining, based on the at least one topic, at least the portion of the content relevant to the topic of interest comprises determining the portion of the content based on an association of the at least one topic with the time range. 6. 
The method of claim 1, further comprising determining a first time in the stored content associated with the matched topic, wherein determining, based on the at least one topic, at least the portion of the content relevant to the topic of interest is based on the first time. 7. The method of claim 6, wherein causing the portion of the content relevant to the topic of interest to be stored comprises causing the storing of the portion of the content with a start time of the portion of the content being a predetermined time prior to the first time. 8. The method of claim 6, wherein causing the portion of the content relevant to the topic of interest to be stored comprises causing the stored portion of the content to have an end time at a predetermined time after the first time. 9. The method of claim 6, further comprising: monitoring the text component of the content for additional references to the matched topic; updating the first time for each instance of additional references to the matched topic occurring in the content; and causing the stored portion of the content to end at a predetermined time after the updated first time. 10. The method of claim 1, wherein the text component comprises closed captions, and wherein determining, based on parsing the text component, the one or more topics associated with the content comprises parsing the closed captions. 11. The method of claim 1, wherein the content comprises a content stream associated with a linear content channel. 12. 
A system, comprising: a first one or more computing devices configured for, receiving content comprising a text component, and determining, based on parsing the text component, one or more topics associated with the content; a second one or more computing devices configured for, receiving the one or more topics from the first one or more computing devices, determining that at least one topic matches a topic of interest to a user, determining, based on the at least one topic, at least a portion of the content relevant to the topic of interest, and sending an instruction to store the portion of the content relevant to the topic of interest; and a storage device configured for, receiving the instruction from the second one or more computing devices, and storing the portion of the content in response to receiving the instruction. 13. The system of claim 12, wherein the storage device is further configured for buffering the content, wherein storing the portion of the content comprises causing the buffered content to be stored as part of the portion of the content. 14. The system of claim 12, wherein the first one or more computing devices or the second one or more computing devices are configured for receiving, from a user device, input and determining the topic of interest based on the input. 15. The system of claim 14, wherein the determining the topic of interest based on the input comprises: suggesting the topic of interest based on the input and user information; and receiving, from the user device, a selection of the topic of interest. 16. The system of claim 12, wherein a portion of the text component relevant to the matched topic is associated with a time range of the content, wherein determining, based on the at least one topic, at least the portion of the content relevant to the topic of interest comprises determining the portion of the content based on an association of the at least one topic with the time range. 17. 
The system of claim 12, wherein the second one or more computing devices are further configured for determining a first time in the stored content associated with the matched topic, wherein determining, based on the at least one topic, at least the portion of the content relevant to the topic of interest is based on the first time. 18. The system of claim 17, wherein storing the portion of the content comprises causing the storing the portion of the content with a start time of the portion of the content being a predetermined time prior to the first time. 19. The system of claim 12, wherein the text component comprises closed captions, and wherein determining, based on parsing the text component, the one or more topics associated with the content comprises parsing the closed captions. 20. A method comprising: receiving, at a first time, a message comprising a topic and a reference to content; determining that the topic matches a topic of interest to at least one user; determining, based on the topic matching the topic of interest, at least a portion of the content, beginning at a predetermined time prior to the first time, relevant to the topic of interest; causing the portion of the content to be stored; and sending, to at least one user device associated with the at least one user, an indication that the portion of the content is stored.
2,600
10,183
10,183
15,231,228
2,677
Methods and apparatuses for detecting user speech are described. In one example, a method for detecting user speech includes receiving a microphone output signal corresponding to sound received at a microphone and identifying a spoken vowel sound in the microphone signal. The method further includes outputting an indication of user speech detection responsive to identifying the spoken vowel sound.
1. A method for detecting user speech comprising: receiving a microphone output signal corresponding to sound received at a microphone; converting the microphone output signal to a digital audio signal; identifying a spoken vowel sound in the sound received at the microphone from the digital audio signal; and outputting an indication of user speech detection responsive to identifying the spoken vowel sound. 2. The method of claim 1, further comprising filtering out a low frequency stationary noise below 300 Hz present in the sound. 3. The method of claim 2, wherein the stationary noise comprises heating, ventilation, and air conditioning (HVAC) noise. 4. The method of claim 1, further comprising: outputting a stationary noise comprising a sound masking noise in an open space, wherein the microphone is disposed in proximity to a ceiling area of the open space and the sound masking noise is present in the sound received at the microphone, wherein identifying the spoken vowel sound is immune to the presence of the sound masking noise. 5. The method of claim 1, wherein identifying the spoken vowel sound in the sound received at the microphone from the digital audio signal comprises detecting harmonic frequency signal components. 6. The method of claim 5, wherein the harmonic frequency signal components comprise energy in a plurality of higher frequency harmonics. 7. The method of claim 1, wherein identifying the spoken vowel sound in the sound received at the microphone from the digital audio signal comprises finding a circular autocorrelation of the absolute value of a short time hamming windowed audio spectrum. 8. The method of claim 7, further comprising reducing the impact of stationary noise by applying a non-linear median filter to a result of the circular autocorrelation of the absolute value of a short time hamming windowed audio spectrum. 9. 
A system comprising: a microphone arranged to detect sound in an open space; a speech detection system comprising: a first module configured to convert the sound received at the microphone to a digital audio signal; and a second module configured to identify a spoken vowel sound in the sound received at the microphone from the digital audio signal and output an indication of user speech responsive to identifying the spoken vowel sound; and a sound masking system configured to receive the indication of user speech detection from the speech detection system and output or adjust a sound masking noise into the open space responsive to the indication of user speech. 10. The system of claim 9, wherein the sound received at the microphone comprises the sound masking noise output from the sound masking system and the second module is further configured to identify the spoken vowel sound with immunity to the presence of the sound masking noise. 11. The system of claim 9, wherein the sound received at the microphone comprises a stationary noise and the second module is further configured to operate to identify the spoken vowel sound with immunity to the presence of the stationary noise. 12. The system of claim 11, wherein the stationary noise comprises heating, ventilation, and air conditioning (HVAC) noise. 13. The system of claim 9, wherein the second module is configured to detect harmonic frequency signal components to identify the spoken vowel sound. 14. The system of claim 13, wherein the harmonic frequency signal components comprise energy in a plurality of higher frequency harmonics. 15. The system of claim 9, wherein the second module is configured to find a circular autocorrelation of the absolute value of a short time hamming windowed audio spectrum to identify the spoken vowel sound. 16. 
The system of claim 15, wherein the second module is further configured to reduce the impact of stationary noise by applying a non-linear median filter to a result of the circular autocorrelation of the absolute value of a short-time Hamming-windowed audio spectrum. 17. One or more non-transitory computer-readable storage media having computer-executable instructions stored thereon which, when executed by one or more computers, cause the one or more computers to perform operations comprising: receiving a microphone output signal corresponding to sound received at a microphone; converting the microphone output signal to a digital audio signal; identifying a spoken vowel sound in the sound received at the microphone from the digital audio signal; and outputting an indication of user speech detection responsive to identifying the spoken vowel sound. 18. The one or more non-transitory computer-readable storage media of claim 17, wherein the operations further comprise: outputting a stationary noise comprising a sound masking noise in an open space, wherein the microphone is disposed in proximity to a ceiling area of the open space and the sound masking noise is present in the sound received at the microphone; and identifying the spoken vowel sound with immunity to the sound masking noise present in the sound received at the microphone. 19. The one or more non-transitory computer-readable storage media of claim 17, wherein identifying the spoken vowel sound in the sound received at the microphone from the digital audio signal comprises detecting harmonic frequency signal components. 20. The one or more non-transitory computer-readable storage media of claim 17, wherein identifying the spoken vowel sound in the sound received at the microphone from the digital audio signal comprises finding a circular autocorrelation of the absolute value of a short-time Hamming-windowed audio spectrum.
Methods and apparatuses for detecting user speech are described. In one example, a method for detecting user speech includes receiving a microphone output signal corresponding to sound received at a microphone and identifying a spoken vowel sound in the microphone signal. The method further includes outputting an indication of user speech detection responsive to identifying the spoken vowel sound.1. A method for detecting user speech comprising: receiving a microphone output signal corresponding to sound received at a microphone; converting the microphone output signal to a digital audio signal; identifying a spoken vowel sound in the sound received at the microphone from the digital audio signal; and outputting an indication of user speech detection responsive to identifying the spoken vowel sound. 2. The method of claim 1, further comprising filtering out a low frequency stationary noise below 300 Hz present in the sound. 3. The method of claim 2, wherein the stationary noise comprises heating, ventilation, and air conditioning (HVAC) noise. 4. The method of claim 1, further comprising: outputting a stationary noise comprising a sound masking noise in an open space, wherein the microphone is disposed in proximity to a ceiling area of the open space and the sound masking noise is present in the sound received at the microphone, wherein identifying the spoken vowel sound is immune to the presence of the sound masking noise. 5. The method of claim 1, wherein identifying the spoken vowel sound in the sound received at the microphone from the digital audio signal comprises detecting harmonic frequency signal components. 6. The method of claim 5, wherein the harmonic frequency signal components comprise energy in a plurality of higher frequency harmonics. 7. 
The method of claim 1, wherein identifying the spoken vowel sound in the sound received at the microphone from the digital audio signal comprises finding a circular autocorrelation of the absolute value of a short-time Hamming-windowed audio spectrum. 8. The method of claim 7, further comprising reducing the impact of stationary noise by applying a non-linear median filter to a result of the circular autocorrelation of the absolute value of a short-time Hamming-windowed audio spectrum. 9. A system comprising: a microphone arranged to detect sound in an open space; a speech detection system comprising: a first module configured to convert the sound received at the microphone to a digital audio signal; and a second module configured to identify a spoken vowel sound in the sound received at the microphone from the digital audio signal and output an indication of user speech responsive to identifying the spoken vowel sound; and a sound masking system configured to receive the indication of user speech detection from the speech detection system and output or adjust a sound masking noise into the open space responsive to the indication of user speech. 10. The system of claim 9, wherein the sound received at the microphone comprises the sound masking noise output from the sound masking system and the second module is further configured to identify the spoken vowel sound with immunity to the presence of the sound masking noise. 11. The system of claim 9, wherein the sound received at the microphone comprises a stationary noise and the second module is further configured to operate to identify the spoken vowel sound with immunity to the presence of the stationary noise. 12. The system of claim 11, wherein the stationary noise comprises heating, ventilation, and air conditioning (HVAC) noise. 13. The system of claim 9, wherein the second module is configured to detect harmonic frequency signal components to identify the spoken vowel sound. 14. 
The system of claim 13, wherein the harmonic frequency signal components comprise energy in a plurality of higher frequency harmonics. 15. The system of claim 9, wherein the second module is configured to find a circular autocorrelation of the absolute value of a short-time Hamming-windowed audio spectrum to identify the spoken vowel sound. 16. The system of claim 15, wherein the second module is further configured to reduce the impact of stationary noise by applying a non-linear median filter to a result of the circular autocorrelation of the absolute value of a short-time Hamming-windowed audio spectrum. 17. One or more non-transitory computer-readable storage media having computer-executable instructions stored thereon which, when executed by one or more computers, cause the one or more computers to perform operations comprising: receiving a microphone output signal corresponding to sound received at a microphone; converting the microphone output signal to a digital audio signal; identifying a spoken vowel sound in the sound received at the microphone from the digital audio signal; and outputting an indication of user speech detection responsive to identifying the spoken vowel sound. 18. The one or more non-transitory computer-readable storage media of claim 17, wherein the operations further comprise: outputting a stationary noise comprising a sound masking noise in an open space, wherein the microphone is disposed in proximity to a ceiling area of the open space and the sound masking noise is present in the sound received at the microphone; and identifying the spoken vowel sound with immunity to the sound masking noise present in the sound received at the microphone. 19. The one or more non-transitory computer-readable storage media of claim 17, wherein identifying the spoken vowel sound in the sound received at the microphone from the digital audio signal comprises detecting harmonic frequency signal components. 20. 
The one or more non-transitory computer-readable storage media of claim 17, wherein identifying the spoken vowel sound in the sound received at the microphone from the digital audio signal comprises finding a circular autocorrelation of the absolute value of a short-time Hamming-windowed audio spectrum.
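The claims above repeatedly rely on one technique: identifying a spoken vowel from the circular autocorrelation of the absolute value of a short-time Hamming-windowed audio spectrum, since a vowel's harmonics are evenly spaced by the pitch and produce a strong autocorrelation peak at that spacing. The following NumPy sketch is an illustrative reading of that step, not the patented implementation; the frame size, pitch range, and threshold are assumptions, and the claimed non-linear median filtering of the autocorrelation is only noted in a comment.

```python
import numpy as np

def detect_vowel_frame(frame, sr, f0_range=(85.0, 300.0), peak_ratio=0.3):
    """Flag a probable spoken vowel in one audio frame.

    Steps per the claims: magnitude of a short-time Hamming-windowed
    spectrum, circular autocorrelation of that magnitude, then a peak
    search at lags corresponding to plausible voice pitch (harmonic
    spacing). peak_ratio is an illustrative threshold. A non-linear
    median filter could additionally be applied to `ac` to suppress
    stationary noise, as in the dependent claims.
    """
    n = len(frame)
    spec = np.abs(np.fft.fft(np.asarray(frame, dtype=float) * np.hamming(n)))
    spec -= spec.mean()  # remove the DC offset of the magnitude spectrum
    # Circular autocorrelation via the Wiener-Khinchin relation:
    # IFFT of the squared magnitude of the FFT of the sequence.
    ac = np.fft.ifft(np.abs(np.fft.fft(spec)) ** 2).real
    ac /= ac[0] + 1e-12  # normalize by zero-lag energy
    # Harmonics are spaced f0 apart, so the spectrum's autocorrelation
    # peaks at a lag of f0 expressed in frequency bins.
    bin_hz = sr / n
    lo = max(1, int(f0_range[0] / bin_hz))
    hi = min(n // 2, int(f0_range[1] / bin_hz))
    return bool(ac[lo:hi].max() > peak_ratio)
```

For a harmonic (vowel-like) frame the normalized peak is large, while broadband stationary noise such as HVAC rumble or sound masking yields no such peak, which is consistent with the claimed immunity to stationary noise.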
2,600
10,184
10,184
13,863,840
2,613
Implementations provide methods including actions of processing patient data to generate one or more graphical representations of the patient data, at least one graphical representation of the one or more graphical representations including a waveform, displaying at least one waveform segment of the waveform, and displaying calipers associated with the at least one waveform segment, each caliper being associated with an interval, where displaying the calipers includes, for each caliper: receiving a measurement value of the interval associated with the caliper, determining respective positions of a first handle and a second handle of the caliper based on the measurement, and displaying the first handle and the second handle in the respective positions relative to the at least one waveform segment.
1. A computer-implemented method executed using one or more processors, the method comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying, by a display of a computing device, at least one waveform segment of the waveform; and displaying, by the display of a computing device, a plurality of calipers associated with the at least one waveform segment, each caliper of the plurality of calipers being associated with an interval of the at least one waveform segment, wherein displaying the plurality of calipers comprises, for each caliper: receiving a measurement value of the interval associated with the caliper; determining, relative to the at least one waveform segment, respective positions of a first handle and a second handle of the caliper based on the measurement; and displaying the first handle and the second handle in the respective positions relative to the at least one waveform segment. 2. The method of claim 1, further comprising displaying the measurement value proximate to the caliper. 3. The method of claim 1, further comprising: receiving user input associated with a handle of a first caliper, the user input indicating movement of the handle from a first position to a second position; and determining an updated value for a first measurement value associated with the first caliper based on the second position. 4. The method of claim 3, further comprising: moving a handle of a second caliper in response to movement of the handle of the first caliper; and determining an updated value for a second measurement value associated with the second caliper based on the second position. 5. 
The method of claim 1, further comprising receiving, by the one or more processors, the patient data. 6. A computer-readable storage device coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying, by a display of a computing device, at least one waveform segment of the waveform; and displaying, by the display of a computing device, a plurality of calipers associated with the at least one waveform segment, each caliper of the plurality of calipers being associated with an interval of the at least one waveform segment, wherein displaying the plurality of calipers comprises, for each caliper: receiving a measurement value of the interval associated with the caliper; determining, relative to the at least one waveform segment, respective positions of a first handle and a second handle of the caliper based on the measurement; and displaying the first handle and the second handle in the respective positions relative to the at least one waveform segment. 7. 
A system, comprising: one or more processors; and a computer-readable storage medium in communication with the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying, by a display of a computing device, at least one waveform segment of the waveform; and displaying, by the display of a computing device, a plurality of calipers associated with the at least one waveform segment, each caliper of the plurality of calipers being associated with an interval of the at least one waveform segment, wherein displaying the plurality of calipers comprises, for each caliper: receiving a measurement value of the interval associated with the caliper; determining, relative to the at least one waveform segment, respective positions of a first handle and a second handle of the caliper based on the measurement; and displaying the first handle and the second handle in the respective positions relative to the at least one waveform segment. 8. 
A computer-implemented method executed using one or more processors, the method comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying a first waveform segment of the waveform in a primary layer, the first waveform segment being associated with a first time period; displaying a second waveform segment of the waveform in a first secondary layer, the second waveform segment being associated with a second time period; and displaying a third waveform segment of the waveform in a second secondary layer, the third waveform segment being associated with a third time period. 9. The method of claim 8, further comprising: receiving user input; and in response to the user input, scrolling the first, second and third waveform segments through the primary layer and the first and second secondary layers, such that the first waveform segment is displayed in the second secondary layer, the second waveform segment is displayed in a third secondary layer and the third waveform segment is displayed in the primary layer. 10. The method of claim 8, wherein the second time period is earlier in time than the first time period. 11. The method of claim 8, wherein the second time period is later in time than the first time period. 12. The method of claim 8, wherein the third time period is earlier in time than the second time period. 13. The method of claim 8, wherein the third time period is later in time than the second time period. 14. The method of claim 8, further comprising receiving, by the one or more processors, the patient data. 15. 
A computer-readable storage device coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying a first waveform segment of the waveform in a primary layer, the first waveform segment being associated with a first time period; displaying a second waveform segment of the waveform in a first secondary layer, the second waveform segment being associated with a second time period; and displaying a third waveform segment of the waveform in a second secondary layer, the third waveform segment being associated with a third time period. 16. A system, comprising: one or more processors; and a computer-readable storage medium in communication with the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying a first waveform segment of the waveform in a primary layer, the first waveform segment being associated with a first time period; displaying a second waveform segment of the waveform in a first secondary layer, the second waveform segment being associated with a second time period; and displaying a third waveform segment of the waveform in a second secondary layer, the 
third waveform segment being associated with a third time period.
Implementations provide methods including actions of processing patient data to generate one or more graphical representations of the patient data, at least one graphical representation of the one or more graphical representations including a waveform, displaying at least one waveform segment of the waveform, and displaying calipers associated with the at least one waveform segment, each caliper being associated with an interval, where displaying the calipers includes, for each caliper: receiving a measurement value of the interval associated with the caliper, determining respective positions of a first handle and a second handle of the caliper based on the measurement, and displaying the first handle and the second handle in the respective positions relative to the at least one waveform segment.1. A computer-implemented method executed using one or more processors, the method comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying, by a display of a computing device, at least one waveform segment of the waveform; and displaying, by the display of a computing device, a plurality of calipers associated with the at least one waveform segment, each caliper of the plurality of calipers being associated with an interval of the at least one waveform segment, wherein displaying the plurality of calipers comprises, for each caliper: receiving a measurement value of the interval associated with the caliper; determining, relative to the at least one waveform segment, respective positions of a first handle and a second handle of the caliper based on the measurement; and displaying the first handle and the second handle in the respective positions relative to the at least one waveform segment. 
2. The method of claim 1, further comprising displaying the measurement value proximate to the caliper. 3. The method of claim 1, further comprising: receiving user input associated with a handle of a first caliper, the user input indicating movement of the handle from a first position to a second position; and determining an updated value for a first measurement value associated with the first caliper based on the second position. 4. The method of claim 3, further comprising: moving a handle of a second caliper in response to movement of the handle of the first caliper; and determining an updated value for a second measurement value associated with the second caliper based on the second position. 5. The method of claim 1, further comprising receiving, by the one or more processors, the patient data. 6. A computer-readable storage device coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying, by a display of a computing device, at least one waveform segment of the waveform; and displaying, by the display of a computing device, a plurality of calipers associated with the at least one waveform segment, each caliper of the plurality of calipers being associated with an interval of the at least one waveform segment, wherein displaying the plurality of calipers comprises, for each caliper: receiving a measurement value of the interval associated with the caliper; determining, relative to the at least one waveform segment, respective positions of a first handle and a second handle of the caliper based on the 
measurement; and displaying the first handle and the second handle in the respective positions relative to the at least one waveform segment. 7. A system, comprising: one or more processors; and a computer-readable storage medium in communication with the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying, by a display of a computing device, at least one waveform segment of the waveform; and displaying, by the display of a computing device, a plurality of calipers associated with the at least one waveform segment, each caliper of the plurality of calipers being associated with an interval of the at least one waveform segment, wherein displaying the plurality of calipers comprises, for each caliper: receiving a measurement value of the interval associated with the caliper; determining, relative to the at least one waveform segment, respective positions of a first handle and a second handle of the caliper based on the measurement; and displaying the first handle and the second handle in the respective positions relative to the at least one waveform segment. 8. 
A computer-implemented method executed using one or more processors, the method comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying a first waveform segment of the waveform in a primary layer, the first waveform segment being associated with a first time period; displaying a second waveform segment of the waveform in a first secondary layer, the second waveform segment being associated with a second time period; and displaying a third waveform segment of the waveform in a second secondary layer, the third waveform segment being associated with a third time period. 9. The method of claim 8, further comprising: receiving user input; and in response to the user input, scrolling the first, second and third waveform segments through the primary layer and the first and second secondary layers, such that the first waveform segment is displayed in the second secondary layer, the second waveform segment is displayed in a third secondary layer and the third waveform segment is displayed in the primary layer. 10. The method of claim 8, wherein the second time period is earlier in time than the first time period. 11. The method of claim 8, wherein the second time period is later in time than the first time period. 12. The method of claim 8, wherein the third time period is earlier in time than the second time period. 13. The method of claim 8, wherein the third time period is later in time than the second time period. 14. The method of claim 8, further comprising receiving, by the one or more processors, the patient data. 15. 
A computer-readable storage device coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying a first waveform segment of the waveform in a primary layer, the first waveform segment being associated with a first time period; displaying a second waveform segment of the waveform in a first secondary layer, the second waveform segment being associated with a second time period; and displaying a third waveform segment of the waveform in a second secondary layer, the third waveform segment being associated with a third time period. 16. A system, comprising: one or more processors; and a computer-readable storage medium in communication with the one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations comprising: processing, by the one or more processors, patient data to generate one or more graphical representations of the patient data, the patient data reflective of one or more physiological characteristics of a patient, at least one graphical representation of the one or more graphical representations comprising a waveform; displaying a first waveform segment of the waveform in a primary layer, the first waveform segment being associated with a first time period; displaying a second waveform segment of the waveform in a first secondary layer, the second waveform segment being associated with a second time period; and displaying a third waveform segment of the waveform in a second secondary layer, the 
third waveform segment being associated with a third time period.
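The caliper claims above reduce to a small geometric mapping: each caliper receives a measurement value for an interval, and the positions of its two handles are derived from that measurement relative to the displayed waveform segment; dragging a handle inverts the mapping to update the measurement. This sketch illustrates that mapping under assumed units (milliseconds and pixels); the names `Caliper`, `handle_positions`, and `updated_measurement` are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Caliper:
    start_ms: float        # where the interval begins on the waveform
    measurement_ms: float  # received measurement value of the interval

def handle_positions(caliper, px_per_ms, origin_px=0.0):
    """Compute x pixel positions for a caliper's two handles.

    Mirrors the claimed steps: given a received measurement value for
    an interval, place the first handle at the interval's start and the
    second handle one measurement-length later.
    """
    first = origin_px + caliper.start_ms * px_per_ms
    second = first + caliper.measurement_ms * px_per_ms
    return first, second

def updated_measurement(caliper, new_second_px, px_per_ms, origin_px=0.0):
    """Invert the mapping when the second handle is dragged (cf. claim 3)."""
    first = origin_px + caliper.start_ms * px_per_ms
    return (new_second_px - first) / px_per_ms
```

Linked calipers, as in claim 4, would simply call `updated_measurement` on a second caliper whose handle is moved in response to the first.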
2,600
10,185
10,185
14,802,088
2,654
A mobile device that is capable of automatically starting and ending the recording of an audio signal captured by at least one microphone is presented. The mobile device is capable of adjusting a number of parameters related to audio logging based on the context information of the audio input signal.
1-84. (canceled) 85. A method for a mobile device, the method comprising: in response to automatically detecting a start event indicator, processing a first portion of an audio input signal to obtain first information; determining at least one recording parameter based on the first information; and reconfiguring an audio capturing unit of the mobile device based on the determined at least one recording parameter. 86. (canceled) 87. The method according to claim 85, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the mobile device. 88. The method according to claim 85, wherein the at least one recording parameter includes information indicative of the number of active microphones of the mobile device. 89. The method according to claim 85, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake-up interval or active duration. 90. The method according to claim 85, wherein the first information is context information describing an environment in which the mobile device is recording. 91. The method according to claim 85, wherein the first information is context information describing a characteristic of the audio input signal. 92. The method according to claim 85, wherein the start event indicator is based on a signal transmitted over a wireless channel. 93-97. (canceled) 98. An apparatus for a mobile device, the apparatus comprising: an audio logging processor configured to: automatically detect a start event indicator; process a first portion of an audio input signal to obtain first information, in response to the detecting of the start event indicator; and determine at least one recording parameter based on the first information; and an audio capturing unit configured to reconfigure itself based on the determined at least one recording parameter. 99. (canceled) 100. 
The apparatus according to claim 98, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the audio capturing unit. 101. The apparatus according to claim 98, wherein the at least one recording parameter includes information indicative of the number of active microphones of the mobile device. 102. The apparatus according to claim 98, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake-up interval or active duration. 103. The apparatus according to claim 98, wherein the first information is context information indicative of an environment in which the mobile device is recording. 104. The apparatus according to claim 98, wherein the first information is context information indicative of a characteristic of the audio input signal. 105. The apparatus according to claim 98, wherein the start event indicator is based on a signal transmitted over a wireless channel. 106-110. (canceled) 111. An apparatus for a mobile device, the apparatus comprising: means for automatically detecting a start event indicator; means for processing a first portion of an audio input signal to obtain first information in response to detecting the start event indicator; means for determining at least one recording parameter based on the first information; and means for reconfiguring an audio capturing unit of the mobile device based on the determined at least one recording parameter. 112. (canceled) 113. The apparatus according to claim 111, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the audio capturing unit. 114. The apparatus according to claim 111, wherein the at least one recording parameter includes information indicative of the number of active microphones of the mobile device. 115. 
The apparatus according to claim 111, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake-up interval or active duration. 116. The apparatus according to claim 111, wherein the first information is context information indicative of an environment in which the mobile device is recording. 117. The apparatus according to claim 111, wherein the first information is context information indicative of a characteristic of the audio input signal. 118. The apparatus according to claim 111, wherein the start event indicator is based on a signal transmitted over a wireless channel. 119-123. (canceled) 124. A non-transitory computer-readable medium comprising instructions which when executed by a processor cause the processor to: automatically detect a start event indicator; process a first portion of an audio input signal to obtain first information in response to detecting the start event indicator; determine at least one recording parameter based on the first information; and reconfigure an audio capturing unit of the mobile device based on the determined at least one recording parameter. 125. (canceled) 126. The computer-readable medium according to claim 124, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the audio capturing unit. 127. The computer-readable medium according to claim 124, wherein the at least one recording parameter includes information indicative of the number of active microphones of the mobile device. 128. The computer-readable medium according to claim 124, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake-up interval or active duration. 129. The computer-readable medium according to claim 124, wherein the first information is context information indicative of an environment in which the mobile device is recording. 130. 
The computer-readable medium according to claim 124, wherein the first information is context information indicative of a characteristic of the audio input signal. 131. The computer-readable medium according to claim 124, wherein the start event indicator is based on a signal transmitted over a wireless channel. 132-136. (canceled)
A mobile device that is capable of automatically starting and ending the recording of an audio signal captured by at least one microphone is presented. The mobile device is capable of adjusting a number of parameters related to audio logging based on the context information of the audio input signal. 1-84. (canceled) 85. A method for a mobile device, the method comprising: in response to automatically detecting a start event indicator, processing a first portion of an audio input signal to obtain first information; determining at least one recording parameter based on the first information; and reconfiguring an audio capturing unit of the mobile device based on the determined at least one recording parameter. 86. (canceled) 87. The method according to claim 85, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the mobile device. 88. The method according to claim 85, wherein the at least one recording parameter includes information indicative of the number of active microphones of the mobile device. 89. The method according to claim 85, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake up interval or active duration. 90. The method according to claim 85, wherein the first information is context information describing an environment in which the mobile device is recording. 91. The method according to claim 85, wherein the first information is context information describing a characteristic of the audio input signal. 92. The method according to claim 85, wherein the start event indicator is based on a signal transmitted over a wireless channel. 93-97. (canceled) 98. 
An apparatus for a mobile device, the apparatus comprising: an audio logging processor configured to: automatically detect a start event indicator; process a first portion of an audio input signal to obtain first information, in response to the detecting of the start event indicator; and determine at least one recording parameter based on the first information; and an audio capturing unit configured to reconfigure itself based on the determined at least one recording parameter. 99. (canceled) 100. The apparatus according to claim 98, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the audio capturing unit. 101. The apparatus according to claim 98, wherein the at least one recording parameter includes information indicative of the number of active microphones of the mobile device. 102. The apparatus according to claim 98, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake up interval or active duration. 103. The apparatus according to claim 98, wherein the first information is context information indicative of an environment in which the mobile device is recording. 104. The apparatus according to claim 98, wherein the first information is context information indicative of a characteristic of the audio input signal. 105. The apparatus according to claim 98, wherein the start event indicator is based on a signal transmitted over a wireless channel. 106-110. (canceled) 111. 
An apparatus for a mobile device, the apparatus comprising: means for automatically detecting a start event indicator; means for processing a first portion of an audio input signal to obtain first information in response to detecting the start event indicator; means for determining at least one recording parameter based on the first information; and means for reconfiguring an audio capturing unit of the mobile device based on the determined at least one recording parameter. 112. (canceled) 113. The apparatus according to claim 111, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the audio capturing unit. 114. The apparatus according to claim 111, wherein the at least one recording parameter includes information indicative of the number of active microphones of the mobile device. 115. The apparatus according to claim 111, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake up interval or active duration. 116. The apparatus according to claim 111, wherein the first information is context information indicative of an environment in which the mobile device is recording. 117. The apparatus according to claim 111, wherein the first information is context information indicative of a characteristic of the audio input signal. 118. The apparatus according to claim 111, wherein the start event indicator is based on a signal transmitted over a wireless channel. 119-123. (canceled) 124. 
A non-transitory computer-readable medium comprising instructions which, when executed by a processor, cause the processor to: automatically detect a start event indicator; process a first portion of an audio input signal to obtain first information in response to detecting the start event indicator; determine at least one recording parameter based on the first information; and reconfigure an audio capturing unit of the mobile device based on the determined at least one recording parameter. 125. (canceled) 126. The computer-readable medium according to claim 124, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the audio capturing unit. 127. The computer-readable medium according to claim 124, wherein the at least one recording parameter includes information indicative of the number of active microphones of the mobile device. 128. The computer-readable medium according to claim 124, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake up interval or active duration. 129. The computer-readable medium according to claim 124, wherein the first information is context information indicative of an environment in which the mobile device is recording. 130. The computer-readable medium according to claim 124, wherein the first information is context information indicative of a characteristic of the audio input signal. 131. The computer-readable medium according to claim 124, wherein the start event indicator is based on a signal transmitted over a wireless channel. 132-136. (canceled)
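The claimed flow (detect a start event, process a first portion of the audio input signal to obtain context information, determine at least one recording parameter such as A/D sampling frequency, data width, or number of active microphones, and reconfigure the audio capturing unit) can be sketched as follows. Every name, threshold, and parameter value in this sketch is an illustrative assumption, not the patent's implementation.

```python
# Illustrative sketch of the claimed reconfiguration flow; the heuristic
# and parameter choices below are assumptions for demonstration only.
from dataclasses import dataclass


@dataclass
class CaptureConfig:
    sampling_hz: int      # A/D converter sampling frequency
    data_width_bits: int  # A/D converter data width
    active_mics: int      # number of active microphones


def infer_context(first_portion):
    """Derive context information from a first portion of the audio
    input signal. Placeholder energy heuristic (assumed, not claimed)."""
    energy = sum(x * x for x in first_portion) / max(len(first_portion), 1)
    return "music" if energy > 0.25 else "speech"


def choose_parameters(context):
    """Map context information to at least one recording parameter."""
    if context == "music":
        return CaptureConfig(sampling_hz=44100, data_width_bits=16, active_mics=2)
    return CaptureConfig(sampling_hz=8000, data_width_bits=8, active_mics=1)


def on_start_event(first_portion):
    """Claimed sequence: start event detected -> process first portion ->
    determine parameters -> return the configuration used to reconfigure
    the audio capturing unit."""
    return choose_parameters(infer_context(first_portion))
```

For example, a high-energy first portion would be classified as "music" and yield the wider, higher-rate configuration, while a quiet portion would fall back to the low-power single-microphone configuration.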
2,600
10,186
10,186
15,243,624
2,654
A queueless contact center is described along with various methods and mechanisms for administering the same. The contact center proposed herein provides the ability to, among other things, achieve true one-to-one matching. Solutions are also provided for managing data structures utilized by the queueless contact center. Furthermore, mechanisms for generating traditional queue-based performance views and metrics for the queueless contact center are proposed to help facilitate a smooth transition from traditional queue-based contact centers to the next generation contact centers described herein.
1. A method, comprising: determining, by a microprocessor, an available work item from a set of work items in a contact center; wherein the contact center comprises a resource pool devoid of queues, wherein some of the resources in the resource pool are paired resources that are matched with respective paired work items from the set of work items, and other resources in the resource pool are available resources that are not matched with any of the work items from the set of work items, and wherein every one of the available resources and paired resources comprises a resource attribute combination that indicates its abilities; analyzing, by the microprocessor, the available work item for processing requirements; determining, by the microprocessor, an available work item attribute combination that satisfies the processing requirements; and scanning, by the microprocessor, the resource attribute combination of every one of the available resources and the paired resources to determine a set of qualified resources that have the resource attribute combinations that match the available work item attribute combination. 2. The method of claim 1, further comprising: generating, by the microprocessor, a resource data structure comprising the resource attribute combinations of the available resources and the paired resources, wherein the resource data structure is a resource bitmap. 3. The method of claim 2, wherein each bit in the resource bitmap corresponds to a single one of the available resources and the paired resources in the resource pool. 4. The method of claim 3, wherein a value of each bit in the resource bitmap is assigned either a one or a zero depending upon whether the resource attribute combination of each of the available resource and the paired resource is qualified to be matched with the available work item attribute combination. 5. The method of claim 3, wherein every resource in the contact center is represented in the resource pool and the resource bitmap. 
6. The method of claim 5, wherein the resource bitmap comprises more values of zero than values of one. 7. The method of claim 2, wherein the resource bitmap is continuous in memory and wherein multiple bits of the resource bitmap are evaluated simultaneously during the scanning of the resource attribute combination of every one of the available resources and the paired resources. 8. The method of claim 7, wherein a Boolean value is computed by the microprocessor for the multiple bits such that the multiple bits can be evaluated via the Boolean value. 9. The method of claim 1, further comprising: analyzing, by the microprocessor, an eligibility of each resource in the set of qualified resources; based on the analysis of the eligibility of each resource in the set of qualified resources, determining, by the microprocessor, a selected resource from the set of qualified resources; and assigning, by the microprocessor, the available work item to the selected resource, wherein the selected resource is an optimal resource for the available work item at the time the resource data structure is generated. 10. 
A contact center, comprising: a microprocessor; a microprocessor executable work assignment engine that, when executed by the microprocessor in one or more servers, makes work assignment decisions for work items received in the contact center; and a plurality of bitmaps, the plurality of bitmaps including a work item bitmap and a resource bitmap, wherein the work item bitmap correlates work items to a work item attribute combination identifier associated with processing requirements of each work item, wherein the resource bitmap correlates resources to a resource attribute combination identifier associated with processing capabilities of each resource, wherein the resources belong to a resource pool that is devoid of queues, wherein the resource pool comprises paired resources that are matched, by the work assignment engine, with respective paired work items from the work items, and available resources that are not matched with any of the work items, and wherein the work item attribute combination identifiers of the work items are matched, by the work assignment engine, with the resource attribute combination identifiers of the available resources and the paired resources by scanning one or both of the work item bitmap and resource bitmap. 11. The contact center of claim 10, wherein each bit in the resource bitmap corresponds singly to each one of the available resources and each one of the paired resources in the resource pool, and wherein the work item bitmap is associated with a work pool comprising the work items, and each bit in the work item bitmap corresponds singly to each one of the work items in the work pool. 12. 
The contact center of claim 11, wherein a value of each bit in the resource bitmap is assigned either a one or zero depending upon whether the available resource or the paired resource is qualified to be assigned to a particular work item and wherein every available resource and paired resource in the contact center is represented in the resource pool and the resource bitmap. 13. The contact center of claim 12, wherein the value assigned to each bit in the resource bitmap depends upon whether the available resource or paired resource has the resource attribute combination identifier equal to the work item attribute combination identifier. 14. The contact center of claim 11, wherein the resource bitmap is continuous in memory. 15. The contact center of claim 11, wherein multiple bits of the resource bitmap are evaluated simultaneously during the scanning of the resource bitmap. 16. The contact center of claim 15, wherein a Boolean value is computed for the multiple bits such that the multiple bits can be evaluated via the Boolean value. 17. 
A method, comprising: determining, by a microprocessor, a resource from a set of resources in a resource pool in a contact center; wherein the resource pool is devoid of queues, and wherein some of the resources in the resource pool are paired resources that are matched with respective paired work items from the set of work items, and other resources in the resource pool are available resources that are not matched with any of the work items from the set of work items, and the resource is one of the available resources and the paired resources, determining, by the microprocessor, a resource attribute combination that indicates abilities of the resource; determining, by the microprocessor, work item attribute combinations that satisfy processing requirements of work items in the contact center; and scanning, by the microprocessor, the work item attribute combination of each of the work items to determine a set of qualified work items that have the work item attribute combinations that match the resource attribute combinations of the resource. 18. The method of claim 17, further comprising: generating, by the microprocessor, a work item data structure comprising the work item attribute combinations of the work items, wherein the work item data structure is a work item bitmap, wherein the work item bitmap is associated with a work item pool and each bit in the work item bitmap corresponds to a single work item in the work item pool, wherein a value of each bit in the work item bitmap is assigned either a one or a zero depending upon whether the work item attribute combination matches the resource attribute combination, and wherein the resource bitmap comprises mostly values of zero. 19. 
The method of claim 18, wherein the work item bitmap is continuous in memory and wherein multiple bits of the work item bitmap are evaluated simultaneously, by the microprocessor, during the scanning of the work item bitmap by computing a Boolean value for the multiple bits such that the multiple bits can be evaluated via the Boolean value. 20. The method of claim 17, further comprising: analyzing, by the microprocessor, an eligibility of each work item in the set of qualified work items; based on the analysis of the eligibility of the work items in the set of qualified work items, determining, by the microprocessor, a selected work item from the set of qualified work items; and assigning, by the microprocessor, the selected work item to the resource.
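The bitmap mechanics recited above (one bit per resource, a mostly-zero bitmap held contiguously in memory, and multiple bits evaluated simultaneously via a single Boolean value) can be sketched as follows. The function names and the set-based attribute model are assumptions for illustration, not taken from the patent.

```python
# Illustrative sketch of queueless bitmap matching; names and the
# attribute representation are assumed for demonstration purposes.

def build_resource_bitmap(resource_attrs, work_item_attrs):
    """One bit per resource: set to 1 when the resource's attribute
    combination satisfies the work item's attribute combination."""
    bitmap = 0
    for i, attrs in enumerate(resource_attrs):
        if work_item_attrs.issubset(attrs):
            bitmap |= 1 << i
    return bitmap


def qualified_resources(bitmap, n_resources, word_bits=64):
    """Scan the bitmap word-at-a-time: a whole word of bits is rejected
    with one Boolean test when it contains no set bits, which is the
    common case for a mostly-zero bitmap."""
    qualified = []
    for base in range(0, n_resources, word_bits):
        word = (bitmap >> base) & ((1 << word_bits) - 1)
        if not word:  # single Boolean value covering multiple bits
            continue
        for i in range(word_bits):
            if word & (1 << i) and base + i < n_resources:
                qualified.append(base + i)
    return qualified
```

For example, with three resources whose attribute sets are `{"english", "sales"}`, `{"spanish"}`, and `{"english", "support"}`, a work item requiring `{"english"}` produces the bitmap `0b101`, and scanning it yields resources 0 and 2 as the qualified set.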
A queueless contact center is described along with various methods and mechanisms for administering the same. The contact center proposed herein provides the ability to, among other things, achieve true one-to-one matching. Solutions are also provided for managing data structures utilized by the queueless contact center. Furthermore, mechanisms for generating traditional queue-based performance views and metrics for the queueless contact center are proposed to help facilitate a smooth transition from traditional queue-based contact centers to the next generation contact centers described herein. 1. A method, comprising: determining, by a microprocessor, an available work item from a set of work items in a contact center; wherein the contact center comprises a resource pool devoid of queues, wherein some of the resources in the resource pool are paired resources that are matched with respective paired work items from the set of work items, and other resources in the resource pool are available resources that are not matched with any of the work items from the set of work items, and wherein every one of the available resources and paired resources comprises a resource attribute combination that indicates its abilities; analyzing, by the microprocessor, the available work item for processing requirements; determining, by the microprocessor, an available work item attribute combination that satisfies the processing requirements; and scanning, by the microprocessor, the resource attribute combination of every one of the available resources and the paired resources to determine a set of qualified resources that have the resource attribute combinations that match the available work item attribute combination. 2. The method of claim 1, further comprising: generating, by the microprocessor, a resource data structure comprising the resource attribute combinations of the available resources and the paired resources, wherein the resource data structure is a resource bitmap. 3. 
The method of claim 2, wherein each bit in the resource bitmap corresponds to a single one of the available resources and the paired resources in the resource pool. 4. The method of claim 3, wherein a value of each bit in the resource bitmap is assigned either a one or a zero depending upon whether the resource attribute combination of each of the available resource and the paired resource is qualified to be matched with the available work item attribute combination. 5. The method of claim 3, wherein every resource in the contact center is represented in the resource pool and the resource bitmap. 6. The method of claim 5, wherein the resource bitmap comprises more values of zero than values of one. 7. The method of claim 2, wherein the resource bitmap is continuous in memory and wherein multiple bits of the resource bitmap are evaluated simultaneously during the scanning of the resource attribute combination of every one of the available resources and the paired resources. 8. The method of claim 7, wherein a Boolean value is computed by the microprocessor for the multiple bits such that the multiple bits can be evaluated via the Boolean value. 9. The method of claim 1, further comprising: analyzing, by the microprocessor, an eligibility of each resource in the set of qualified resources; based on the analysis of the eligibility of each resource in the set of qualified resources, determining, by the microprocessor, a selected resource from the set of qualified resources; and assigning, by the microprocessor, the available work item to the selected resource, wherein the selected resource is an optimal resource for the available work item at the time the resource data structure is generated. 10. 
A contact center, comprising: a microprocessor; a microprocessor executable work assignment engine that, when executed by the microprocessor in one or more servers, makes work assignment decisions for work items received in the contact center; and a plurality of bitmaps, the plurality of bitmaps including a work item bitmap and a resource bitmap, wherein the work item bitmap correlates work items to a work item attribute combination identifier associated with processing requirements of each work item, wherein the resource bitmap correlates resources to a resource attribute combination identifier associated with processing capabilities of each resource, wherein the resources belong to a resource pool that is devoid of queues, wherein the resource pool comprises paired resources that are matched, by the work assignment engine, with respective paired work items from the work items, and available resources that are not matched with any of the work items, and wherein the work item attribute combination identifiers of the work items are matched, by the work assignment engine, with the resource attribute combination identifiers of the available resources and the paired resources by scanning one or both of the work item bitmap and resource bitmap. 11. The contact center of claim 10, wherein each bit in the resource bitmap corresponds singly to each one of the available resources and each one of the paired resources in the resource pool, and wherein the work item bitmap is associated with a work pool comprising the work items, and each bit in the work item bitmap corresponds singly to each one of the work items in the work pool. 12. 
The contact center of claim 11, wherein a value of each bit in the resource bitmap is assigned either a one or zero depending upon whether the available resource or the paired resource is qualified to be assigned to a particular work item and wherein every available resource and paired resource in the contact center is represented in the resource pool and the resource bitmap. 13. The contact center of claim 12, wherein the value assigned to each bit in the resource bitmap depends upon whether the available resource or paired resource has the resource attribute combination identifier equal to the work item attribute combination identifier. 14. The contact center of claim 11, wherein the resource bitmap is continuous in memory. 15. The contact center of claim 11, wherein multiple bits of the resource bitmap are evaluated simultaneously during the scanning of the resource bitmap. 16. The contact center of claim 15, wherein a Boolean value is computed for the multiple bits such that the multiple bits can be evaluated via the Boolean value. 17. 
A method, comprising: determining, by a microprocessor, a resource from a set of resources in a resource pool in a contact center; wherein the resource pool is devoid of queues, and wherein some of the resources in the resource pool are paired resources that are matched with respective paired work items from the set of work items, and other resources in the resource pool are available resources that are not matched with any of the work items from the set of work items, and the resource is one of the available resources and the paired resources, determining, by the microprocessor, a resource attribute combination that indicates abilities of the resource; determining, by the microprocessor, work item attribute combinations that satisfy processing requirements of work items in the contact center; and scanning, by the microprocessor, the work item attribute combination of each of the work items to determine a set of qualified work items that have the work item attribute combinations that match the resource attribute combinations of the resource. 18. The method of claim 17, further comprising: generating, by the microprocessor, a work item data structure comprising the work item attribute combinations of the work items, wherein the work item data structure is a work item bitmap, wherein the work item bitmap is associated with a work item pool and each bit in the work item bitmap corresponds to a single work item in the work item pool, wherein a value of each bit in the work item bitmap is assigned either a one or a zero depending upon whether the work item attribute combination matches the resource attribute combination, and wherein the resource bitmap comprises mostly values of zero. 19. 
The method of claim 18, wherein the work item bitmap is continuous in memory and wherein multiple bits of the work item bitmap are evaluated simultaneously, by the microprocessor, during the scanning of the work item bitmap by computing a Boolean value for the multiple bits such that the multiple bits can be evaluated via the Boolean value. 20. The method of claim 17, further comprising: analyzing, by the microprocessor, an eligibility of each work item in the set of qualified work items; based on the analysis of the eligibility of the work items in the set of qualified work items, determining, by the microprocessor, a selected work item from the set of qualified work items; and assigning, by the microprocessor, the selected work item to the resource.
2,600
10,187
10,187
15,393,500
2,647
An application such as a contacts management application may be configured to display information for multiple contacts. The displayed information may include carrier information for certain contacts, indicating which of multiple cellular communication carriers provides cellular communication services for each contact. This may allow a user to provide recommendations to the user's contacts, such as promotional messages suggesting that the contacts switch to the carrier of the user. In some embodiments, a control may be displayed that automatically initiates a promotional message to a particular contact.
1. One or more non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform actions comprising: accessing a contacts database to obtain contact information for multiple contacts, the contact information indicating a cellular telephone number of each of the multiple contacts; submitting a query to a network-accessible service, the query specifying a cellular telephone number of a particular contact of the multiple contacts; receiving a response to the query from the network-accessible service, the response indicating identifying information of a first cellular communication carrier; determining an identifier of the first cellular communication carrier based at least in part on the identifying information; listing the multiple contacts on a display of a device while a contact list is being accessed; and showing on the display, in association with the particular contact, the identifier of the first cellular communication carrier while the contact list is being accessed. 2. The one or more non-transitory computer-readable media of claim 1, wherein: the identifying information comprises a hostname of the first cellular communication carrier; and the determining is based at least in part on the hostname. 3. The one or more non-transitory computer-readable media of claim 1, wherein: the identifying information comprises a uniform resource identifier (URI); the actions further comprising parsing the URI to determine a hostname specified by the URI; and the determining is based at least in part on the hostname. 4. The one or more non-transitory computer-readable media of claim 1, wherein the identifier comprises text that identifies the first cellular communication carrier. 5. The one or more non-transitory computer-readable media of claim 1, wherein the identifier comprises a logo of the first cellular communication carrier. 6. 
The one or more non-transitory computer-readable media of claim 1, the actions further comprising displaying identifiers for respectively corresponding ones of the contacts, wherein for each contact of the contacts, the identifier indicates which of multiple cellular communication carriers provides cellular communication services for the contact. 7. The one or more non-transitory computer-readable media of claim 6, the actions further comprising visibly grouping a set of the contacts for which a single one of the multiple cellular communication carriers provides cellular communication services. 8. The one or more non-transitory computer-readable media of claim 1, wherein the device receives cellular communication services from a home cellular communication carrier, the actions further comprising: showing on the display, in association with the particular contact, an offer to send a message to the contact; and in response to selection of the message by a user of the device, initiating a message to the contact on behalf of the user, the message promoting the home cellular communication carrier. 9. A method comprising: providing a graphical user interface that shows a contact list having information corresponding to multiple contacts, the information including cellular telephone numbers; submitting a query to obtain information identifying a first cellular communication carrier that provides cellular communication services to a particular contact of the multiple contacts, the query specifying a cellular telephone number of the particular contact; and showing in the contact list, in association with the particular contact, an identifier corresponding to the first cellular communication carrier. 10. 
The method of claim 9, further comprising: receiving a response to the query, the response indicating a uniform resource identifier (URI) corresponding to the cellular telephone number of the particular contact; parsing the URI to determine a hostname specified by the URI; and determining the identifier based at least in part on the URI. 11. The method of claim 9, wherein the identifier comprises at least one of (a) text that identifies the first cellular communication carrier; and (b) a logo of the first cellular communication carrier. 12. The method of claim 9, wherein submitting the query comprises submitting the query to an E.164 Number to URI Mapping (ENUM) server; the method further comprising receiving a response from the ENUM server, the response indicating a service address associated with the particular contact. 13. The method of claim 9, further comprising: showing in the graphical user interface, in association with the particular contact, an offer to initiate a message to the particular contact; and in response to selection of the offer by a user, initiating the message to the particular contact on behalf of the user, the message promoting a second cellular communication carrier. 14. The method of claim 9, wherein the graphical user interface further shows a call history that specifies the cellular telephone number of the particular contact. 15. The method of claim 9, wherein the graphical user interface lists multiple contacts, the particular contact being among the multiple contacts. 16. 
A communication device comprising: one or more processors; a display; one or more non-transitory computer-readable media storing computer-executable instructions that, when executed on the one or more processors, cause the one or more processors to perform actions comprising: accessing a contacts database to obtain contact information for multiple contacts, the contact information indicating a cellular telephone number of each of the multiple contacts; listing at least some of the contact information on the display; for each contact of a plurality of the contacts, determining a cellular communication carrier that provides cellular communication services for the contact; and showing on the display, while a contact list is being accessed and in association with each contact of the plurality of the contacts, an identifier of the cellular communication carrier that provides cellular communication services for the contact. 17. The communication device of claim 16, wherein determining the cellular communication carrier comprises submitting a query to a network-accessible service, the query specifying at least one cellular telephone number. 18. The communication device of claim 16, wherein the identifier comprises at least one of (a) text that identifies the cellular communication carrier; and (b) a logo of the cellular communication carrier. 19. The communication device of claim 16, wherein determining the cellular communication carrier comprises obtaining a uniform resource identifier (URI) corresponding to one of the multiple contacts. 20. The communication device of claim 16, the actions further comprising: showing on the display, in association with a particular contact, an offer to initiate a message to the particular contact; and in response to selection of the offer, initiating the message to the particular contact, the message promoting a second cellular communication carrier.
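The claimed carrier lookup (query an ENUM-style service with a telephone number, parse the returned URI for its hostname, and map the hostname to a carrier identifier) can be sketched as follows. The carrier table, hostnames, and URI format here are hypothetical assumptions; real ENUM NAPTR records may use a different URI shape.

```python
# Illustrative sketch of deriving a carrier identifier from a URI
# hostname; the carrier mapping and URIs are hypothetical examples.
from urllib.parse import urlparse

# Hypothetical mapping from service hostnames to display identifiers.
CARRIER_BY_HOST_SUFFIX = {
    "carrier-a.example.net": "Carrier A",
    "carrier-b.example.net": "Carrier B",
}


def carrier_from_uri(uri):
    """Parse the URI, extract its hostname, and match it against known
    carrier host suffixes; return None if the carrier is unrecognized."""
    host = urlparse(uri).hostname or ""
    for suffix, name in CARRIER_BY_HOST_SUFFIX.items():
        if host == suffix or host.endswith("." + suffix):
            return name
    return None
```

A response URI such as `sip://+15551234567@gw1.carrier-a.example.net` (hypothetical) would resolve to "Carrier A", which the contact list could then show as text or a logo next to the contact.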
An application such as a contacts management application may be configured to display information for multiple contacts. The displayed information may include carrier information for certain contacts, indicating which of multiple cellular communication carriers provides cellular communication services for each contact. This may allow a user to provide recommendations to the user's contacts, such as promotional messages suggesting that the contacts switch to the carrier of the user. In some embodiments, a control may be displayed that automatically initiates a promotional message to a particular contact. 1. One or more non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors of a device, cause the one or more processors to perform actions comprising: accessing a contacts database to obtain contact information for multiple contacts, the contact information indicating a cellular telephone number of each of the multiple contacts; submitting a query to a network-accessible service, the query specifying a cellular telephone number of a particular contact of the multiple contacts; receiving a response to the query from the network-accessible service, the response indicating identifying information of a first cellular communication carrier; determining an identifier of the first cellular communication carrier based at least in part on the identifying information; listing the multiple contacts on a display of the device while a contact list is being accessed; and showing on the display, in association with the particular contact, the identifier of the first cellular communication carrier while the contact list is being accessed. 2. The one or more non-transitory computer-readable media of claim 1, wherein: the identifying information comprises a hostname of the first cellular communication carrier; and the determining is based at least in part on the hostname. 3. 
The one or more non-transitory computer-readable media of claim 1, wherein: the identifying information comprises a uniform resource identifier (URI); the actions further comprising parsing the URI to determine a hostname specified by the URI; and the determining is based at least in part on the hostname. 4. The one or more non-transitory computer-readable media of claim 1, wherein the identifier comprises text that identifies the first cellular communication carrier. 5. The one or more non-transitory computer-readable media of claim 1, wherein the identifier comprises a logo of the first cellular communication carrier. 6. The one or more non-transitory computer-readable media of claim 1, the actions further comprising displaying identifiers for respectively corresponding ones of the contacts, wherein for each contact of the contacts, the identifier indicates which of multiple cellular communication carriers provides cellular communication services for the contact. 7. The one or more non-transitory computer-readable media of claim 6, the actions further comprising visibly grouping a set of the contacts for which a single one of the multiple cellular communication carriers provides cellular communication services. 8. The one or more non-transitory computer-readable media of claim 1, wherein the device receives cellular communication services from a home cellular communication carrier, the actions further comprising: showing on the display, in association with the particular contact, an offer to send a message to the contact; and in response to selection of the message by a user of the device, initiating a message to the contact on behalf of the user, the message promoting the home cellular communication carrier. 9. 
A method comprising: providing a graphical user interface that shows a contact list having information corresponding to multiple contacts, the information including cellular telephone numbers; submitting a query to obtain information identifying a first cellular communication carrier that provides cellular communication services to a particular contact of the multiple contacts, the query specifying a cellular telephone number of the particular contact; and showing in the contact list, in association with the particular contact, an identifier corresponding to the first cellular communication carrier. 10. The method of claim 9, further comprising: receiving a response to the query, the response indicating a uniform resource identifier (URI) corresponding to the cellular telephone number of the particular contact; parsing the URI to determine a hostname specified by the URI; and determining the identifier based at least in part on the URI. 11. The method of claim 9, wherein the identifier comprises at least one of (a) text that identifies the first cellular communication carrier; and (b) a logo of the first cellular communication carrier. 12. The method of claim 9, wherein submitting the query comprises submitting the query to an E.164 Number to URI Mapping (ENUM) server; the method further comprising receiving a response from the ENUM server, the response indicating a service address associated with the particular contact. 13. The method of claim 9, further comprising: showing in the graphical user interface, in association with the particular contact, an offer to initiate a message to the particular contact; and in response to selection of the offer by a user, initiating the message to the particular contact on behalf of the user, the message promoting a second cellular communication carrier. 14. The method of claim 9, wherein the graphical user interface further shows a call history that specifies the cellular telephone number of the particular contact. 15. 
The method of claim 9, wherein the graphical user interface lists multiple contacts, the particular contact being among the multiple contacts. 16. A communication device comprising: one or more processors; a display; one or more non-transitory computer-readable media storing computer-executable instructions that, when executed on the one or more processors, cause the one or more processors to perform actions comprising: accessing a contacts database to obtain contact information for multiple contacts, the contact information indicating a cellular telephone number of each of the multiple contacts; listing at least some of the contact information on the display; for each contact of a plurality of the contacts, determining a cellular communication carrier that provides cellular communication services for the contact; and showing on the display, while a contact list is being accessed and in association with each contact of the plurality of the contacts, an identifier of the cellular communication carrier that provides cellular communication services for the contact. 17. The communication device of claim 16, wherein determining the cellular communication carrier comprises submitting a query to a network-accessible service, the query specifying at least one cellular telephone number. 18. The communication device of claim 16, wherein the identifier comprises at least one of (a) text that identifies the cellular communication carrier; and (b) a logo of the cellular communication carrier. 19. The communication device of claim 16, wherein determining the cellular communication carrier comprises obtaining a uniform resource identifier (URI) corresponding to one of the multiple contacts. 20. 
The communication device of claim 16, the actions further comprising: showing on the display, in association with a particular contact, an offer to initiate a message to the particular contact; and in response to selection of the offer, initiating the message to the particular contact, the message promoting a second cellular communication carrier.
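The carrier-lookup flow in claims 1-3 and 10-12 (query a service such as an ENUM server with a cellular number, receive a URI, parse its hostname, and map the hostname to a carrier identifier) can be sketched as follows. This is a minimal illustration, not the claimed implementation: the hostname-to-carrier table and the SIP-style fallback parsing are assumptions, since the claims leave the concrete mapping open.

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical mapping from ENUM-resolved hostnames to carrier identifiers;
# a real deployment would populate this from operator data.
CARRIER_BY_HOSTNAME = {
    "sms.carrier-a.example.com": "Carrier A",
    "mms.carrier-b.example.net": "Carrier B",
}

def carrier_identifier(uri: str) -> Optional[str]:
    """Derive a carrier identifier from the URI returned for a number."""
    hostname = urlparse(uri).hostname
    if hostname is None:
        # sip:/tel:-style URIs carry no netloc; fall back to the part
        # after '@', dropping any ';param' suffix.
        _, _, tail = uri.partition("@")
        hostname = tail.split(";")[0] or None
    if hostname is None:
        return None
    return CARRIER_BY_HOSTNAME.get(hostname)
```

The returned string (or a logo keyed by it) would then be shown next to the contact in the list, per claim 11's text-or-logo alternatives.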
2,600
10,188
10,188
14,899,814
2,625
An apparatus, the apparatus comprising at least one processor, and at least one memory including computer program code, the at least one memory and the computer program code configured, with the at least one processor, to cause the apparatus to perform at least the following: based on one or more determined input manner characteristics of user electronic-scribed input, associate the user electronic-scribed input with a function to be performed using the user electronic-scribed input.
1-21. (canceled) 22. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: based on one or more determined input manner characteristics of user electronic-scribed input, associate the user electronic-scribed input with a function to be performed using the user electronic-scribed input. 23. The apparatus of claim 22, wherein the one or more determined input manner characteristics are at least one of: a determined input plane of an electronic stylus during the scribing of the user electronic-scribed input; a determined input angle of an electronic stylus during the scribing of the user electronic-scribed input; a determined pressure applied when using an electronic stylus during the scribing of the user electronic-scribed input; a determined surface type on which the user electronic-scribed input is made; a determined size of the user electronic-scribed input; and a determined speed of the user electronic-scribed input. 24. The apparatus of claim 23, wherein the apparatus is configured to associate the determined input manner characteristic of a particular determined surface on which the user electronic-scribed input is made with including the user electronic-scribed input in a particular type of electronic document. 25. The apparatus of claim 23, wherein the apparatus is configured to associate the determined input manner characteristic of a particular determined input plane of an electronic stylus during the scribing of the user electronic-scribed input with transmitting the user electronic-scribed input to a particular device. 26. The apparatus of claim 23, wherein the electronic stylus is the apparatus or is comprised in the apparatus. 27. 
The apparatus of claim 22, wherein the computer program code is further configured to, with the at least one processor, cause the apparatus to perform at least one of determine, and receive an indication of: the input plane of the electronic stylus during the scribing of the user electronic-scribed input; the input angle of the electronic stylus during the scribing of the user electronic-scribed input; the pressure applied when using the electronic stylus during the scribing of the user electronic-scribed input; the surface type on which the user electronic-scribed input is made; the size of the user electronic-scribed input; and the speed of the user electronic-scribed input. 28. The apparatus of claim 22, wherein the input manner characteristics are determined using one or more of the following devices comprised in an electronic stylus used by a user to scribe the user electronic-scribed input: an accelerometer; a gyroscope; a proximity sensor; a trackball; an optical camera; an infra-red camera; a microphone; and a pressure sensor. 29. The apparatus of claim 22, wherein the computer program code is further configured to, with the at least one processor, cause the apparatus to associate the user electronic-scribed input with the function to allow the user electronic-scribed input to be one or more of: included in a particular type of electronic document; included in a particular entry field of a particular application; identified with a particular language; included in a particular application from a plurality of applications of the same type; transmitted to a particular device; associated with a particular writing style; and associated with an electronic message for transmission using a particular network service card. 30. 
The apparatus of claim 29, wherein allowing the user electronic-scribed input to be included in a particular type of electronic document comprises including the user electronic-scribed input in one or more of: an e-mail; an SMS message; an MMS message; a chat message; a word processing document; an electronic note; a drawing; a spreadsheet; a database; a search field; a web address; and a social media post. 31. The apparatus of claim 29, wherein allowing the user electronic-scribed input to be included in a particular entry field of a particular application comprises including the user electronic-scribed input in one or more of: a search; a web address field; a social media post field, and a data input field. 32. The apparatus of claim 29, wherein allowing the user electronic-scribed input to be included in a particular application from a plurality of applications of the same type comprises identification of the user electronic-scribed input with one or more of: an e-mail application type; a productivity application type; a messaging application type; a calendar application type; a web browsing application type; a social media application type; and a searching application type. 33. The apparatus of claim 29, wherein allowing the user electronic-scribed input to be associated with a particular writing style comprises association with one or more of a particular text style, text size, text formatting, and text colour. 34. The apparatus of claim 22, wherein the user electronic-scribed input is one or more of: handwritten phonetic text input; and handwritten graphical text input. 35. The apparatus of claim 22, wherein the user electronic-scribed input is user electronic-scribed handwritten text input, and the function to be performed is performed using text represented by the user electronic-scribed handwritten text input. 36. The apparatus of claim 22, wherein the user electronic-scribed input is a drawn picture image input. 37. 
The apparatus of claim 22, wherein the apparatus is configured to decipher content of the user electronic-scribed input. 38. The apparatus of claim 22, wherein the apparatus is configured to perform at least one of: determination of the input manner characteristics of the user electronic-scribed input, and association of the function using the user electronic-scribed input. 39. The apparatus of claim 22, wherein the apparatus is one or more of: an electronic stylus, a wand, a portable electronic device, a mobile phone, a smartphone, a tablet computer, a surface computer, a laptop computer, a personal digital assistant, a graphics tablet, a pen-based computer, a non-portable electronic device, a desktop computer, a monitor/display, a household appliance, a server, or a module for one or more of the same. 40. A non-transitory computer readable medium comprising computer program code stored thereon, the computer readable medium and computer program code being configured to, when run on at least one processor, perform at least the following: based on one or more determined input manner characteristics of user electronic-scribed input, associate the user electronic-scribed input with a function to be performed using the user electronic-scribed input. 41. A method comprising: based on one or more determined input manner characteristics of user electronic-scribed input, associating the user electronic-scribed input with a function to be performed using the user electronic-scribed input.
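The dispatch described in claims 22-25 (associate electronic-scribed input with a function based on determined input manner characteristics such as surface type, stylus angle, and pressure) can be sketched as a small rule table. The thresholds, characteristic subset, and function names below are illustrative assumptions; the claims deliberately leave the concrete mapping open.

```python
from dataclasses import dataclass

@dataclass
class ScribeInput:
    """Determined input manner characteristics of one scribed input
    (a subset of those enumerated in claim 23)."""
    pressure: float   # normalised 0..1, from a stylus pressure sensor
    angle_deg: float  # stylus angle relative to the writing surface
    surface: str      # determined surface type, e.g. "desk" or "tablet"

def associate_function(inp: ScribeInput) -> str:
    # Hypothetical rules, checked in priority order:
    if inp.surface == "desk":
        return "include_in_note"            # claim 24: surface -> document type
    if inp.angle_deg > 60.0:
        return "transmit_to_paired_device"  # claim 25: input plane -> device
    if inp.pressure > 0.7:
        return "include_in_email"           # hard press -> e-mail draft
    return "include_in_sms"                 # default light-touch behaviour
```

The returned label stands in for the claim-29 function list (document type, entry field, target device, writing style, and so on).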
2,600
10,189
10,189
11,367,749
2,625
Disclosed herein is a multi-functional hand-held device capable of configuring user inputs based on how the device is to be used. Preferably, the multi-functional hand-held device has at most only a few physical buttons, keys, or switches so that its display size can be substantially increased. The multi-functional hand-held device also incorporates a variety of input mechanisms, including touch sensitive screens, touch sensitive housings, display actuators, audio input, etc. The device also incorporates a user-configurable GUI for each of the multiple functions of the devices.
1. A hand-held electronic device, comprising: a multi-touch input surface; and a processing unit operatively connected to said multi-touch input surface, said processing unit capable of receiving a plurality of concurrent touch inputs from a user via said multi-touch input surface and discriminating a user requested action from the touch inputs; and a display device operatively coupled to the processing unit and configured to present a user interface. 2. A hand-held electronic device as recited in claim 1, wherein said hand-held electronic device includes two or more of the following device functionalities: PDA, mobile phone, music player, camera, video player, game player, handtop, Internet terminal, GPS receiver, and remote control. 3. A hand-held electronic device as recited in claim 1, wherein said hand-held electronic device is capable of reconfiguring or adapting the user interface based on the state or mode of said hand-held electronic device. 4. A hand-held electronic device as recited in claim 3, wherein said display device is a full screen display. 5. A hand-held electronic device as recited in claim 1, wherein said multi-touch input surface is integral with said display device. 6. A hand-held electronic device as recited in claim 5, wherein said hand-held electronic device includes two or more of the following device functionalities: PDA, mobile phone, music player, camera, video player, game player, handtop, Internet terminal, GPS receiver, and remote control. 7. A hand-held electronic device as recited in claim 5, wherein said multi-touch input surface serves as the primary input means necessary to interact with said hand-held electronic device. 8. A hand-held electronic device as recited in claim 7, wherein said hand-held electronic device includes cross-functional physical buttons. 9. 
A hand-held electronic device as recited in claim 5, wherein said multi-touch input surface integral with the display device is a multi-point capacitive touch screen. 10. A hand-held electronic device as recited in claim 9, wherein said hand-held electronic device is operable to recognize touch gestures applied to said multi-touch input surface, wherein the touch gestures are used to control aspects of said hand-held electronic device. 11. A hand-held electronic device as recited in claim 1, wherein said hand-held electronic device is operable to receive simultaneous inputs from different input devices and perform actions based on the simultaneous inputs. 12. A hand-held electronic device as recited in claim 1, wherein signals from various input devices of said hand-held electronic device have different meanings or outputs based on a mode of said hand-held electronic device. 13. A hand-held electronic device as recited in claim 1, wherein said user interface comprises a standard region and a control region, the standard region being used to display data, and the control region including one or more virtual controls for user interaction. 14. A hand-held electronic device as recited in claim 13, wherein at least one of the standard region and the control region are user configurable. 15. A hand-held electronic device as recited in claim 1, wherein said display device comprises a force sensitive display, said force sensitive display causing one or more input signals to be generated when force is exerted thereon. 16. A hand-held electronic device as recited in claim 15, wherein said force sensitive display senses a force indication, and wherein said hand-held electronic device distinguishes the force indication into at least a first touch type and a second touch type. 17. A hand-held electronic device as recited in claim 16, wherein the first touch type corresponds to a light touch, and the second touch type corresponds to a hard touch. 18. 
A hand-held electronic device as recited in claim 1, wherein said hand-held electronic device provides audio or tactile feedback to a user based on user inputs made with respect to said hand-held electronic device. 19. A hand-held electronic device as recited in claim 1, wherein said hand-held electronic device is configurable to actively look for signals in a surrounding environment, and change user interface or mode of operation based on the signals. 20. A hand-held computing device, comprising: a housing; a display arrangement positioned within said housing, said display arrangement including a display and a touch screen; and a device configured to generate a signal when some portion of said display arrangement is moved. 21. A hand-held electronic device, comprising: a touch screen; and a processing unit operatively connected to said touch screen, said processing unit concurrently receives a plurality of touch inputs from a user via said touch screen and discriminates a user requested action from the touch inputs, wherein said touch screen serves as the primary input means necessary to interact with said hand-held electronic device. 22. A hand-held electronic device as recited in claim 21, wherein said hand-held electronic device operates as one or more of a mobile phone, a PDA, a media player, a camera, a game player, a handtop, an Internet terminal, a GPS receiver, or a remote controller. 23. A method performed in a computing device having a display and a touch screen positioned over the display, the method comprising: detecting one or more touches; classifying the one or more touches as a primary touch or a secondary touch; filtering out the secondary touches; differentiating whether the primary touch is a light touch or a hard touch; initiating a control event if the primary touch is a light touch; and implementing a selection event if the primary touch is a hard touch.
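The claim-23 method (detect touches, classify primary versus secondary, filter out secondary touches, differentiate light versus hard, then raise a control or selection event) can be sketched as below. The area-based primary heuristic (largest contact wins, so a resting palm is filtered out) and the force threshold are assumptions of this sketch; the claim specifies neither.

```python
from typing import List, Tuple

# Illustrative threshold only; the claim distinguishes light vs hard
# touches without fixing a force value.
HARD_TOUCH_FORCE = 0.5

def handle_touches(touches: List[Tuple[float, float]]) -> str:
    """Each touch is an (area, force) pair in normalised units."""
    if not touches:
        return "no_event"
    # Classify: the touch with the largest contact area is primary;
    # all other (secondary) touches are filtered out.
    primary = max(touches, key=lambda t: t[0])
    _, force = primary
    # Differentiate light vs hard and dispatch the matching event.
    if force > HARD_TOUCH_FORCE:
        return "selection_event"  # hard touch -> selection
    return "control_event"        # light touch -> control
```

A light/hard split like this lets one surface double as both a tracking control and a selection button, which is the point of the claimed differentiation.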
Disclosed herein is a multi-functional hand-held device capable of configuring user inputs based on how the device is to be used. Preferably, the multi-functional hand-held device has at most only a few physical buttons, keys, or switches so that its display size can be substantially increased. The multi-functional hand-held device also incorporates a variety of input mechanisms, including touch sensitive screens, touch sensitive housings, display actuators, audio input, etc. The device also incorporates a user-configurable GUI for each of the multiple functions of the devices.1. A hand-held electronic device, comprising: a multi-touch input surface; and a processing unit operatively connected to said multi-touch input surface, said processing unit capable of receiving a plurality of concurrent touch inputs from a user via said multi-touch input surface and discriminating a user requested action from the touch inputs; and a display device operatively coupled to the processing unit and configured to present a user interface. 2. A hand-held electronic device as recited in claim 1, wherein said hand-held electronic device is includes two or more of the following device functionalities: PDA, mobile phone, music player, camera, video player, game player, handtop, Internet terminal, GPS receiver, and remote control. 3. A hand-held electronic device as recited in claim 1, wherein said hand-held electronic device is capable of reconfiguring or adapting the user interface based on the state or mode of said hand-held electronic device. 4. A hand-held electronic device as recited in claim 3, wherein said display device is a full screen display. 5. A hand-held electronic device as recited in claim 1, wherein said multi-touch input surface is integral with said display device. 6. 
A hand-held electronic device as recited in claim 5, wherein said hand-held electronic device is includes two or more of the following device functionalities: PDA, mobile phone, music player, camera, video player, game player, camera, handtop, Internet terminal, GPS receiver, and remote control. 7. A hand-held electronic device as recited in claim 5, wherein said multi-touch input surface serves as the primary input means necessary to interact with said hand-held electronic device. 8. A hand-held electronic device as recited in claim 7, wherein said hand-held electronic device includes cross-functional physical buttons. 9. A hand-held electronic device as recited in claim 5, wherein said the multi-touch input surface integral with the display device is a multi-point capacitive touch screen. 10. A hand-held electronic device as recited in claim 9, wherein said hand-held electronic device is operable to recognize touch gestures applied to said multi-touch input surface wherein the touch gestures are used to control aspects of said hand-held electronic device. 11. A hand-held electronic device as recited in claim 1, wherein said hand-held electronic device is operable to receive simultaneous inputs from different inputs devices and perform actions based on the simultaneous inputs. 12. A hand-held electronic device as recited in claim 1, wherein signals from various input devices of said hand-held electronic device have different meanings or outputs based on a mode of said hand-held electronic device. 13. A hand-held electronic device as recited in claim 1, wherein said user interface comprises a standard region and a control region the standard region being used to display data, and the control region including one or more virtual controls for user interaction. 14. A hand-held electronic device as recited in claim 13, wherein at least one of the standard region and the control region are user configurable. 15. 
A hand-held electronic device as recited in claim 1, wherein said display device comprises a force sensitive display, said force sensitive display causing one or more input signals to be generated when force is exerted thereon. 16. A hand-held electronic device as recited in claim 15, wherein said force sensitive display senses a force indication, and wherein said hand-held electronic device distinguishes the force indication into at least a first touch type and a second touch type. 17. A hand-held electronic device as recited in claim 16, wherein the first touch type corresponds to a light touch, and the second touch type corresponds to a hard touch. 18. A hand-held electronic device as recited in claim 1, wherein said hand-held electronic device provides audio or tactile feedback to a user based on user inputs made with respect to said hand-held electronic device. 19. A hand-held electronic device as recited in claim 1, wherein said hand-held electronic device is configurable to actively look for signals in a surrounding environment, and to change its user interface or mode of operation based on the signals. 20. A hand-held computing device, comprising: a housing; a display arrangement positioned within said housing, said display arrangement including a display and a touch screen; and a device configured to generate a signal when some portion of said display arrangement is moved. 21. A hand-held electronic device, comprising: a touch screen; and a processing unit operatively connected to said touch screen, said processing unit concurrently receives a plurality of touch inputs from a user via said touch screen and discriminates a user requested action from the touch inputs, wherein said touch screen serves as the primary input means necessary to interact with said hand-held electronic device. 22. 
A hand-held electronic device as recited in claim 21, wherein said hand-held electronic device operates as one or more of a mobile phone, a PDA, a media player, a camera, a game player, a handtop, an Internet terminal, a GPS receiver, or a remote controller. 23. A method performed in a computing device having a display and a touch screen positioned over the display, the method comprising: detecting one or more touches; classifying the one or more touches as a primary touch or a secondary touch; filtering out the secondary touches; differentiating whether the primary touch is a light touch or a hard touch; initiating a control event if the primary touch is a light touch; and implementing a selection event if the primary touch is a hard touch.
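The method of claim 23 can be illustrated with a minimal sketch: classify touches as primary or secondary, filter out the secondary ones, then split the primary touch into "light" (control event) or "hard" (selection event) by a force threshold. The `Touch` structure, the threshold value, and the primary-touch criterion are all assumptions for illustration, not the patent's implementation.

```python
# Hypothetical sketch of the touch-classification method in claim 23.
# The Touch fields and HARD_TOUCH_THRESHOLD are assumed values.
from dataclasses import dataclass

@dataclass
class Touch:
    x: float
    y: float
    force: float      # normalized 0.0..1.0 from the force-sensitive display
    is_primary: bool  # e.g. first touch down, or largest contact area

HARD_TOUCH_THRESHOLD = 0.6  # assumed cutoff between light and hard touches

def handle_touches(touches):
    """Return the event to dispatch: 'control', 'selection', or None."""
    # Filter out secondary touches (e.g. a resting palm).
    primary = [t for t in touches if t.is_primary]
    if not primary:
        return None
    touch = primary[0]
    # Differentiate light vs. hard by exerted force.
    if touch.force >= HARD_TOUCH_THRESHOLD:
        return "selection"   # hard touch -> selection event
    return "control"         # light touch -> control event
```

A real device would derive `force` from the force-sensitive display of claims 15-17 rather than take it as a field.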
2,600
10,190
10,190
15,386,065
2,694
Disclosed herein is a display device including: a pixel array portion; a drive portion; and a power source wiring; the pixel array portion, at least a part of the drive portion configured to drive the pixel array portion, and the power source wiring forming a panel, the pixel array portion including scanning lines disposed in rows, signal lines disposed in columns, and pixels disposed in matrix in portions where the scanning lines and the signal lines cross each other, respectively, the drive portion including a scanner portion configured to drive the pixels in a line-sequential manner through the scanning lines, and a signal portion configured to supply a video signal to each of the signal lines in correspondence to the line-sequential drive, so that an image is displayed on the pixel array portion.
1. A display device comprising a pixel array and a control circuitry; the pixel array being disposed in a pixel array area and including: a plurality of first lines disposed in a first direction, a plurality of signal lines disposed in a second direction perpendicular to the first direction, a plurality of pixels disposed in a matrix form; the control circuitry including: a first scanner circuit disposed in a first peripheral area adjacent to the pixel array area in the first direction, and configured to drive the plurality of first lines by supplying a pulse-like output voltage, a signal circuit configured to drive the plurality of signal lines, a potential wiring configured to supply a predetermined voltage to the pixels; wherein at least a part of the potential wiring is disposed in a form of a multi-layer interconnection having at least a first wiring on a first layer and a second wiring on a second layer, the first wiring is disposed at least in the first peripheral area and in a second peripheral area which is adjacent to the pixel array area in the second direction, the first wiring extends along the second direction in the first peripheral area and along the first direction in the second peripheral area, the second wiring is disposed so as to surround the pixel array area, the first wiring and the second wiring are at least partially overlapping and electrically connected to each other in the first peripheral area. 2. The display device according to claim 1, wherein at least one of the plurality of pixels includes a light emitting element, and at least one of the plurality of first lines is configured to control a current supply to the light emitting element in corresponding one of the plurality of pixels. 3. 
The display device according to claim 2, wherein at least one of the plurality of pixels further includes a drive element, the light emitting element being responsive to the drive element, and at least one of the plurality of first lines is configured to control a current supply to the light emitting element via the drive element in corresponding one of the plurality of pixels. 4. The display device according to claim 3, wherein at least one of the plurality of first lines is connected to a current node of the drive element in corresponding one of the plurality of pixels. 5. The display device according to claim 1, wherein the pixel array area further includes a plurality of second lines disposed in the first direction, and the control circuitry further includes a second scanner circuit configured to control the plurality of second lines. 6. The display device according to claim 5, wherein at least one of the plurality of pixels includes a sampling element connected to corresponding one of the signal lines, and at least one of the plurality of second lines is configured to control the sampling element in corresponding one of the pixels. 7. The display device according to claim 1, wherein at least one of the pixels includes: a capacitive element, a sampling element configured to sample a voltage signal from corresponding one of the plurality of signal lines to the capacitive element, a drive element responsive to the capacitive element, and a light emitting element responsive to the drive element. 8. The display device according to claim 7, wherein the drive element is configured to supply drive current to the light emitting element in response to the voltage signal stored in the capacitive element. 9. 
The display device according to claim 8, wherein at least one of the plurality of pixels is configured to execute a correction operation for compensating a dependence of the drive current on a property of the drive element, and a timing of the correction operation is controlled by the pulse-like output voltage. 10. The display device according to claim 1, wherein an area of the first wiring is larger than an area of the second wiring. 11. The display device according to claim 10, wherein the first layer is disposed above the second layer, the first layer and the second layer both being disposed above a substrate. 12. The display device according to claim 11, wherein an insulating layer is arranged between the first layer and the second layer. 13. An organic EL display device comprising the display device according to claim 1.
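The correction operation of claim 9 compensates the drive current's dependence on a property of the drive element, typically its threshold voltage. A rough numerical sketch (not from the patent text, which does not give equations): with the common square-law model I = (β/2)(Vgs − Vth)², storing Vth on the pixel capacitor during the correction phase makes the effective Vgs equal to Vsig + Vth, so Vth cancels and two pixels with different thresholds emit the same current for the same video signal. The transconductance value is hypothetical.

```python
# Illustrative Vth-compensation sketch for the claim-9 correction
# operation. BETA and the voltages are assumed example values.
BETA = 1e-4  # assumed transconductance parameter of the drive element

def drive_current_uncorrected(vsig, vth):
    """Square-law drive current with no correction: depends on Vth."""
    vgs = vsig
    return 0.5 * BETA * max(vgs - vth, 0.0) ** 2

def drive_current_corrected(vsig, vth):
    """After the correction phase stores Vth on the pixel capacitor,
    the effective gate-source voltage is Vsig + Vth, so Vth cancels."""
    vgs = vsig + vth
    return 0.5 * BETA * max(vgs - vth, 0.0) ** 2

# Two pixels with different Vth now match for the same video signal:
i1 = drive_current_corrected(3.0, 1.0)
i2 = drive_current_corrected(3.0, 1.8)
```

The claim only requires that the correction timing be controlled by the scanner's pulse-like output voltage; the cancellation mechanism above is one standard realization.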
2,600
10,191
10,191
15,864,339
2,683
Upon receiving a signal from a remote control device, a device identifies command data usable to communicate with a consumer device. The signal contains an indication of a pressed key, which corresponds to a function of the consumer device. The device generates a command signal having the command data for transmission to the consumer device to control the selected function of the consumer device using a format recognizable by the consumer device.
1. A first device for transmitting a command to control a functional operation of a second device, the first device comprising: a receiver; a transmitter; a processing device coupled to the receiver and the transmitter; and a memory storing a plurality of command data and instructions executable by the processing device; wherein the instructions cause the processing device to transmit a command signal to the second device, via use of the transmitter, in response to receiving, via use of the receiver, an activation signal having data indicative of an input element of a third device that has been activated by a user, the transmitted command signal has a format that is recognizable by the second device, and the formatted command signal comprises one of the plurality of command data which is selected from the memory via use of the received data indicative of the input element of the third device that has been activated by the user. 2. The first device as recited in claim 1, wherein the plurality of command data comprises a first command data corresponding to a volume up function of the second device and a second command data corresponding to a volume down function of the second device. 3. The first device as recited in claim 1, wherein the receiver comprises an RF receiver. 4. The first device as recited in claim 1, wherein the transmitter comprises an IR transmitter. 5. The first device as recited in claim 1, wherein the formatted command signal is transmitted from the first device to the second device via a wired connection between the first device and the second device. 6. The first device as recited in claim 1, wherein the formatted command signal is transmitted from the first device to the second device via a wireless connection between the first device and the second device. 7. 
The first device as recited in claim 1, wherein the first device comprises a further receiver for receiving a media from a fourth device in communication with the first device and wherein the first device is coupled to the second device to provide the media to the second device for display on a display device associated with the second device.
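The relay behavior of claim 1 can be sketched as a lookup-and-reformat step: the first device receives an activation signal naming the key pressed on the remote (the third device), selects the matching command data from memory, and emits it in a frame the controlled (second) device recognizes. The key names, command codes, device address, and frame layout below are all assumptions for illustration.

```python
# Hypothetical sketch of claim 1's command relay. The table, address,
# and [address, command, checksum] frame are assumed, not from the patent.
COMMAND_DATA = {            # memory storing a plurality of command data
    "VOL_UP":   0x10,       # volume up function of the second device
    "VOL_DOWN": 0x11,       # volume down function of the second device
    "POWER":    0x01,
}

def format_command(key_pressed, device_address=0x42):
    """Build the formatted command signal for the second device,
    given the key indicated by the received activation signal."""
    code = COMMAND_DATA.get(key_pressed)
    if code is None:
        return None  # key has no mapped function on the second device
    # Assumed frame layout: [address, command, checksum].
    checksum = (device_address + code) & 0xFF
    return bytes([device_address, code, checksum])
```

In the claimed device the activation signal would arrive over RF and the frame would go out over IR or a wired link; this sketch only shows the memory lookup and reformatting in between.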
2,600
10,192
10,192
15,796,214
2,689
An unmanned aerial vehicle (UAV) has: a body for carrying an article; at least one rotor; and a light source for generating a light beam to indicate the landing zone for the UAV. The UAV, in use, is flown to a desired location, which might be the site of an emergency with no defined landing zone. The UAV descends and, while descending, the light source is operated to illuminate and to define a landing zone. The UAV can be provided with lights and an audio source to warn and advise bystanders that the UAV is landing and to stand clear of the landing zone.
1. An unmanned aerial vehicle comprising: a body for carrying an article; at least one rotor; and a light source for generating a light beam to define the periphery of a landing zone for the unmanned aerial vehicle. 2. An unmanned aerial vehicle as claimed in claim 1, wherein the light source provides a solid beam. 3. An unmanned aerial vehicle as claimed in claim 1, wherein the light source provides a hollow cone beam. 4. An unmanned aerial vehicle as claimed in claim 1, 2 or 3, wherein the landing zone defined by the light beam is one of a circle, a quadrilateral and a hexagon. 5. An unmanned aerial vehicle as claimed in claim 1, wherein the light beam has a fixed cone angle. 6. An unmanned aerial vehicle as claimed in claim 1, wherein the light beam has a variable cone angle, whereby the landing zone indicated by the light source remains of substantially constant area as the unmanned aerial vehicle descends. 7. An unmanned aerial vehicle as claimed in claim 1, including at least one sensor to detect unwanted objects in the landing zone. 8. An unmanned aerial vehicle as claimed in claim 7, wherein the at least one sensor comprises at least one of a downwardly facing video camera, a motion sensor directed at the landing zone and an infrared thermography sensor or camera. 9. An unmanned aerial vehicle as claimed in claim 1, including at least one warning device for warning bystanders that the unmanned aerial vehicle will be landing. 10. An unmanned aerial vehicle as claimed in claim 9, wherein the warning device comprises at least one of lights and an audio source. 11. An unmanned aerial vehicle as claimed in claim 10, wherein the audio source includes loudspeakers for transmitting at least one of a warning sound and messages, optionally including an advisory message for bystanders to stay out of the landing zone defined by the light source. 12. An unmanned aerial vehicle as claimed in claim 1, wherein the unmanned aerial vehicle includes at least one rotor. 
13. An unmanned aerial vehicle as claimed in claim 12, wherein the unmanned aerial vehicle includes a plurality of rotors. 14. A method of landing an unmanned aerial vehicle at a location, the method comprising: providing an unmanned aerial vehicle comprising: a body for carrying an article; at least one rotor; and a light source for generating a light beam to indicate the landing zone for the unmanned aerial vehicle; flying the unmanned aerial vehicle to a location; causing the unmanned aerial vehicle to descend; and while the unmanned aerial vehicle is descending, operating the light source to illuminate and to define the periphery of a landing zone. 15. A method as claimed in claim 14, wherein the light source provides a solid beam. 16. A method as claimed in claim 14, wherein the light source provides a hollow cone beam. 17. A method as claimed in claim 14, wherein the light beam defines the landing zone as one of a circle, a quadrilateral and a hexagon. 18. A method as claimed in claim 14, wherein the light beam provides a fixed cone angle. 19. A method as claimed in claim 14, wherein the light beam provides a variable cone angle, whereby the landing zone indicated by the light source remains of substantially constant area as the unmanned aerial vehicle descends. 20. A method as claimed in claim 14, including detecting unwanted objects in the landing zone with at least one sensor. 21. A method as claimed in claim 20, wherein the at least one sensor comprises at least one of a downwardly facing video camera, a motion sensor directed at the landing zone and an infrared thermography sensor or camera. 22. A method as claimed in claim 14, including providing, with a warning device, at least one warning to bystanders that the unmanned aerial vehicle will be landing. 23. A method as claimed in claim 22, wherein the warning device comprises at least one of lights and an audio source. 24. 
A method as claimed in claim 23, including transmitting at least one of a warning sound and messages, and optionally including transmitting an advisory message advising bystanders to stay out of the landing zone defined by the light source.
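The variable cone angle of claims 6 and 19 follows from simple geometry: to keep the projected landing circle at a constant radius r while the vehicle descends from altitude h, the beam's half-angle must widen as theta = atan(r / h). A minimal sketch, with illustrative numbers only (the patent does not specify values):

```python
# Geometric sketch of the constant-area landing zone under a variable
# cone angle (claims 6 and 19). All numeric values are illustrative.
import math

def beam_half_angle(zone_radius_m, altitude_m):
    """Half-angle (radians) that projects a circle of the given radius
    onto the ground from the given altitude."""
    return math.atan2(zone_radius_m, altitude_m)

def projected_radius(half_angle_rad, altitude_m):
    """Radius of the illuminated circle for a given half-angle."""
    return altitude_m * math.tan(half_angle_rad)

# As the UAV descends, the commanded angle grows but the zone stays fixed:
for h in (20.0, 10.0, 5.0):
    theta = beam_half_angle(2.0, h)
    assert abs(projected_radius(theta, h) - 2.0) < 1e-9
```

A fixed cone angle (claims 5 and 18) is the degenerate case where `theta` is constant, so the illuminated circle shrinks linearly with altitude instead.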
2,600
10,193
10,193
15,520,094
2,661
A method includes visually presenting image data (404) in a main window (402) of a display monitor (120). The image data is processed with a first processing algorithm. The method further includes identifying tissue of interest in the image data displayed in the main window. The method further includes generating, with the processor (124), a sub-viewport (502) for the tissue of interest by determining at least one of: a location of the sub-viewport; a size of the sub-viewport; a shape of the sub-viewport; or an orientation of the sub-viewport. The method further includes visually presenting the sub-viewport over a sub-region of the image data in the main window based on one or more of the location, the size, the shape, or the orientation.
1. A method, comprising: visually presenting image data in a main window of a display monitor, wherein the image data is processed with a first processing algorithm; identifying, with a processor, tissue of interest in the image data displayed in the main window; generating, with the processor, a sub-viewport for the tissue of interest by determining at least one of: a location of the sub-viewport; a size of the sub-viewport; a shape of the sub-viewport; or an orientation of the sub-viewport; and visually presenting, with the processor, the sub-viewport over a sub-region of the image data in the main window based on one or more of the location, the size, the shape, or the orientation. 2. The method of claim 1, further comprising: receiving a first input indicating the tissue of interest in the image data, wherein the first input is indicative of a user selected tissue of interest; and determining the location of the sub-viewport based on the first input. 3. The method of claim 1, further comprising: receiving a first input indicating the tissue of interest in the image data, wherein the first input is indicative of a processor selected tissue of interest; and determining the location of the sub-viewport based on the first input. 4. The method of claim 1, wherein determining the size of the sub-viewport comprises: determining scale spaces of the image data; searching for local minima and maxima values of the tissue of interest across the scale spaces; identifying a local minima and a local maxima for a scale space; and multiplying the local minima and the local maxima by a predefined scale factor. 5. The method of claim 4, wherein a scale space is determined by convolving a variable-scale Gaussian function with the image data. 6. 
The method of claim 1, wherein determining the shape of the sub-viewport comprises: scaling down the image data to the scale of the local minima and the local maxima; calculating a structure tensor which identifies predominant directions of a gradient in a specified neighborhood of a point and a degree to which those directions are coherent; calculating eigenvalues and corresponding eigenvectors of the structure tensor matrix; and setting a ratio between sides of the sub-viewport to a ratio between the square roots of the eigenvalues. 7. The method of claim 6, further comprising: cropping the ratio by at least one of a predefined upper threshold or a predefined lower threshold. 8. The method of claim 6, wherein determining the orientation of the sub-viewport comprises: setting the orientation of a major side of the sub-viewport to be the orientation of the eigenvector corresponding to a smallest eigenvalue of the structure tensor. 9. The method of claim 1, further comprising: receiving a signal indicating movement of the sub-viewport through the image data; and updating, with the processor, at least one of the location, the size, the shape, or the orientation of the sub-viewport based on the structure of interest at the location of the sub-viewport in the image data. 10. The method of claim 1, further comprising: receiving a toggle signal to remove the sub-viewport; and removing the visual presentation of the sub-viewport from the main window. 11. The method of claim 1, further comprising: receiving a toggle signal to hide the sub-viewport; and rendering the sub-viewport transparent. 12. The method of claim 1, wherein the image data is one of a 2D image, 3D volumetric image data or 4D image data. 13. The method of claim 12, further comprising: dynamically adjusting at least one of the location, the size, the shape and the orientation of the sub-viewport based on movement of surrounding structure. 14. 
A computing system, comprising: a computer processor configured to execute instructions stored in a computer readable storage medium which causes the computer processor to: visually present image data in a main window of a display monitor, wherein the image data is processed with a first processing algorithm; identify tissue of interest in the image data displayed in the main window; generate a sub-viewport for the tissue of interest by determining at least one of: a location of the sub-viewport; a size of the sub-viewport; a shape of the sub-viewport; or an orientation of the sub-viewport; and visually present the sub-viewport over a sub-region of the image data in the main window based on one or more of the location, the size, the shape, or the orientation. 15. The computing system of claim 14, wherein the processor determines the size of the sub-viewport by determining scale spaces of the image data; searching for local minima and maxima values of the tissue of interest across the scale spaces; identifying a local minima and a local maxima for a scale space; and multiplying the local minima and the local maxima by a predefined scale factor. 16. The computing system of claim 15, wherein the processor determines the shape of the sub-viewport by scaling down the image data to the scale of the local minima and the local maxima; calculating a structure tensor which identifies predominant directions of a gradient in a specified neighborhood of a point and a degree to which those directions are coherent; calculating eigenvalues and corresponding eigenvectors of the structure tensor matrix; and setting a ratio between sides of the sub-viewport to a ratio between the square roots of the eigenvalues. 17. The computing system of claim 16, wherein the image data is one of a 2D image, 3D volumetric image data or 4D image data. 18. The computing system of claim 14, wherein the computing system is part of a console of an imaging system. 19. 
The computing system of claim 14, wherein the computing system is an apparatus separate and remote from an imaging system. 20. A computer readable storage medium encoded with one or more computer executable instructions, which, when executed by a processor of a computing system, causes the processor to: visually present image data in a main window of a display monitor wherein the image data is processed with a first processing algorithm; identify tissue of interest in the image data displayed in the main window; generate a sub-viewport for the tissue of interest by determining at least one of: a location of the sub-viewport; a size of the sub-viewport; a shape of the sub-viewport; or an orientation of the sub-viewport; and visually present the sub-viewport over a sub-region of the image data in the main window based on one or more of the location, the size, the shape, or the orientation.
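The shape and orientation determination recited in claims 6 and 8 above can be sketched in NumPy. This is a minimal illustration only: the function name, the use of a whole-patch average as the structure-tensor neighborhood, and the gradient operator are assumptions, not part of the claims.

```python
import numpy as np

def sub_viewport_shape(image):
    """Estimate the sub-viewport side ratio (claim 6) and orientation
    (claim 8) from a structure tensor. Illustrative sketch only."""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    # Structure tensor: outer products of the gradient, averaged over
    # the neighborhood (here, the whole patch for simplicity).
    J = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                  [np.mean(gx * gy), np.mean(gy * gy)]])
    eigvals, eigvecs = np.linalg.eigh(J)            # ascending eigenvalues
    # Ratio between sides = ratio of the square roots of the eigenvalues.
    ratio = float(np.sqrt(eigvals[1] / max(eigvals[0], 1e-12)))
    # Major side aligned with the eigenvector of the smallest eigenvalue.
    v = eigvecs[:, 0]
    orientation = float(np.arctan2(v[1], v[0]))
    return ratio, orientation
```

For an anisotropic patch (e.g. horizontal stripes), the ratio exceeds 1 and the major side aligns with the stripe direction, matching the behavior the claims describe.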
A method includes visually presenting image data ( 404 ) in a main window ( 402 ) of a display monitor ( 120 ). The image data is processed with a first processing algorithm. The method further includes identifying tissue of interest in the image data displayed in the main window. The method further includes generating, with the processor ( 124 ), a sub-viewport ( 502 ) for the tissue of interest by determining at least one of: a location of the sub-viewport; a size of the sub-viewport; a shape of the sub-viewport; or an orientation of the sub-viewport. The method further includes visually presenting the sub-viewport over a sub-region of the image data in the main window based on one or more of the location, the size, the shape, or the orientation.
2,600
10,194
10,194
13,174,642
2,613
A system for 3-dimensional animation includes a computer apparatus, a means for display in communication with the computer apparatus, and a means for storage in communication with the computer apparatus. The means for storage is disposed to store data representing a 3D animation, the means for display is disposed to display a representation of the 3D animation, and the computer apparatus is configured to perform a method of 3D animation. The method includes setting an inter-axial distance between logical representations of two cameras, the inter-axial distance being configured to produce a desired 3D effect for a target audience, and creating a stereoscopic frame set representing the 3D animation using the logical representations of the two cameras.
1. A system for three-dimensional (3D) animation, comprising: a means for storage; a computer apparatus in communication with the means for storage; and a means for display in communication with the computer apparatus; wherein, the means for storage is disposed to store data representing a 3D animation; the means for display is disposed to display a representation of the 3D animation; and the computer apparatus is configured to perform a method, comprising: setting an inter-axial distance between logical representations of two cameras, the inter-axial distance being configured to produce a desired 3D effect for a target audience; and creating a stereoscopic frame set representing the 3D animation using the logical representations of the two cameras. 2. The system of claim 1, wherein creating the stereoscopic frame set comprises: creating a 2D animation based on a first of the logical representations of the two cameras, the 2D animation including a first plurality of frames representing a first viewing angle; and creating a second plurality of frames based on a second of the logical representations of the two cameras representing a second viewing angle, the second plurality of frames being paired with the first plurality of frames to create the stereoscopic frame set representing the 3D animation. 3. The system of claim 1, wherein setting the inter-axial distance comprises: logically separating renderings captured by the logical representations of the two cameras by a distance equivalent to the inter-axial distance. 4. The system of claim 3, wherein the inter-axial distance is based on an average ocular distance of the target audience. 5. The system of claim 1, wherein the inter-axial distance is based on an average ocular distance of the target audience. 6. The system of claim 1, wherein the means for storage is disposed to store information related to the 3D animation. 7. 
The system of claim 6, wherein the information related to the 3D animation includes an inter-axial distance associated with the stereoscopic frame set. 8. The system of claim 1, wherein the means for display is a passive 3D display or an active 3D display. 9. The system of claim 8, wherein the means for display is a passive 3D display comprising: an auto-stereoscopic display panel; a lenticular screen panel; or a polarized display panel. 10. The system of claim 8, wherein the means for display is an active 3D display comprising: an LCD shuttering system. 11. A method for 3D animation, comprising: setting an inter-axial distance between logical representations of two cameras at a computer system, the inter-axial distance being configured to produce a desired 3D effect for a target audience; and creating a stereoscopic frame set representing the 3D animation using the logical representations of the two cameras. 12. The method of claim 11, wherein creating the stereoscopic frame set comprises: creating a 2D animation based on a first of the logical representations of the two cameras, the 2D animation including a first plurality of frames representing a first viewing angle; and creating a second plurality of frames based on a second of the logical representations of the two cameras representing a second viewing angle, the second plurality of frames being paired with the first plurality of frames to create the stereoscopic frame set representing the 3D animation. 13. The method of claim 11, wherein setting the inter-axial distance comprises: logically separating renderings captured by the logical representations of the two cameras by a distance equivalent to the inter-axial distance. 14. The method of claim 13, wherein the inter-axial distance is based on an average ocular distance of the target audience. 15. The method of claim 11, wherein the inter-axial distance is based on an average ocular distance of the target audience. 16. 
The method of claim 11, further comprising storing information related to the 3D animation. 17. The method of claim 16, wherein the information related to the 3D animation includes an inter-axial distance associated with the stereoscopic frame set. 18. A computer program product for 3D animation, comprising a tangible storage medium readable by a computer processor and storing instructions thereon that, when executed by the computer processor, direct the computer processor to perform a method, comprising: setting an inter-axial distance between logical representations of two cameras at a computer system, the inter-axial distance being configured to produce a desired 3D effect for a target audience; and creating a stereoscopic frame set representing the 3D animation using the logical representations of the two cameras. 19. The computer program product of claim 18, wherein creating the stereoscopic frame set comprises: creating a 2D animation based on a first of the logical representations of the two cameras, the 2D animation including a first plurality of frames representing a first viewing angle; and creating a second plurality of frames based on a second of the logical representations of the two cameras representing a second viewing angle, the second plurality of frames being paired with the first plurality of frames to create the stereoscopic frame set representing the 3D animation. 20. The computer program product of claim 19, wherein the inter-axial distance is based on an average ocular distance of the target audience.
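The inter-axial camera separation of claims 3 and 11 and the frame pairing of claim 2 can be illustrated with a short sketch. All names here are hypothetical, and the 0.065 m default is merely a commonly cited approximation of average adult ocular distance; claims 4-5 leave the exact value to the target audience.

```python
import numpy as np

def stereo_camera_positions(center, forward, up, interaxial=0.065):
    """Offset two logical cameras by the inter-axial distance (claim 3).
    The 0.065 m default is an illustrative average ocular distance."""
    center = np.asarray(center, dtype=float)
    right = np.cross(np.asarray(forward, dtype=float),
                     np.asarray(up, dtype=float))
    right /= np.linalg.norm(right)          # unit vector across the view
    half = 0.5 * interaxial
    return center - half * right, center + half * right

def stereoscopic_frame_set(frames, render, interaxial=0.065):
    """Pair left/right renderings into a stereoscopic frame set (claim 2).
    `render(frame, camera_position)` is a caller-supplied renderer."""
    left, right = stereo_camera_positions([0.0, 0.0, 0.0],
                                          [0.0, 0.0, -1.0],
                                          [0.0, 1.0, 0.0], interaxial)
    return [(render(f, left), render(f, right)) for f in frames]
```

The two camera positions end up exactly one inter-axial distance apart, perpendicular to the viewing direction, which is what "logically separating renderings" in claim 13 amounts to geometrically.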
2,600
10,195
10,195
14,707,513
2,613
A waveguide apparatus includes a planar waveguide and at least one diffractive optical element (DOE) that provides a plurality of optical paths between an exterior and interior of the planar waveguide. A phase profile of the DOE may combine a linear diffraction grating with a circular lens, to shape a wave front and produce beams with desired focus. Waveguide apparatuses may be assembled to create multiple focal planes. The DOE may have a low diffraction efficiency, and planar waveguides may be transparent when viewed normally, allowing passage of light from an ambient environment (e.g., real world) useful in AR systems. Light may be returned for temporally sequential passes through the planar waveguide. The DOE(s) may be fixed or may have dynamically adjustable characteristics. An optical coupler system may couple images to the waveguide apparatus from a projector, for instance a biaxially scanning cantilevered optical fiber tip.
1. A method for generating a retail experience, comprising: recognizing a location of a user in a retail establishment; retrieving data corresponding to the retail establishment; generating virtual content relating to the retail establishment based on the retrieved data; creating a virtual user interface in a field of view of the user; and displaying the virtual content on the virtual user interface, while the user is engaged in retail activity in the retail establishment. 2. The method of claim 1, wherein the retrieved data comprises a set of map points corresponding to the retail establishment, and wherein the virtual user interface, when viewed by the user, appears to be stationary at the set of map points. 3. The method of claim 1, wherein the location of the user is recognized using radio frequency identification transponders and communications. 4. The method of claim 1, wherein the virtual content is selected from the group consisting of a virtual character, a virtual coupon, a game based on the user location, a list of promotional items, nutritional information, metadata relating to an item, a celebrity appearance, a cross-selling advertisement, information from a person known to the user, and an electronic book. 5. The method of claim 1, further comprising: retrieving user data corresponding to the user; and generating the virtual content based on both the retrieved data and the retrieved user data. 6. The method of claim 5, wherein the virtual content is selected from the group consisting of a virtual grocery list, a virtual coupon book, a virtual recipe book, a list of various ingredients in the user's home, and a virtual recipe builder. 7. The method of claim 1, further comprising: receiving user input; generating additional virtual content based on the user input; and displaying the additional virtual content on the virtual user interface, while the user is engaged in the retail activity in the retail establishment. 8. 
The method of claim 7, wherein the user input is selected from the group consisting of a gesture, visual data, audio data, sensory data, a direct command, a voice command, eye tracking, and selection of a physical button. 9. The method of claim 5, wherein the virtual content is selected from the group consisting of a running total cost, a smart virtual grocery list, an indicator of items proximate the user location, and a virtual payment system. 10. The method of claim 1, further comprising sending the generated virtual content to a user device for display. 11. The method of claim 1 implemented as a system having means for implementing the method steps. 12. The method of claim 1 implemented as a computer program product comprising a computer-usable storage medium having executable code to execute the method steps.
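The method steps of claims 1 and 5 above (recognize the user's location, retrieve establishment data, generate virtual content, and display it on a virtual user interface) can be sketched as follows. The data model (plain dictionaries keyed by location) and every name in this sketch are illustrative assumptions, not an actual API.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualUI:
    """Stand-in for the virtual user interface of claim 1 (hypothetical)."""
    content: list = field(default_factory=list)

    def display(self, item):
        self.content.append(item)

def generate_retail_experience(user_location, store_data, user_data=None):
    """Sketch of claims 1 and 5: retrieve establishment data for the
    recognized location, generate virtual content, and display it."""
    data = store_data.get(user_location, {})       # retrieve establishment data
    content = list(data.get("promotions", []))     # content from store data (claim 1)
    if user_data:                                  # personalize (claim 5)
        stock = data.get("stock", [])
        content += [i for i in user_data.get("grocery_list", []) if i in stock]
    ui = VirtualUI()                               # create UI in the field of view
    for item in content:
        ui.display(item)                           # display while the user shops
    return ui
```

In this toy model, a user standing in a given aisle sees that aisle's promotions plus any items from their grocery list that the store stocks, mirroring the claim 5 combination of establishment data and user data.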
2,600
10,196
10,196
14,546,010
2,653
Systems and methods may provide for sending a sound wave signal and measuring a body conduction characteristic of the sound wave signal. Additionally, a user authentication may be performed based at least in part on the body conduction characteristic. In one example, the body conduction characteristic includes one or more of a timing, a frequency or an amplitude of the sound wave signal after passing through one or more of bone or tissue.
1. A system comprising: an enclosure including a wearable form factor; a tissue conduction speaker; a sensor including one or more of a tissue conduction microphone or an accelerometer; an outbound signal controller to send, via the tissue conduction speaker, a sound wave signal; an inbound signal controller coupled to the sensor, the inbound signal controller to measure a body conduction characteristic of the sound wave signal; and an authenticator to perform a user authentication based at least in part on the body conduction characteristic. 2. The system of claim 1, wherein the body conduction characteristic is to include one or more of a timing, a frequency or an amplitude of the sound wave signal after passing through one or more of bone or tissue. 3. The system of claim 1, wherein the authenticator further includes one or more of: a presence detector to detect a user; or a recognizer to identify the user. 4. The system of claim 1, further including a supplemental factor component to perform the user authentication further based on an additional authentication factor including one or more of voice input, gesture input or textual input. 5. The system of claim 1, wherein the inbound signal controller includes a sensor interface to capture, via the sensor, a measurement signal associated with the sound wave signal and compare the measurement signal to a training signal. 6. The system of claim 1, wherein the outbound signal controller is to configure the sound wave signal based on an expected user. 7. The system of claim 1, wherein the enclosure includes one of a single-part enclosure or a multi-part enclosure and wherein the wearable form factor includes one or more of a headwear form factor, a wrist wear form factor or a hand wear form factor. 8. 
An apparatus comprising: an outbound signal controller to send a sound wave signal; an inbound signal controller to measure a body conduction characteristic of the sound wave signal; and an authenticator to perform a user authentication based at least in part on the body conduction characteristic. 9. The apparatus of claim 8, wherein the body conduction characteristic is to include one or more of a timing, a frequency or an amplitude of the sound wave signal after passing through one or more of bone or tissue. 10. The apparatus of claim 8, wherein the authenticator further includes one or more of: a presence detector to detect a user; or a recognizer to identify the user. 11. The apparatus of claim 8, further including a supplemental factor component to perform the user authentication further based on an additional authentication factor including one or more of voice input, gesture input or textual input. 12. The apparatus of claim 8, wherein the inbound signal controller includes a sensor interface to capture, via one or more of a tissue conduction microphone or an accelerometer, a measurement signal associated with the sound wave signal and compare the measurement signal to a training signal, and wherein the outbound signal controller is to send the sound wave signal via a tissue conduction speaker. 13. The apparatus of claim 8, wherein the outbound signal controller is to configure the sound wave signal based on an expected user. 14. A method comprising: sending a sound wave signal; measuring a body conduction characteristic of the sound wave signal; and performing a user authentication based at least in part on the body conduction characteristic. 15. The method of claim 14, wherein the body conduction characteristic includes one or more of a timing, a frequency or an amplitude of the sound wave signal after passing through one or more of bone or tissue. 16. 
The method of claim 14, wherein performing the user authentication includes one or more of detecting a user or identifying the user. 17. The method of claim 14, wherein the user authentication is performed further based on an additional authentication factor including one or more of voice input, gesture input or textual input. 18. The method of claim 14, further including: capturing, via one or more of a tissue conduction microphone or an accelerometer, a measurement signal associated with the sound wave signal; and comparing the measurement signal to a training signal, wherein the sound wave signal is sent via a tissue conduction speaker. 19. The method of claim 14, further including configuring the sound wave signal based on an expected user. 20. At least one computer readable storage medium comprising a set of instructions which, when executed by a computing device, cause the computing device to: send a sound wave signal; measure a body conduction characteristic of the sound wave signal; and perform a user authentication based at least in part on the body conduction characteristic. 21. The at least one computer readable storage medium of claim 20, wherein the body conduction characteristic is to include one or more of a timing, a frequency or an amplitude of the sound wave signal after passing through one or more of bone or tissue. 22. The at least one computer readable storage medium of claim 20, wherein the instructions, when executed, cause a computing device to one or more of: detect a user to perform the user authentication; or identify the user to perform the user authentication. 23. The at least one computer readable storage medium of claim 20, wherein the user authentication is to be performed further based on an additional authentication factor including one or more of voice input, gesture input or textual input. 24. 
The at least one computer readable storage medium of claim 20, wherein the instructions, when executed, cause a computing device to: capture, via one or more of a tissue conduction microphone or an accelerometer, a measurement signal associated with the sound wave signal; and compare the measurement signal to a training signal, wherein the sound wave signal is to be sent via a tissue conduction speaker. 25. The at least one computer readable storage medium of claim 20, wherein the instructions, when executed, cause a computing device to configure the sound wave signal based on an expected user.
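Claims 5, 12, 18, and 24 above all turn on the same operation: capturing a measurement signal of the body-conducted sound wave and comparing it to a stored training signal. A minimal sketch of one way such a comparison could work is given below, using normalized cross-correlation as the similarity metric; the metric, the threshold, and all names (`conduction_score`, `authenticate`) are illustrative assumptions, not drawn from the claims.

```python
import numpy as np

def conduction_score(measurement, template):
    """Peak normalized cross-correlation between a measured
    bone/tissue-conducted signal and a stored training signal.
    Illustrative sketch only; the claims do not specify a metric."""
    m = (measurement - measurement.mean()) / (measurement.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    # Full cross-correlation, normalized so a perfect match scores ~1.0.
    corr = np.correlate(m, t, mode="full") / len(t)
    return float(corr.max())

def authenticate(measurement, template, threshold=0.8):
    """Accept the user if the body conduction characteristic matches
    the training signal closely enough (threshold is an assumption)."""
    return conduction_score(measurement, template) >= threshold
```

A matching signal scores near 1.0 and passes; an unrelated signal scores lower. A real implementation would compare timing, frequency, and amplitude features per claim 2 rather than a single scalar score.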
2,600
10,197
10,197
14,438,229
2,661
The present invention relates to image processing for enhancing ultrasound images. In order to provide image data showing the current situation, for example in a region of interest of a patient, an image processing device (10) for enhancing ultrasound images is provided that comprises an image data input unit (12), a central processing unit (14), and a display unit (16). The image data input unit is configured to provide an ultrasound image of a region of interest of an object, and to provide an X-ray image of the region of interest of the object. The central processing unit is configured to select a predetermined image area in the X-ray image, to register the ultrasound image and the X-ray image, to detect the predetermined area in the ultrasound image based on the registered selected predetermined image area, and to highlight at least a part of the detected area in the ultrasound image to generate a boosted ultrasound image. The display unit is configured to provide the boosted ultrasound image as guiding information on a display area (18).
1. An image processing device for enhancing ultrasound images, comprising: an image data input unit; a central processing unit; and a display unit; wherein the image data input unit is configured to provide an ultrasound image of a region of interest of an object; and to provide an X-ray image of the region of interest of the object; wherein the central processing unit is configured to select at least one target object in the X-ray image; to register the ultrasound image and the X-ray image; to detect the at least one target object in the ultrasound image based on the registration; and to generate a boosted ultrasound image in which at least an area of the detected target object in the ultrasound image is highlighted; and wherein the display unit is configured to provide the boosted ultrasound image as guiding information on a display area. 2. Image processing device according to claim 1, wherein the central processing unit is configured to project the selected area into the ultrasound image; and to detect the predetermined area in the ultrasound image based on the projected area. 3. (canceled) 4. Image processing device according to claim 1, wherein the central processing unit is configured to project the selected area as an outline; and to select objects in the ultrasound image that fit into the projected outline. 5. A medical imaging system, comprising: an ultrasound imaging device; an X-ray imaging device; and an image processing device; wherein the image processing device is provided according to claim 1; wherein the ultrasound imaging device is configured to acquire and provide an ultrasound image of a region of interest of an object; and wherein the X-ray imaging device is configured to acquire and provide an X-ray image of the region of interest of the object. 6. 
System according to claim 5, wherein the ultrasound imaging device comprises a probe that is used for acquiring the ultrasound image; and wherein a registration unit is provided, which is configured to register the ultrasound probe in the X-ray image. 7. System according to claim 5, wherein the X-ray imaging device and the ultrasound imaging device are configured to acquire the images simultaneously. 8. System according to claim 5, wherein the ultrasound imaging device is configured to acquire a first sequence of ultrasound images; and the X-ray imaging device is configured to acquire a second sequence of X-ray images. 9. A method for enhancing ultrasound images, comprising the following steps: a) providing an ultrasound image of a region of interest of an object; b) providing an X-ray image of the region of interest of the object; c) selecting at least one target object in the X-ray image; d) registering the ultrasound image and the X-ray image; e) detecting the at least one target object in the ultrasound image based on the registration; and f) highlighting at least an area of the detected target object in the ultrasound image, thereby generating a boosted ultrasound image. 10. (canceled) 11. Method according to claim 9, wherein the predetermined image area is an interventional tool, which is detected and tracked automatically for the selecting in step c). 12. Method according to claim 9, wherein in step d), the selected area is projected as an outline; and wherein in step e), objects are selected in the ultrasound image that fit into the projected outline. 13. 
Method according to claim 9, wherein: in step a), a first sequence of ultrasound images of the region of interest is provided; in step b), a second sequence of X-ray images of the region of interest is provided; in step c), the at least one target object is detected in one of the X-ray images and tracked in the other X-ray images; in step d), the first sequence of ultrasound images is registered with the second sequence of X-ray images; in step e), the at least one target object is detected in the first sequence of ultrasound images based on the registration; and in step f), the at least one target object is highlighted in the first sequence of ultrasound images. 14. A computer program element for controlling an image processing device according to claim 1, or a medical imaging system, which, when being executed by a processing unit, is adapted to perform the method steps. 15. A computer readable medium having stored the program element of claim 14. 16. System according to claim 1, further comprising an interventional tool or device, wherein the target object corresponds to said interventional tool or device.
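The "boosted ultrasound image" of claim 1, in which the area of the detected target object is highlighted, can be illustrated with a toy routine. The sketch assumes a binary mask has already been produced by registering the X-ray selection onto the ultrasound frame, and uses a simple multiplicative intensity gain as the highlight; the function name, `gain` parameter, and linear boost are illustrative assumptions, since the claims only require that the detected area be highlighted.

```python
import numpy as np

def boost_ultrasound(us_image, target_mask, gain=1.8):
    """Highlight the detected target area of an 8-bit ultrasound frame
    by scaling its intensities. `target_mask` is a boolean mask assumed
    to come from the X-ray-to-ultrasound registration step (claim 1(d)-(e));
    the linear gain is an assumption, not the patent's method."""
    out = us_image.astype(float).copy()
    out[target_mask] = np.clip(out[target_mask] * gain, 0, 255)
    return out.astype(np.uint8)
```

Pixels inside the mask are brightened while the rest of the frame is untouched, which matches the claim's requirement that "at least an area of the detected target object" be highlighted.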
2,600
10,198
10,198
16,258,034
2,664
A method and system including: defining a geographic area; receiving a plurality of images; determining a plurality of image points; partitioning the geographic area into a plurality of image regions based on the plurality of image points; and stitching the plurality of images into a combined image based on the plurality of image regions.
1. A method comprising: defining a geographic area; receiving a plurality of images, wherein at least a portion of each received image is within the defined geographic area; determining a plurality of image points, wherein each image point is a geographic location of a center field of view of each image of the received plurality of images; partitioning the geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point of the plurality of image points than any other image point of the plurality of image points; and stitching the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region. 2. The method of claim 1 wherein partitioning the geographic area into the plurality of image regions further comprises: generating a Voronoi diagram. 3. The method of claim 1 further comprising: capturing the plurality of images by an aerial vehicle. 4. The method of claim 3 wherein the aerial vehicle is a vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV). 5. The method of claim 1 further comprising: filtering one or more images of the received plurality of images. 6. The method of claim 5 wherein filtering the one or more images further comprises: removing the one or more images due to at least one of: overexposure, underexposure, distortion, blur, and an error with a camera taking the image. 7. The method of claim 1 further comprising: applying one or more image enhancements to one or more images of the received plurality of images. 8. The method of claim 7 wherein applying image enhancements to the one or more images comprises at least one of: brightening, darkening, color correcting, white balancing, sharpening, correcting lens distortion, and adjusting contrast. 9. 
A system comprising: an unmanned aerial vehicle (UAV) comprising: a processor having addressable memory; a sensor in communication with the processor, the sensor configured to capture a plurality of images; and wherein the processor is configured to: receive a geographic area; receive a plurality of images from the sensor; determine a plurality of image points, wherein each image point is a geographic location of a center field of view of each image; partition the geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point than any other image point; and stitch the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region. 10. The system of claim 9 wherein the UAV further comprises: a global positioning system (GPS) in communication with the processor, wherein the processor uses the GPS to determine the geographic location of each image point of the plurality of image points. 11. The system of claim 9 wherein the processor is further configured to generate a Voronoi diagram to partition the geographic area into the plurality of image regions. 12. The system of claim 9 wherein the UAV is a vertical takeoff and landing (VTOL) UAV. 13. The system of claim 9 wherein the processor is further configured to: filter one or more images of the received plurality of images, wherein filtering the one or more images further comprises: removing the one or more images due to at least one of: overexposure, underexposure, distortion, blur, and an error with a camera taking the image. 14. 
The system of claim 9 wherein the processor is further configured to: apply one or more image enhancements to one or more images of the received plurality of images, wherein applying image enhancements to the one or more images comprises at least one of: brightening, darkening, color correcting, white balancing, sharpening, correcting lens distortion, and adjusting contrast. 15. The system of claim 9 further comprising: a controller comprising: a processor having addressable memory, wherein the processor is configured to: define the geographic area; send the geographic area to the UAV; and receive the combined image from the UAV. 16. The system of claim 15 further comprising: a computing device comprising a processor and addressable memory, wherein the processor is configured to: receive the combined image from at least one of: the UAV and the controller; and analyze the combined image. 17. The system of claim 16 wherein analyzing the combined image comprises comparing the combined image to a historical combined image. 18. The system of claim 16 wherein the processor of the computing device is further configured to smooth the combined image to account for at least one of: brightness, color, dead pixels, and lens distortion. 19. The system of claim 16 wherein the processor of the computing device is further configured to: receive the plurality of images; and stitch the plurality of images into a high resolution combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region. 20. 
A method comprising: receiving a plurality of images; determining a plurality of image points, wherein each image point is a geographic location of a center field of view of each image of the received plurality of images; partitioning a geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point of the plurality of image points than any other image point of the plurality of image points; stitching the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region; and expanding each image region in the combined image by a set amount such that a respective border of each image region overlaps each neighboring image region.
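The partition described in claims 1, 9, and 20 above, where each output pixel is taken from the image whose center-of-field point is closest (i.e., a Voronoi diagram over the image points, per claims 2 and 11), can be sketched as follows. The sketch assumes all input images have already been warped onto a common pixel grid of the geographic area, a step the claims leave to the implementation; all names are hypothetical.

```python
import numpy as np

def stitch_nearest(images, points, height, width):
    """Stitch same-size, georegistered images by taking each output
    pixel from the image whose center-of-field point is nearest,
    which yields a Voronoi partition of the area. Illustrative only:
    assumes the images are pre-warped onto one (height, width) grid.
    `points` are (row, col) pixel coordinates of the image points."""
    ys, xs = np.mgrid[0:height, 0:width]
    pts = np.asarray(points, dtype=float)  # shape (N, 2) as (y, x)
    # Squared distance from every pixel to every image point.
    d2 = (ys[..., None] - pts[:, 0]) ** 2 + (xs[..., None] - pts[:, 1]) ** 2
    owner = d2.argmin(axis=-1)             # Voronoi region index per pixel
    stack = np.stack(images)               # shape (N, height, width)
    # Select each pixel from its owning image region.
    return np.take_along_axis(stack, owner[None], axis=0)[0]
```

This brute-force distance computation is O(pixels x images); for large mosaics one would build the Voronoi diagram explicitly (e.g., with `scipy.spatial.Voronoi`) instead. The region-expansion step of claim 20 would then dilate each region by a set amount to create the overlapping borders.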
A method and system including: defining a geographic area; receiving a plurality of images; determining a plurality of image points; partitioning the geographic area into a plurality of image regions based on the plurality of image points; and stitching the plurality of images into a combined image based on the plurality of image regions.1. A method comprising: defining a geographic area; receiving a plurality of images, wherein at least a portion of each received image is within the defined geographic area; determining a plurality of image points, wherein each image point is a geographic location of a center field of view of each image of the received plurality of images; partitioning the geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point of the plurality of image points than any other image point of the plurality of image points; and stitching the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region. 2. The method of claim 1 wherein partitioning the geographic area into the plurality of image regions further comprises: generating a Voronoi diagram. 3. The method of claim 1 further comprising: capturing the plurality of images by an aerial vehicle. 4. The method of claim 3 wherein the aerial vehicle is a vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV). 5. The method of claim 1 further comprising: filtering one or more images of the received plurality of images. 6. The method of claim 5 wherein filtering the one or more images further comprises: removing the one or more images due to at least one of: overexposure, underexposure, distortion, blur, and an error with a camera taking the image. 7. 
The method of claim 1 further comprising: applying one or more image enhancements to one or more images of the received plurality of images. 8. The method of claim 7 wherein applying image enhancements to the one or more images comprises at least one of: brightening, darkening, color correcting, white balancing, sharpening, correcting lens distortion, and adjusting contrast. 9. A system comprising: an unmanned aerial vehicle (UAV) comprising: a processor having addressable memory; a sensor in communication with the processor, the sensor configured to capture a plurality of images; and wherein the processor is configured to: receive a geographic area; receive a plurality of images from the sensor; determine a plurality of image points, wherein each image point is a geographic location of a center field of view of each image; partition the geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point than any other image point; and stitch the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region. 10. The system of claim 9 wherein the UAV further comprises: a global positioning system (GPS) in communication with the processor, wherein the processor uses the GPS to determine the geographic location of each image point of the plurality of image points. 11. The system of claim 9 wherein the processor is further configured to generate a Voronoi diagram to partition the geographic area into the plurality of image regions. 12. The system of claim 9 wherein the UAV is a vertical takeoff and landing (VTOL) UAV. 13. 
The system of claim 9 wherein the processor is further configured to: filter one or more images of the received plurality of images, wherein filtering the one or more images further comprises: removing the one or more images due to at least one of: overexposure, underexposure, distortion, blur, and an error with a camera taking the image. 14. The system of claim 9 wherein the processor is further configured to: apply one or more image enhancements to one or more images of the received plurality of images, wherein applying image enhancements to the one or more images comprises at least one of: brightening, darkening, color correcting, white balancing, sharpening, correcting lens distortion, and adjusting contrast. 15. The system of claim 9 further comprising: a controller comprising: a processor having addressable memory, wherein the processor is configured to: define the geographic area; send the geographic area to the UAV; and receive the combined image from the UAV. 16. The system of claim 15 further comprising: a computing device comprising a processor and addressable memory, wherein the processor is configured to: receive the combined image from at least one of: the UAV and the controller; and analyze the combined image. 17. The system of claim 16 wherein analyzing the combined image comprises comparing the combined image to a historical combined image. 18. The system of claim 16 wherein the processor of the computing device is further configured to smooth the combined image to account for at least one of: brightness, color, dead pixels, and lens distortion. 19. The system of claim 16 wherein the processor of the computing device is further configured to: receive the plurality of images; and stitch the plurality of images into a high resolution combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region. 20. 
A method comprising: receiving a plurality of images; determining a plurality of image points, wherein each image point is a geographic location of a center field of view of each image of the received plurality of images; partitioning the geographic area into a plurality of image regions based on the plurality of image points, wherein each pixel in each image region is closer to a closest image point of the plurality of image points than any other image point of the plurality of image points; stitching the plurality of images into a combined image based on the plurality of image regions, wherein each pixel in the combined image is selected from its corresponding image region; and expanding each image region in the combined image by a set amount such that a respective border of each image region overlaps each neighboring image region.
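The partitioning step in these claims is a nearest-neighbor (Voronoi) assignment: every output pixel takes its value from the image whose center-of-view point is closest. A minimal sketch of that idea in plain Python, under illustrative assumptions (the function name, the integer pixel grid, and the dict-based image representation are all hypothetical; a real implementation would work on georeferenced rasters, e.g. via a k-d tree):

```python
import math

def voronoi_stitch(width, height, image_points, images):
    """Stitch images by Voronoi assignment: each output pixel is taken
    from the image whose center-of-view point is nearest to that pixel.

    image_points: list of (x, y) center-of-view locations, one per image.
    images: list of dicts mapping (x, y) -> pixel value, one per image.
    """
    combined = {}
    for y in range(height):
        for x in range(width):
            # Index of the image point closest to this pixel: this is
            # exactly the Voronoi-region membership test in claim 1.
            nearest = min(range(len(image_points)),
                          key=lambda i: math.dist((x, y), image_points[i]))
            combined[(x, y)] = images[nearest].get((x, y))
    return combined
```

In practice the per-pixel linear scan would be replaced by an explicit Voronoi diagram or a nearest-neighbor index (e.g. `scipy.spatial.Voronoi` or `cKDTree`), which is what claim 2's "generating a Voronoi diagram" amounts to computationally.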
2,600
10,199
10,199
15,089,970
2,658
A system and method capable of distinguishing sources in a multiple source environment are provided. The system receives an audio signal comprising an audio tag, a desired audio signal and an undesired audio signal. Based on the audio tag, the system eliminates the undesired audio signal and identifies an intended command in the desired audio signal. The system generates a command for an external device based on the intended command.
1. An audio system, comprising: a memory device comprising a speech recognition algorithm and audio tags; a processor coupled to the memory device, a source of undesired audio signal and a source of desired audio signal, the processor configured to (i) receive an audio signal comprising (a) an undesired audio signal from the source of undesired audio, (b) a desired audio signal from the source of desired audio signal, the source of the undesired audio signal and the source of the desired audio signal being separate, and (c) an audio tag associated with one of the undesired audio signal or the desired audio signal, and (ii) based on the audio tag, eliminate the undesired audio signal from the audio signal prior to using the speech recognition algorithm to identify an intended command from the audio signal with the undesired audio signal eliminated. 2. The audio system of claim 1, wherein the audio tag is an externally created audio tag (ECT). 3. The audio system of claim 2, wherein the processor is further configured to (i) command a source of ECT to add the ECT to the source of the undesired audio signal, and (ii) create an association between the ECT and the source of undesired audio signal and store the association in the memory device. 4. (canceled) 5. The audio system of claim 1, further comprising a source of the desired audio signal coupled to the processor, and wherein the audio tag is a naturally occurring tag (NT), and the processor is further configured to process the audio signal from the source of the desired audio signal to identify the naturally occurring audio tag (NT) unique to the source of desired audio signal, create an association between the NT and the source of the desired audio signal, and store the association in the memory device. 6. 
The audio system of claim 5, wherein the processor is further configured to process the audio signal to (i) identify a naturally occurring audio tag therein, and (ii) associate the naturally occurring audio tag with the source of desired audio signal. 7. The audio system of claim 6, further comprising a sensor coupled to the processor and the source of desired audio signal. 8. (canceled) 9. The audio system of claim 7, wherein the processor is further configured to: audio process the desired audio signal to identify therein an intended device associated with the intended command; and generate a command that is responsive to the intended command and intended device. 10. The audio system of claim 3, wherein the ECT comprises at least one signal characteristic from the set of signal characteristics including: analog, digital, continuous, pulsed, patterned, audible, sub audible and super audible. 11. An audio processing method, the method comprising: detecting, by an audio capture device, audio transmissions comprising (i) an undesired audio signal from a source of undesired audio, and (ii) a desired audio signal from a source of desired audio signal, the source of the undesired audio signal and the source of the desired audio signal being separate; converting, by the audio capture device, the audio transmissions into a collective audio signal comprising an audio tag, the desired audio signal and the undesired audio signal; receiving, at a processor, the audio signal comprising the audio tag, desired audio signal and undesired audio signal; processing the audio signal, using speech recognition algorithms stored in a memory device; eliminating the undesired audio signal from the audio signal based on the audio tag; and generating a command for an external device using the audio signal with the undesired audio signal eliminated. 12. 
The audio processing method of claim 11, further comprising determining, by the processor, that it has access to the source of the undesired audio signal. 13. The audio processing method of claim 12, wherein the audio tag is an externally created audio tag (ECT), and further comprising, responsive to the processor determining that it has access to the source of the undesired audio signal: commanding a source of ECT to add the ECT to the undesired audio signal, and creating an association between the ECT and the source of undesired audio signal and storing the association in the memory device. 14. The audio processing method of claim 11, further comprising determining, by the processor, that it has access to the source of the desired audio signal. 15. The audio processing method of claim 14, wherein the audio tag is a naturally occurring tag (NT), and further comprising, responsive to the processor determining that it has access to the source of the desired audio signal: processing the desired audio signal to identify the naturally occurring tag (NT) therein, and creating an association between the NT and the source of the desired audio signal and storing the association in the memory device. 16. The audio processing method of claim 11, wherein, upon determining that it does not have access to any sources of the audio signal, the processor is further configured to process the audio signal to (i) identify a naturally occurring audio tag in the undesired audio signal and associate the naturally occurring audio tag in the undesired audio signal with a source of undesired audio signal, or (ii) identify a naturally occurring audio tag in the desired audio signal and associate the naturally occurring audio tag in the desired audio signal with a source of desired audio signal. 17. 
An audio system, comprising: a memory device comprising speech recognition algorithms; a processor coupled to the memory device, the processor configured to (i) receive an audio signal comprising (a) an undesired audio signal from a source of undesired audio, (b) a desired audio signal from a source of desired audio signal, the source of the undesired audio signal and the source of the desired audio signal being separate, and (c) an audio tag, (ii) identify the audio tag in the audio signal, and (iii) eliminate the undesired audio signal from the audio signal prior to processing the audio signal and identifying a command therein. 18. The audio system of claim 17, wherein the processor is further configured to: command a source of externally created audio tag (ECT) to add an ECT to the source of the undesired audio signal, and store an association between the ECT and the source of the undesired audio signal in the memory device. 19. The audio system of claim 18, wherein the processor is further configured to: process the desired audio signal to identify a naturally occurring tag therein, and store an association between the naturally occurring tag and the source of the desired audio signal in the memory device. 20. The audio system of claim 19, further comprising: a user interface for providing an enable signal; and wherein the processor is further configured to generate a command for an external device based on the intended command when the enable signal is asserted. 21. The audio system of claim 1, wherein the processor is further configured to distinguish among the source of undesired audio signal and the source of desired audio signal when audio transmissions from the source of undesired audio signal and audio transmissions from the source of desired audio signal are transmitted in one from the set of concurrent, overlapping, and sequential order. 22. 
The audio processing method of claim 11, further comprising distinguishing among the source of undesired audio signal and the source of desired audio signal when audio transmissions from the source of undesired audio signal and audio transmissions from the source of desired audio signal are transmitted in one from the set of concurrent, overlapping, and sequential order.
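The elimination step in claim 11 can be pictured as frame-wise detection of the audio tag followed by suppression of tagged frames before speech recognition. A hedged sketch, assuming the externally created tag (ECT) is a fixed super-audible pilot tone; the 19 kHz frequency, 10 ms frame size, and energy threshold below are illustrative assumptions, not values from the patent:

```python
import math

TAG_FREQ = 19000.0   # hypothetical super-audible pilot tone (Hz)
SAMPLE_RATE = 48000  # assumed sample rate
FRAME = 480          # 10 ms frames at 48 kHz

def tag_energy(frame, freq=TAG_FREQ, rate=SAMPLE_RATE):
    """Energy of the tag tone in a frame via a single-bin DFT."""
    re = sum(s * math.cos(2 * math.pi * freq * n / rate)
             for n, s in enumerate(frame))
    im = sum(s * math.sin(2 * math.pi * freq * n / rate)
             for n, s in enumerate(frame))
    return (re * re + im * im) / len(frame)

def eliminate_tagged(samples, threshold=0.01):
    """Zero every frame in which the tag tone is detected, leaving
    untagged (desired) audio for downstream speech recognition."""
    out = list(samples)
    for start in range(0, len(out) - FRAME + 1, FRAME):
        if tag_energy(out[start:start + FRAME]) > threshold:
            out[start:start + FRAME] = [0.0] * FRAME
    return out
```

This only illustrates the ECT case; the naturally occurring tag (NT) variant in claims 5 and 15 would instead match a learned signature of the desired source rather than an injected tone.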
A system and method capable of distinguishing sources in a multiple source environment are provided. The system receives an audio signal comprising an audio tag, a desired audio signal and an undesired audio signal. Based on the audio tag, the system eliminates the undesired audio signal and identifies an intended command in the desired audio signal. The system generates a command for an external device based on the intended command. 1. An audio system, comprising: a memory device comprising a speech recognition algorithm and audio tags; a processor coupled to the memory device, a source of undesired audio signal and a source of desired audio signal, the processor configured to (i) receive an audio signal comprising (a) an undesired audio signal from the source of undesired audio, (b) a desired audio signal from the source of desired audio signal, the source of the undesired audio signal and the source of the desired audio signal being separate, and (c) an audio tag associated with one of the undesired audio signal or the desired audio signal, and (ii) based on the audio tag, eliminate the undesired audio signal from the audio signal prior to using the speech recognition algorithm to identify an intended command from the audio signal with the undesired audio signal eliminated. 2. The audio system of claim 1, wherein the audio tag is an externally created audio tag (ECT). 3. The audio system of claim 2, wherein the processor is further configured to (i) command a source of ECT to add the ECT to the source of the undesired audio signal, and (ii) create an association between the ECT and the source of undesired audio signal and store the association in the memory device. 4. (canceled) 5. 
The audio system of claim 1, further comprising a source of the desired audio signal coupled to the processor, and wherein the audio tag is a naturally occurring tag (NT), and the processor is further configured to process the audio signal from the source of the desired audio signal to identify the naturally occurring audio tag (NT) unique to the source of desired audio signal, create an association between the NT and the source of the desired audio signal, and store the association in the memory device. 6. The audio system of claim 5, wherein the processor is further configured to process the audio signal to (i) identify a naturally occurring audio tag therein, and (ii) associate the naturally occurring audio tag with the source of desired audio signal. 7. The audio system of claim 6, further comprising a sensor coupled to the processor and the source of desired audio signal. 8. (canceled) 9. The audio system of claim 7, wherein the processor is further configured to: audio process the desired audio signal to identify therein an intended device associated with the intended command; and generate a command that is responsive to the intended command and intended device. 10. The audio system of claim 3, wherein the ECT comprises at least one signal characteristic from the set of signal characteristics including: analog, digital, continuous, pulsed, patterned, audible, sub audible and super audible. 11. 
An audio processing method, the method comprising: detecting, by an audio capture device, audio transmissions comprising (i) an undesired audio signal from a source of undesired audio, and (ii) a desired audio signal from a source of desired audio signal, the source of the undesired audio signal and the source of the desired audio signal being separate; converting, by the audio capture device, the audio transmissions into a collective audio signal comprising an audio tag, the desired audio signal and the undesired audio signal; receiving, at a processor, the audio signal comprising the audio tag, desired audio signal and undesired audio signal; processing the audio signal, using speech recognition algorithms stored in a memory device; eliminating the undesired audio signal from the audio signal based on the audio tag; and generating a command for an external device using the audio signal with the undesired audio signal eliminated. 12. The audio processing method of claim 11, further comprising determining, by the processor, that it has access to the source of the undesired audio signal. 13. The audio processing method of claim 12, wherein the audio tag is an externally created audio tag (ECT), and further comprising, responsive to the processor determining that it has access to the source of the undesired audio signal: commanding a source of ECT to add the ECT to the undesired audio signal, and creating an association between the ECT and the source of undesired audio signal and storing the association in the memory device. 14. The audio processing method of claim 11, further comprising determining, by the processor, that it has access to the source of the desired audio signal. 15. 
The audio processing method of claim 14, wherein the audio tag is a naturally occurring tag (NT), and further comprising, responsive to the processor determining that it has access to the source of the desired audio signal: processing the desired audio signal to identify the naturally occurring tag (NT) therein, and creating an association between the NT and the source of the desired audio signal and storing the association in the memory device. 16. The audio processing method of claim 11, wherein, upon determining that it does not have access to any sources of the audio signal, the processor is further configured to process the audio signal to (i) identify a naturally occurring audio tag in the undesired audio signal and associate the naturally occurring audio tag in the undesired audio signal with a source of undesired audio signal, or (ii) identify a naturally occurring audio tag in the desired audio signal and associate the naturally occurring audio tag in the desired audio signal with a source of desired audio signal. 17. An audio system, comprising: a memory device comprising speech recognition algorithms; a processor coupled to the memory device, the processor configured to (i) receive an audio signal comprising (a) an undesired audio signal from a source of undesired audio, (b) a desired audio signal from a source of desired audio signal, the source of the undesired audio signal and the source of the desired audio signal being separate, and (c) an audio tag, (ii) identify the audio tag in the audio signal, and (iii) eliminate the undesired audio signal from the audio signal prior to processing the audio signal and identifying a command therein. 18. The audio system of claim 17, wherein the processor is further configured to: command a source of externally created audio tag (ECT) to add an ECT to the source of the undesired audio signal, and store an association between the ECT and the source of the undesired audio signal in the memory device. 19. 
The audio system of claim 18, wherein the processor is further configured to: process the desired audio signal to identify a naturally occurring tag therein, and store an association between the naturally occurring tag and the source of the desired audio signal in the memory device. 20. The audio system of claim 19, further comprising: a user interface for providing an enable signal; and wherein the processor is further configured to generate a command for an external device based on the intended command when the enable signal is asserted. 21. The audio system of claim 1, wherein the processor is further configured to distinguish among the source of undesired audio signal and the source of desired audio signal when audio transmissions from the source of undesired audio signal and audio transmissions from the source of desired audio signal are transmitted in one from the set of concurrent, overlapping, and sequential order. 22. The audio processing method of claim 11, further comprising distinguishing among the source of undesired audio signal and the source of desired audio signal when audio transmissions from the source of undesired audio signal and audio transmissions from the source of desired audio signal are transmitted in one from the set of concurrent, overlapping, and sequential order.
2,600