Dataset schema:

Column              Type     Min    Max
Unnamed: 0          int64    0      350k
level_0             int64    0      351k
ApplicationNumber   int64    9.75M  96.1M
ArtUnit             int64    1.6k   3.99k
Abstract            string   1      8.37k
Claims              string   3      292k
abstract-claims     string   68     293k
TechCenter          int64    1.6k   3.9k

(For int64 columns, Min/Max are values; for string columns, Min/Max are character lengths.)
Unnamed: 0 = 10,400, level_0 = 10,400
ApplicationNumber = 15,519,681
ArtUnit = 2,687

Abstract:
A safety garment includes at least one sensor unit for detecting at least one user-specific characteristic value and/or at least one environment-specific characteristic value. The safety garment further includes at least one communication unit configured to transmit the detected user-specific characteristic value and/or the detected environment-specific characteristic value to at least one other safety garment in order to communicate with the other safety garment.

Claims:
1. A safety garment comprising: at least one sensor unit configured to sense at least one operator-specific characteristic quantity and/or at least one environment-specific characteristic quantity; and at least one communication unit configured to transmit the sensed operator-specific characteristic quantity and/or the sensed environment-specific characteristic quantity to at least one further safety garment to communicate with the further safety garment. 2. The safety garment as claimed in claim 1, wherein: the communication unit is configured to exchange electronic data with at least one external unit, and the external unit is different from a safety garment. 3. The safety garment as claimed in claim 1, further comprising: at least one output unit configured to output items of information in dependence on the sensed operator-specific characteristic quantity, the sensed environment-specific characteristic quantity and/or in dependence on data received by the communication unit. 4. The safety garment as claimed in claim 3, wherein the output unit is configured to output at least work instructions that are dependent, at least, on the data received by the communication unit. 5. The safety garment as claimed in claim 1, further comprising: at least one evaluation unit configured to evaluate a safety garment combination, at least in dependence on the communication with the further safety garment. 6. The safety garment as claimed in claim 1, further comprising: at least one evaluation unit configured to evaluate a sensed environment, at least in dependence on an environment-sensing sensor element of the sensor unit, in order to make possible a work instruction. 7. The safety garment as claimed in claim 1, further comprising: at least one input unit configured to input operator-specific control commands, wherein the communication unit transmits the operator-specific control commands to an external unit to control the external unit. 8. The safety garment as claimed in claim 7, wherein the input unit has at least one gesture control function and/or voice control function. 9. The safety garment as claimed in claim 1, further comprising: at least one actuator unit configured for control at least in dependence on the sensed operator-specific characteristic quantity, in dependence on the sensed environment-specific characteristic quantity, and/or in dependence on data received by the communication unit. 10. The safety garment as claimed in claim 9, wherein the actuator unit is configured to act actively on a wearer as a result of identification of a hazard situation. 11. The safety garment as claimed in claim 1, further comprising: at least one lighting unit configured to illuminate a work area, wherein the lighting unit is controllable at least in dependence on the sensed operator-specific characteristic quantity, on the sensed environment-specific characteristic quantity, and/or on the data received by the communication unit. 12. The safety garment as claimed in claim 1, further comprising: at least one projection unit configured to project at least one item of information onto a base, wherein the projection unit is configured for control at least in dependence on the sensed operator-specific characteristic quantity, on the sensed environment-specific characteristic quantity, and/or on data received by the communication unit. 13. 
The safety garment as claimed in claim 1, wherein: the sensor unit has at least one eye sensor element configured to sense at least one eye characteristic quantity of a wearer, and the eye characteristic quantity is transmitted to the further safety garment and/or to an external unit by the communication unit. 14. The safety garment as claimed in claim 1, further comprising: at least one energy supply unit configured to supply at least the sensor unit and the communication unit with energy in dependence on a handling of an external unit. 15. A safety system, comprising: at least one safety garment including (i) at least one sensor unit configured to sense at least one operator-specific characteristic quantity and/or at least one environment-specific characteristic quantity, and (ii) at least one communication unit configured to transmit the sensed operator-specific characteristic quantity and/or the sensed environment-specific characteristic quantity to at least one further safety garment to communicate with the further safety garment; and at least one further safety garment or at least one external unit, wherein the at least one safety garment is configured to exchange electronic data with the further safety garment or the at least one external unit with the communication unit. 16. The safety system as claimed in claim 15, wherein: the sensor unit of the safety garment has at least one position sensor element, and the external unit has at least one hazard identification function configured at least to evaluate a hazard situation of a wearer of the safety garment, at least in dependence on a position characteristic quantity sensed by the position sensor element. 17. The safety system as claimed in claim 15, wherein the at least one external unit has at least one access control function configured to monitor, enable, or block access of a wearer of the safety garment and/or of the further safety garment to a work area or a space. 18. The safety system as claimed in claim 15, wherein the at least one external unit has at least one garment monitoring function configured to monitor a safety garment combination that is contingent upon a work region and/or a work assignment, in dependence on safety regulations and/or operating-area regulations. 19. The safety system as claimed in claim 15, wherein the at least one external unit is a hand-held power tool or a company control center. 20. The safety system as claimed in claim 15 further comprising: an eye protection device having (i) at least one further communication unit configured to exchange electronic data, the further communication unit configured to communicate with at least one hand-held power tool, and (ii) at least one output unit configured to output items of information in dependence on data received by the further communication unit, wherein the further communication unit is configured to communicate with a further external unit that is different from a hand-held power tool. 21. The safety system as claimed in claim 20, wherein the further communication unit is configured to communicate with the further external unit realized as a mobile device, in order to exchange data with the mobile device. 22. The safety system as claimed in claim 20, wherein the further communication unit is configured to communicate with the further external unit realized as a charging device, in order to receive at least one charge characteristic quantity from at least one storage battery that is chargeable by the charging device. 23. 
The safety system as claimed in claim 20, wherein the further communication unit is configured to communicate with the further external unit realized as a company control center, in order to receive at least project data. 24. (canceled)
abstract-claims = [Abstract and Claims above, concatenated verbatim; duplicate text omitted]
TechCenter = 2,600

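In each row shown, the abstract-claims field is exactly the Abstract text followed immediately by the Claims text, with no separator. A short sketch of how that column could be rebuilt and verified, reusing the hypothetical df from the sketch above:

```python
# Rebuild abstract-claims as Abstract + Claims (no separator, as in the rows shown)
# and check that it matches the stored column everywhere.
rebuilt = df["Abstract"] + df["Claims"]
print((rebuilt == df["abstract-claims"]).all())
```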
Unnamed: 0 = 10,401, level_0 = 10,401
ApplicationNumber = 15,509,799
ArtUnit = 2,696

Abstract:
A projection capture system includes a camera to capture video of objects in a capture space, and a light emitting diode (LED) projector to illuminate the objects in the capture space and to project images captured by the camera into a display space. The projector includes a sequential display mode for sequentially displaying red, green, and blue light to project images captured by the camera into the display space, and a camera flash mode for simultaneously displaying red, green, and blue light to provide white light for illuminating the objects in the capture space during video capture.

Claims:
1. A projection capture system, comprising; a camera to capture video of objects in a capture space; and a light emitting diode (LED) projector to illuminate the objects in the capture space and to project images captured by the camera into a display space, wherein the projector includes a sequential display mode for sequentially displaying red, green, and blue LED light to project images captured by the camera into the display space, and a camera flash mode for simultaneously displaying red, green, and blue LED light to provide white light for illuminating the objects in the capture space during video capture. 2. The system of claim 1, wherein the projector switches between the sequential display mode and the camera flash mode based on a functional call sent to the projector. 3. The system of claim 2, wherein the projector automatically exits the camera flash mode and returns to the sequential display mode after a predetermined period of time. 4. The system of claim 2, wherein the projector exits the camera flash mode and returns to the sequential display mode in response to receiving a functional call. 5. The system of claim 1, wherein currents to red, green, and blue LEDs of the projector are individually controlled to set a value of the brightness of each LED during the camera flash mode. 6. The system of claim 1, wherein currents to red, green, and blue LEDs of the projector are individually controlled to set a value of the brightness of each LED during the camera flash mode to achieve a true white point. 7. The system of claim 1, wherein the display space overlaps the capture space. 8. The system of claim 1, wherein the projector is housed together with the camera. 9. The system of claim 1, wherein the camera is positioned above the projector and wherein the system further comprises a mirror positioned above the projector to reflect light from the projector down onto the display space. 10. A method for capturing and projecting images, comprising: illuminating objects in a capture space with a light emitting diode (LED) projector operating in a first mode for simultaneously displaying red, green, and blue LED light to provide white light for illuminating the objects in the capture space; capturing video of the objects in the capture space while the projector is in the first mode; and causing the projector to switch to a second mode to sequentially display red, green, and blue LED light to project the captured video into a display space. 11. The method of claim 10, and further comprising: individually controlling currents to red, green, and blue LEDs of the projector to set a value of the brightness of each LED to achieve a true white point during the first mode. 12. The method of claim 10, wherein the projector is housed together with a camera that captures the images of the objects. 13. The method of claim 12, wherein the camera is positioned above the projector and wherein the method further comprises: reflecting, with a mirror positioned above the projector, light from the projector down onto the display space. 14. 
A computer-readable storage media storing computer-executable instructions that when executed by at least one processor cause the at least one processor to perform a method, comprising: causing a light emitting diode (LED) projector to enter a camera flash mode to illuminate objects in a capture space by simultaneously displaying red, green, and blue LED light to provide white light; causing a camera to capture video of the objects in the capture space while the projector is in the camera flash mode; and causing the projector to switch to a sequential display mode to sequentially display red, green, and blue LED light to project the captured video into a display space. 15. The computer-readable storage media of claim 14, wherein the display space overlaps the capture space.
abstract-claims = [Abstract and Claims above, concatenated verbatim; duplicate text omitted]
TechCenter = 2,600

Unnamed: 0 = 10,402, level_0 = 10,402
ApplicationNumber = 15,268,228
ArtUnit = 2,631

Abstract:
Systems and techniques are provided in which one or more environmental sensors collect data about an environment. An environmental hazard assessment module collects and analyzes data obtained by the environmental sensors to automatically identify, categorize, and/or rate the severity of potential environmental hazards. The hazards are provided to a user for a particular region of the environment or for a larger environment that includes multiple regions.

Claims:
1. A system comprising: a plurality of environmental sensors, each environmental sensor configured to measure a physical attribute of an environment in which the system is disposed; a processing system configured to receive environmental data from the plurality of environmental sensors and, based upon the environmental data, identify at least one potential fall hazard in the environment and a degree of risk associated with the fall hazard; a reporting component configured to provide an indication of the identified fall hazard and the degree of risk associated with the fall hazard. 2. The system of claim 1, wherein the system comprises a portable computing device, the portable computing device comprising the plurality of environmental sensors, the processing system, and the reporting component. 3. The system of claim 2, wherein the reporting component comprises a display screen. 4. The system of claim 2, wherein the portable computing device is selected from the group consisting of: a tablet, a smart phone, and a portable general-purpose computer. 5. The system of claim 1, wherein the plurality of environmental sensors comprises an RGB camera configured to capture image data of the environment. 6. The system of claim 5, wherein the processing system is configured to identify a floor transition, an item of clutter, or both based upon the image data. 7. The system of claim 1, wherein the plurality of environmental sensors comprises a topology sensor configured to collect topological data about the environment, and wherein the processing system is configured to identify a floor transition as a potential fall hazard based upon the topological data. 8. The system of claim 1, wherein the plurality of environmental sensors comprises an ambient light sensor, and wherein the processing system is configured to identify a low light condition as a potential fall hazard based upon ambient light data collected by the ambient light sensor. 9. The system of claim 1, wherein the processing system is configured to implement a neural network to identify the potential fall hazard based upon the environmental data, wherein the neural network is trained based upon historic environmental data. 10. The system of claim 1, comprising: a portable computing device, wherein the portable computing device comprises the plurality of environmental sensors; and a network communication interface, configured to provide the environmental data to the processing system. 11. A method comprising: receiving, from each of a plurality of environmental sensors, environmental data describing an environment in which the plurality of sensors is disposed; based upon the environmental data, automatically identifying at least one potential fall hazard in the environment; based upon the environmental data, automatically determining a degree of risk associated with the fall hazard; and automatically generating a report indicating the presence of the fall hazard and the degree of risk associated with the fall hazard. 12. The method of claim 11, wherein the plurality of environmental sensors are disposed within a portable computing device. 13. The method of claim 12, further comprising automatically displaying the report on a display screen of the portable computing device. 14. The method of claim 12, wherein the portable computing device is selected from the group consisting of: a tablet, a smart phone, and a portable general-purpose computer. 15. 
The method of claim 11, wherein the plurality of environmental sensors comprises an RGB camera configured to capture image data of the environment. 16. The method of claim 15, wherein the step of identifying at least one potential fall hazard comprises identifying a floor transition, an item of clutter, or both based upon the image data. 17. The method of claim 11, wherein the plurality of environmental sensors comprises a topology sensor configured to collect topological data about the environment, and wherein the step of identifying at least one potential fall hazard comprises identifying a floor transition as the potential fall hazard based upon the topological data. 18. The method of claim 11, wherein the plurality of environmental sensors comprises an ambient light sensor, and wherein the step of identifying at least one potential fall hazard comprises identifying a low light condition as the potential fall hazard based upon ambient light data collected by the ambient light sensor. 19. The method of claim 11, wherein the step of automatically identifying the at least one potential fall hazard, the step of automatically determining the degree of risk, or both, is performed by an artificial neural network. 20. The method of claim 11, further comprising providing the environmental data to a remote computing platform, wherein the remote computing platform generates the report.
abstract-claims = [Abstract and Claims above, concatenated verbatim; duplicate text omitted]
TechCenter = 2,600

Unnamed: 0 = 10,403, level_0 = 10,403
ApplicationNumber = 14,856,493
ArtUnit = 2,616

Abstract:
In one embodiment, a method includes presenting to a user, on a display of a head-worn client computing device, a three-dimensional video including images of a real-life scene that is remote from the user's physical environment. The method also includes presenting to the user, on the display of the head-worn client computing device, a graphical object including an image of the user's physical environment or a virtual graphical object.

Claims:
1. A method comprising: presenting to a user, on a display of a head-worn client computing device, a three-dimensional video comprising images of a real-life scene that is remote from the user's physical environment; and presenting to the user, on the display of the head-worn client computing device, a graphical object comprising: an image of the user's physical environment; or a virtual graphical object. 2. The method of claim 1, wherein: the head-worn client computing device comprises an image sensor; and presenting to the user an image of the user's physical environment comprises presenting to the user an image captured by the image sensor. 3. The method of claim 1, wherein presenting to the user an image of the user's physical environment comprises: receiving, at the head-worn client computing device, an indication that an event occurred in the user's physical environment; determining, based on the event, to present on the display an image of at least a portion of the environment in which the event occurred; and presenting the image on the display. 4. The method of claim 3, wherein the event comprises a sound. 5. The method of claim 4, wherein the sound comprises one or more audible words. 6. The method of claim 4, wherein the sound comprises a sonic amplitude that is greater than a threshold sonic amplitude. 7. The method of claim 3, wherein: the head-worn client computing device comprises an image sensor; and the event comprises an aspect of the physical environment sensed by the image sensor. 8. The method of claim 7, wherein the aspect comprises a distance between the user and an object sensed by the image sensor. 9. The method of claim 7, wherein the aspect comprises a speed of an object sensed by the image sensor. 10. The method of claim 7, wherein the aspect comprises a particular gesture performed by the user or by another person. 11. The method of claim 1, wherein the virtual graphical object comprises a notification. 12. The method of claim 11, wherein the notification comprises a message from another user or from an application. 13. The method of claim 1, wherein the virtual graphical object comprises a virtual input device. 14. The method of claim 13, wherein: the head-worn client computing device comprises an image sensor; and the virtual graphical object corresponds to an object sensed by the image sensor. 15. The method of claim 14, wherein the object sensed by the image sensor comprises a portion of a hand or arm of the user. 16. The method of claim 1, wherein the virtual graphical object comprises a virtual surface displaying a plurality of three-dimensional videos. 17. The method of claim 1, wherein the virtual graphical object comprises information corresponding to an object in the three-dimensional video. 18. The method of claim 17, wherein the information comprises an identification of the object. 19. The method of claim 17, wherein the information comprises an identification that the object has been selected. 20. The method of claim 1, wherein the virtual graphical object comprises content created by the user. 21. The method of claim 20, wherein the content comprises a writing. 22. The method of claim 20, wherein the content comprises a drawing. 23. The method of claim 1, wherein the virtual graphical object comprises an image corresponding to the real-life scene. 24. 
The method of claim 23, wherein the image corresponding to the real-life scene comprises an identification of a view of at least a portion of the real-life scene at a time different than that corresponding to the real-life scene. 25. One or more non-transitory computer-readable storage media comprising instructions that are operable when executed to: present to a user, on a display of a head-worn client computing device, a three-dimensional video comprising images of a real-life scene that is remote from the user's physical environment; and present to the user, on the display of the head-worn client computing device, a graphical object comprising: an image of the user's physical environment; or a virtual graphical object. 26. The media of claim 25, wherein: the head-worn client computing device comprises an image sensor; and presenting to the user an image of the user's physical environment comprises presenting to the user an image captured by the image sensor. 27. The media of claim 25, wherein the virtual graphical object comprises a notification. 28. An apparatus comprising: one or more non-transitory computer-readable storage media embodying instructions; and one or more processors coupled to the storage media and configured to execute the instructions to: present to a user, on a display of a head-worn client computing device, a three-dimensional video comprising images of a real-life scene that is remote from the user's physical environment; and present to the user, on the display of the head-worn client computing device, a graphical object comprising: an image of the user's physical environment; or a virtual graphical object. 29. The apparatus of claim 28, wherein: the head-worn client computing device comprises an image sensor; and presenting to the user an image of the user's physical environment comprises presenting to the user an image captured by the image sensor. 30. The apparatus of claim 28, wherein the virtual graphical object comprises a notification. 31. A wearable device comprising: a display; one or more non-transitory computer-readable storage media embodying instructions; and one or more processors coupled to the media and configured to execute the instructions to: present on the display a three-dimensional video comprising a sequence of images, at least two of the images comprising: a three-dimensional image of at least a portion of a physical environment of the wearable headset device; and a three-dimensional image of at least a portion of an environment remote from the physical environment of the wearable headset device. 32. A method comprising: by a wearable computing device, presenting on a display of the device a three-dimensional video comprising a sequence of images, at least two of the images comprising: a three-dimensional image of at least a portion of a physical environment of the wearable headset device; and a three-dimensional image of at least a portion of an environment remote from the physical environment of the wearable headset device.
abstract-claims = [Abstract and Claims above, concatenated verbatim; duplicate text omitted]
TechCenter = 2,600

Unnamed: 0 = 10,404, level_0 = 10,404
ApplicationNumber = 13,726,404
ArtUnit = 2,626

Abstract:
Described embodiments provide a method and user equipment for dynamically controlling a display mode of an external device coupled to user equipment. The method may include determining whether to detect connection to an external device and upon the detection of the connection, controlling the coupled external device to display image data produced in the user equipment on a display unit of the coupled external device in a display mode different from a display mode of the user equipment.

Claims:
1. A method of dynamically controlling a display mode of an external device coupled to user equipment, the method comprising: detecting a connection of the user equipment to an external device, wherein, upon the detection of the connection, controlling the coupled external device, and displaying image data produced in the user equipment on a display unit of the coupled external device in a display mode different from a display mode of the user equipment. 2. The method of claim 1, wherein the display mode of the coupled external device is a multiscreen display mode and the display mode of the user equipment is a single screen display mode. 3. The method of claim 1, wherein the controlling includes: creating image data in a multiscreen mode display setting associated with applications running in a multitasking mode of the user equipment and information on the coupled external device; transmitting the created image data to the coupled external device; and controlling the coupled external device to display the transmitted image data in a multiscreen display mode based on the associated multiscreen mode display setting. 4. The method of claim 1, wherein the controlling includes: determining whether a display size of the coupled external device is greater than a reference display size upon the detection of the connection; and when the display size of the coupled external device is determined as being greater than a reference display size, controlling the coupled external device to display the image data produced by the user equipment on the display unit of the coupled external device in a multiscreen display mode. 5. The method of claim 1, wherein the controlling includes: determining whether a display size of the coupled external device is greater than a reference display size upon the detection of the connection; determining a number of applications running in a multitasking mode of the user equipment when the display size of the coupled external device is determined as being greater than a reference display size; obtaining a multiscreen mode display setting of the coupled external device based on the determined number of applications running in a multitasking mode of the user equipment and information on the coupled external device; and controlling the coupled external device to display the produced image data on the display unit of the coupled external device in a multiscreen display mode based on the obtained multiscreen mode display setting. 6. 
The method of claim 1, further comprising: establishing a host-device connection between the user equipment and the coupled external device and setting a host-device connection display setting based on information on a display unit of the coupled external device upon the detection of the connection; creating image data by running an application in the user equipment based on the host-device connection display setting; transmitting the created image data to the coupled external device and controlling the coupled external device to display the transmitted image data in a single screen mode; determining whether a user input is received for activation of one of applications installed in the user equipment; upon the receipt of the user input for activation, creating image data of the currently activated application; creating multiscreen image data by combining the created image data of the currently activated application and the created image data of the running application based on an associated multiscreen mode display setting; and transmitting the created multiscreen image data to the external device, wherein the controlling includes: controlling the coupled external device and displaying the transmitted multiscreen image data in a dual screen display mode based on the associated multiscreen mode display setting. 7. The method of claim 1, further comprising: establishing a host-device connection between the user equipment and the coupled external device upon the detection of the connection; determining whether a display size of the coupled external device is greater than a reference display size upon the detection of the connection; determining a number of applications running in a multitasking mode of the user equipment when the display size of the coupled external device is greater than a reference display size; and obtaining a multiscreen mode display setting based on the determined number of applications running in a multitasking mode of the user equipment and information on the coupled external device, wherein the controlling includes: creating multiscreen image data by combining image data produced by each application running in the multitasking mode of the user equipment based on the obtained multiscreen mode display setting; transmitting the created multiscreen image data to the coupled external device; and controlling the coupled external device to display the transmitted multiscreen mode image data in a multiscreen display mode based on the obtained multiscreen mode display setting. 8. The method of claim 1, further comprising: obtaining a multiscreen mode display setting based on a number of applications running in the user equipment and information on the coupled external device, wherein the multiscreen mode display setting includes at least one of information on a number of application windows in the multiscreen display mode, a maximum number of allowed application windows, a display size of each application window, a position and a display orientation of each application window, arrangement of each application window, control menus and icons for controlling an overall graphic user interface of the multiscreen display mode and each application window, arrangement and sizes of the control menus and icons. 9. 
The method of claim 8, wherein: the obtaining includes retrieving, from a memory of the user equipment, the multiscreen mode display setting associated with the number of multitasking applications running in the user equipment and the information on the coupled external device, wherein information on the multiscreen mode display setting is stored as a lookup table in the memory of the user equipment. 10. The method of claim 8, wherein the obtaining includes: determining the multiscreen mode display setting based on the number of multitasking applications running in the user equipment and the information on the coupled external device. 11. The method of claim 1, further comprising: determining whether an user input is received for activation of a new application after controlling the coupled external device to display image data produced in the user equipment in the multiscreen display mode; upon the receipt of the user input for activation of a new application, interrupting creating image data of one of running applications associated with the multiscreen display mode; creating image data of the newly activated application and continuously creating image data of remaining applications associated with the multiscreen display mode; creating multiscreen image data by combining the created image data of the newly activated application and the continuously created image data of the remaining applications based on an associated multiscreen mode display setting; and transmitting the created multiscreen image data to the coupled external device, wherein the controlling includes: controlling the coupled external device and displaying the transmitted multiscreen image data in the associated multiscreen display mode based on the associated multiscreen mode display setting. 12. The method of claim 11, wherein the application interrupted to create image data is running in a background mode. 13. The method of claim 11, wherein the determining includes: upon the receipt of the user input for activating a new application, determining whether a number of application windows in a current multiscreen display mode reaches a maximum number of application windows allowed within the current multiscreen display mode; when the number of application windows in the current multiscreen display mode is less than the maximum number, obtaining a multiscreen mode display setting by increasing a number of associated multitasking applications by one; creating multiscreen image data based on the newly obtained multiscreen mode display setting by additionally including image data produced by the newly activated application; and transmitting the created multiscreen image data to the coupled external device and controlling the external device and displaying the transmitted multiscreen image data based on the newly obtained multiscreen mode display setting. 14. 
The method of claim 1, further comprising: determining whether a user input is received for closing one of application windows in the multiscreen display mode after controlling the coupled external device to display image data produced in the user equipment in the multiscreen display mode; upon the receipt of the user input for closing one of application windows, interrupting creating image data of the application associated with the user input; resuming one of applications running in a background mode to create image data and continuously creating image data of applications not associated with the user input; creating multiscreen image data by combining the created image data of the resumed application and the continuously created image data of the applications not associated with the user input based on an associated multiscreen mode display setting; and transmitting the created multiscreen image data to the coupled external device, wherein the controlling includes: controlling the coupled external device and displaying the transmitted multiscreen image data in the associated multiscreen display mode based on the associated multiscreen mode display setting. 15. A user equipment for dynamically controlling a display mode of an external device coupled thereto, the user equipment configured to: detect connection to an external device; and upon the detection of the connection, control the coupled external device to display image data produced in the user equipment on a display unit of the coupled external device in a display mode different from a display mode of the user equipment. 16. The user equipment of claim 15, wherein the user equipment is configured to: create image data, as a result of running at least one application in the user equipment, based on a multiscreen mode display setting associated with a number of applications running in a multitasking mode of the user equipment and information on the coupled external device; transmit the created image data to the coupled external device; and control the external device to display the transmitted image data in a multiscreen display mode based on the associated multiscreen mode display setting. 17. The user equipment of claim 15, wherein the user equipment is configured to: determine whether a display size of the coupled external device is greater than a reference display size upon the detection of the connection; determine a number of applications running in a multitasking mode of the user equipment when the display size of the coupled external device is greater than the reference display size; obtain a multiscreen mode display setting based on the determined number of applications running in a multitasking mode of the user equipment and information on the coupled external device; and control the coupled external device to display the produced image data on the display unit of the coupled external device in a multiscreen display mode based on the obtained multiscreen mode display setting. 18. 
The user equipment of claim 15, wherein the user equipment is configured to: establish a host-device connection between the user equipment and the coupled external device and set a host-device connection display setting based on information on a display unit of the coupled external device upon the detection of the connection; create image data, as a result of running an application in the user equipment, based on the host-device connection display setting; transmit the created image data to the coupled external device and control the coupled external device to display the transmitted image data in a single screen mode; determine whether a user input is received for activation of one of applications installed in the user equipment; upon the receipt of the user input for activation, create image data of the currently activated application; create multiscreen image data by combining the created image data of the currently activated application and the image data of the running application based on an associated multiscreen mode display setting; transmit the created multiscreen image data to the external device; and control the coupled external device to display the transmitted multiscreen image data in a dual screen display mode based on the associated multiscreen mode display setting. 19. The user equipment of claim 15, wherein the user equipment is configured to: obtain a multiscreen mode display setting based on a number of applications running in the user equipment and information on the coupled external device, wherein the multiscreen mode display setting includes at least one of information on a number of application windows in the multiscreen display mode, a maximum number of allowed application windows, a display size of each application window, a position and a display orientation of each application window, arrangement of each application window, control menus and icons for controlling an overall graphic user interface of the multiscreen display mode and each application window, and arrangement and sizes of the control menus and icons. 20. 
The user equipment of claim 15, wherein: the user equipment is configured to determine whether a user input is received for at least one of activation of a new application and closure of one of the application windows in the multiscreen display mode, after controlling the coupled external device to display image data produced in the user equipment in the multiscreen display mode; upon the receipt of the user input for activation of a new application, the user equipment is configured to: interrupt creating image data of one of running applications associated with the multiscreen display mode, create image data of the newly activated application and continuously create image data of remaining applications associated with the multiscreen display mode, create multiscreen image data by combining the created image data of the newly activated application and the continuously created image data of the remaining applications based on an associated multiscreen mode display setting, transmit the created multiscreen image data to the coupled external device, and control the coupled external device to display the transmitted multiscreen image data in the associated multiscreen display mode based on the associated multiscreen mode display setting; and upon the receipt of the user input for closing one of application windows in the multiscreen display mode, the user equipment is configured to: determine whether a number of application windows in a current multiscreen display mode reaches a maximum number of application windows allowed within the current multiscreen display mode, when the number of application windows in the current multiscreen display mode is smaller than the maximum number, obtain a multiscreen mode display setting by increasing a number of associated multitasking applications by one, create multiscreen image data based on the newly obtained multiscreen mode display setting by additionally including image data produced by the newly activated application, transmit the created multiscreen image data to the coupled external device, and control the external device to display the transmitted multiscreen image data based on the newly obtained multiscreen mode display setting.
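Claims 5 and 8 through 10 above describe picking a multiscreen layout from the number of multitasking applications and the connected display's properties, with the candidate settings held as a lookup table in UE memory. The Python sketch below illustrates that selection step only; the class fields, table entries, and reference display size are illustrative assumptions, not the applicant's implementation.

from dataclasses import dataclass

@dataclass
class MultiscreenSetting:
    window_count: int    # number of application windows shown
    max_windows: int     # maximum number of allowed application windows
    window_size: tuple   # display size (width, height) of each window
    arrangement: str     # arrangement of the application windows

# Hypothetical lookup table stored in UE memory, keyed by the number of
# multitasking applications (claim 9 stores the settings this way).
SETTINGS_LUT = {
    1: MultiscreenSetting(1, 4, (1920, 1080), "full"),
    2: MultiscreenSetting(2, 4, (960, 1080), "side-by-side"),
    3: MultiscreenSetting(3, 4, (960, 540), "grid"),
    4: MultiscreenSetting(4, 4, (960, 540), "grid"),
}

REFERENCE_DISPLAY_WIDTH = 1280  # assumed reference display size (claim 4)

def obtain_display_setting(app_count: int, display_width: int) -> MultiscreenSetting:
    """Mirror in single-screen mode on small displays; otherwise pick the
    multiscreen setting matching the number of running applications."""
    if display_width <= REFERENCE_DISPLAY_WIDTH:
        return MultiscreenSetting(1, 1, (display_width, display_width * 9 // 16), "full")
    return SETTINGS_LUT[min(max(app_count, 1), max(SETTINGS_LUT))]

A UE following the claimed method would call obtain_display_setting once when the connection is detected and again whenever the multitasking application count changes.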
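Claims 11 through 14 and 20 walk through adding a window for a newly activated application (interrupting a running one when the window limit is reached) and closing a window (resuming a background application). A minimal sketch of that bookkeeping, under the same illustrative assumptions as above:

class MultiscreenController:
    """Tracks which applications render into the multiscreen image data."""

    def __init__(self, max_windows: int):
        self.max_windows = max_windows
        self.foreground = []  # applications currently creating image data
        self.background = []  # applications interrupted from creating image data

    def activate(self, app: str) -> list:
        if len(self.foreground) >= self.max_windows:
            # Interrupt image-data creation for one running application,
            # which keeps running in a background mode (claim 12).
            self.background.append(self.foreground.pop(0))
        self.foreground.append(app)
        return self.compose()

    def close(self, app: str) -> list:
        # Stop creating image data for the closed application and resume
        # a background application into the freed window (claim 14).
        self.foreground.remove(app)
        if self.background:
            self.foreground.append(self.background.pop())
        return self.compose()

    def compose(self) -> list:
        # Stand-in for combining per-application image data into the
        # multiscreen image data transmitted to the external device.
        return [f"image_data({app})" for app in self.foreground]

For example, with MultiscreenController(max_windows=2), activating a third application pushes the oldest foreground application to the background, and closing any window pulls it back.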
2,600
10,405
10,405
15,927,873
2,643
The present application provides an access method, device, and system for user equipment (UE), and relates to the communications field. The method is performed by a first network device on a 3GPP network and includes: receiving, by using a second network device on a non-3GPP network, an access request message from the UE; generating a first NAS verification code according to an identifier of the UE and a NAS security context of the UE stored in the first network device; if the access request message includes a second NAS verification code, detecting whether the second NAS verification code is the same as the first NAS verification code; and if the second NAS verification code is the same as the first NAS verification code, sending an access key of the non-3GPP network to the second network device.
1. An access method for a user equipment (UE), the method comprising: receiving, by a first network device on a 3rd Generation Partnership Project (3GPP) network and by using a second network device on a non 3rd Generation Partnership Project (non-3GPP) network, an access request message from the UE, wherein the access request message comprises an identifier of the UE; generating, by the first network device, a first non-access stratum (NAS) verification code based on the identifier of the UE and a NAS security context of the UE that is stored in the first network device; when the access request message comprises a second NAS verification code, detecting, by the first network device, whether the second NAS verification code is the same as the first NAS verification code, wherein the second NAS verification code is a verification code that is generated by the UE based on a NAS security context stored in the UE; and when the second NAS verification code is the same as the first NAS verification code, sending, by the first network device, an access key of the non-3GPP network to the second network device. 2. The method of claim 1, further comprising: determining, by the first network device, the access key of the non-3GPP network based on a NAS sequence number of the 3GPP network, a key of the 3GPP network, and a type identifier of the non-3GPP network. 3. The method of claim 2, further comprising: obtaining, by the first network device from the NAS security context of the UE stored in the first network device, the NAS sequence number of the 3GPP network and the key of the 3GPP network; and receiving, by the first network device, the type identifier of the non-3GPP network from the second network device. 4. The method of claim 1, further comprising: when the second NAS verification code is different from the first NAS verification code, performing, by the first network device, security authentication on the UE; or when the access request message does not comprise a NAS verification code, performing, by the first network device, security authentication on the UE. 5. The method of claim 1, further comprising: obtaining, by the first network device, capability information of the UE, wherein the capability information is used to indicate a capability of the UE on the non-3GPP network; and sending, by the first network device, the capability information to the second network device, wherein the capability information is used by the second network device to determine a cryptographic algorithm, and the cryptographic algorithm is used by the second network device to generate an access stratum (AS) key of the non-3GPP network. 6. An access method for a user equipment (UE), the method comprising: generating, by the UE, an access request message, wherein the access request message comprises an identifier of the UE; and sending, by the UE, the access request message to a first network device on a 3rd Generation Partnership Project (3GPP) network by using a second network device on a non 3rd Generation Partnership Project (non-3GPP) network. 7. The method of claim 6, further comprising: determining, by the UE, an access key of the non-3GPP network based on a non-access stratum (NAS) sequence number of the 3GPP network, a key of the 3GPP network, and a preset type identifier of the non-3GPP network. 8. 
The method of claim 6, wherein the access request message comprises a second NAS verification code, and wherein the method further comprises: generating, by the UE, the second NAS verification code based on a NAS security context stored in the UE. 9. The method of claim 6, further comprising: receiving, by the UE by using the second network device, an authentication message from the first network device; and sending, by the UE by using the second network device, an authentication response message corresponding to the authentication message to the first network device. 10. The method of claim 7, further comprising: generating, by the UE, an access stratum (AS) key of the non-3GPP network based on the access key of the non-3GPP network. 11. An access device for a user equipment (UE), the access device comprising: a receiver, a transmitter, a processor, a bus, and a memory, wherein: the bus is configured to connect the receiver, the transmitter, the processor, and the memory; the receiver is configured to receive, using a second network device on a non 3rd Generation Partnership Project (non-3GPP) network, an access request message sent by the UE, wherein the access request message comprises an identifier of the UE; the processor is configured to execute a program stored in the memory, generate a first non-access stratum (NAS) verification code based on the identifier of the UE and a NAS security context of the UE that is stored in the access device of the UE, and when the access request message comprises a second NAS verification code, detect whether the second NAS verification code is the same as the first NAS verification code, wherein the second NAS verification code is a verification code that is generated by the UE based on a NAS security context stored in the UE; and the transmitter is configured to, when the second NAS verification code is the same as the first NAS verification code, send an access key of the non-3GPP network to the second network device. 12. The device of claim 11, wherein the processor is further configured to: determine the access key of the non-3GPP network based on a NAS sequence number of the 3GPP network, a key of the 3GPP network, and a type identifier of the non-3GPP network. 13. The device of claim 11, wherein the processor is further configured to: when the second NAS verification code is different from the first NAS verification code, perform security authentication on the UE; or when the access request message does not comprise a NAS verification code, perform security authentication on the UE. 14. The device of claim 11, wherein: the processor is further configured to obtain capability information of the UE, wherein the capability information is used to indicate a capability of the UE on the non-3GPP network; and the transmitter is further configured to send the capability information to the second network device, wherein the capability information is used by the second network device to determine a cryptographic algorithm, and the cryptographic algorithm is used by the second network device to generate an access stratum (AS) key of the non-3GPP network. 15. The device of claim 11, wherein the second network device is a wireless access point (AP). 16. 
An access device for a user equipment (UE), the access device comprising: a transmitter, a processor, a bus, and a memory, wherein: the bus is configured to connect the transmitter, the processor, and the memory; the processor is configured to execute a program stored in the memory and generate an access request message, wherein the access request message comprises an identifier of the access device of the UE; and the transmitter is configured to send the access request message to a first network device on a 3rd Generation Partnership Project (3GPP) network by using a second network device on a non 3rd Generation Partnership Project (non-3GPP) network. 17. The device of claim 16, wherein the processor is further configured to: determine an access key of the non-3GPP network based on a NAS sequence number of the 3GPP network, a key of the 3GPP network, and a preset type identifier of the non-3GPP network. 18. The device of claim 16, wherein the processor is further configured to: generate a second NAS verification code based on a NAS security context stored in the access device of the UE. 19. The device of claim 16, wherein the device further comprises: a receiver configured to receive, by using the second network device, an authentication message from the first network device, and wherein the transmitter is further configured to send, by using the second network device, an authentication response message corresponding to the authentication message to the first network device. 20. The device of claim 17, wherein the processor is further configured to: generate an access stratum (AS) key of the non-3GPP network based on the access key of the non-3GPP network.
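The gist of claim 1 is a verify-then-derive flow: the first network device recomputes the NAS verification code from its stored security context, compares it with the code carried in the access request, and only on a match derives and releases the non-3GPP access key. The Python sketch below uses HMAC-SHA-256 as a stand-in MAC and KDF; the standardized 3GPP key-derivation functions differ, and the field layouts and names here are assumptions for illustration only.

import hashlib
import hmac
from typing import Optional

def nas_verification_code(nas_key: bytes, ue_id: bytes) -> bytes:
    # Verification code over the UE identifier under the NAS context key.
    return hmac.new(nas_key, ue_id, hashlib.sha256).digest()

def derive_access_key(nas_key: bytes, nas_seq: int, type_id: bytes) -> bytes:
    # Access key from the 3GPP key, the NAS sequence number, and the
    # non-3GPP network type identifier (claims 2 and 7).
    material = nas_seq.to_bytes(4, "big") + type_id
    return hmac.new(nas_key, material, hashlib.sha256).digest()

def handle_access_request(ctx: dict, ue_id: bytes,
                          second_code: Optional[bytes], type_id: bytes):
    """Handling on the first network device for one access request."""
    first_code = nas_verification_code(ctx["nas_key"], ue_id)
    if second_code is None or not hmac.compare_digest(first_code, second_code):
        # Missing or mismatching code: fall back to full security
        # authentication of the UE (claim 4).
        return ("authenticate", None)
    # Matching code: derive the key and send it to the second network device.
    return ("grant", derive_access_key(ctx["nas_key"], ctx["nas_seq"], type_id))

Because the UE computes the same two functions over its own stored context, a matching code implies both sides share the NAS security context, which is what lets the method skip rerunning full authentication.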
2,600
10,406
10,406
15,989,063
2,621
Acoustic touch and/or force sensing system architectures and methods for acoustic touch and/or force sensing can be used to detect a position of an object touching a surface and an amount of force applied to the surface by the object. The position and/or an applied force can be determined using time-of-flight (TOF) techniques, for example. Acoustic touch sensing can utilize transducers (e.g., piezoelectric) to simultaneously transmit ultrasonic waves along a surface and through a thickness of a deformable material. The location of the object and the applied force can be determined based on the amount of time elapsing between the transmission of the waves and receipt of the reflected waves. In some examples, an acoustic touch sensing system can be insensitive to water contact on the device surface, and thus acoustic touch sensing can be used for touch sensing in devices that may become wet or fully submerged in water.
1. A touch and force sensitive device, comprising: a surface; a deformable material disposed between the surface and a rigid material, such that force on the surface causes a deformation of the deformable material; one or more transducers coupled to the surface and the deformable material and configured to transmit ultrasonic waves to and receive ultrasonic waves from the surface and the deformable material; and a processor capable of: determining a location of a contact by an object on the surface based on ultrasonic waves propagating in the surface; and determining an applied force by the contact on the surface based on ultrasonic waves propagating in the deformable material. 2. The device of claim 1, wherein the surface comprises a glass or sapphire external surface of the device, the rigid material comprises a portion of a metal housing of the device, and the deformable material forms a gasket between the metal housing and the surface. 3. The device of claim 1, wherein the one or more transducers comprises at least a first transducer coupled to the deformable material and configured to transmit an ultrasonic wave through the thickness of the deformable material. 4. The device of claim 3, wherein the first transducer is also configured to receive one or more ultrasonic reflections from a boundary between the deformable material and the rigid material. 5. The device of claim 3, wherein the one or more transducers comprises at least a second transducer coupled between the deformable material and the rigid material and configured to receive the ultrasonic wave transmitted through the thickness of the deformable material. 6. The device of claim 1, wherein the one or more transducers comprises at least one transducer configured to simultaneously transmit an ultrasonic wave in the surface and an ultrasonic wave through the deformable material. 7. The device of claim 1, wherein the one or more transducers comprises four transducers, wherein each of the four transducers is disposed proximate to a respective edge of the surface. 8. The device of claim 1, further comprising an ultrasonic absorbent material coupled to the deformable material, the ultrasonic absorbent material configured to dampen ultrasonic ringing in the deformable material. 9. The device of claim 1, wherein determining the location of the contact by the object on the surface comprises: determining a first time-of-flight of an ultrasonic wave propagating between a first edge of the surface and a first leading edge of the object proximate to the first edge; determining a second time-of-flight of an ultrasonic wave propagating between a second edge of the surface and a second leading edge of the object proximate to the second edge; determining a third time-of-flight of an ultrasonic wave propagating between a third edge of the surface and a third leading edge of the object proximate to the third edge; and determining a fourth time-of-flight of an ultrasonic wave propagating between a fourth edge of the surface and a fourth leading edge of the object proximate to the fourth edge. 10. The device of claim 1, wherein determining the applied force by the contact on the surface comprises: determining a time-of-flight of an ultrasonic wave propagating from a first side of the deformable material and reflecting off of a second side, opposite the first side, of the deformable material. 11. 
A method comprising: transmitting ultrasonic waves in a surface; receiving ultrasonic reflections from the surface; transmitting ultrasonic waves through a deformable material; receiving ultrasonic reflections from the deformable material; determining a position of an object in contact with the surface from the ultrasonic reflections received from the surface; and determining a force applied by the object in contact with the surface from the ultrasonic reflections received from the deformable material. 12. The method of claim 11, wherein at least one of the ultrasonic waves transmitted in the surface and at least one of the ultrasonic waves transmitted in the deformable material are transmitted simultaneously. 13. The method of claim 12, wherein the at least one of the ultrasonic waves transmitted in the surface and the at least one of the ultrasonic waves transmitted in the deformable material are transmitted by a common transducer. 14. The method of claim 11, further comprising: determining a time-of-flight through the deformable material based on a time difference between transmitting an ultrasonic wave through the deformable material and receiving an ultrasonic reflection from the deformable material, wherein the force applied by the object is determined based on the time-of-flight through the deformable material. 15. The method of claim 14, wherein the ultrasonic reflection from the deformable material results from the ultrasonic wave transmitted through the deformable material reaching a boundary between the deformable material and a rigid material. 16. The method of claim 14, wherein the ultrasonic reflection from the deformable material is received before the ultrasonic reflection from the surface. 17. The method of claim 11, further comprising: determining a time-of-flight in the surface based on a time difference between transmitting an ultrasonic wave in the surface and receiving an ultrasonic reflection from the surface corresponding to the object in contact with the surface, wherein determining the position of the object comprises determining a distance from an edge of the surface to a leading edge of the object proximate to the edge of the surface based on the time-of-flight in the surface. 18. 
A non-transitory computer readable storage medium storing instructions, which when executed by a device comprising a surface, a plurality of acoustic transducers coupled to edges of the surface, an acoustic touch and force sensing circuit, and one or more processors, cause the acoustic touch and force sensing circuit and the one or more processors to: for each of the plurality of acoustic transducers: simultaneously transmit an ultrasonic wave in the surface toward an opposite edge of the surface and transmit an ultrasonic wave through a deformable material; receive an ultrasonic reflection from the deformable material in response to the ultrasonic wave transmitted through the deformable material traversing the thickness of the deformable material; receive an ultrasonic reflection from the surface; determine a first time-of-flight between the ultrasonic wave transmitted through the deformable material and the ultrasonic reflection from the deformable material; and determine a second time-of-flight between the ultrasonic wave transmitted in the surface and the ultrasonic reflection from the surface; determine a position of an object on the surface based on respective second time-of-flight measurements corresponding to the plurality of transducers; and determine an amount of applied force by the object on the surface based on respective first time-of-flight measurements corresponding to the plurality of transducers. 19. The non-transitory computer readable storage medium of claim 18, wherein the ultrasonic wave transmitted in the surface and the ultrasonic wave transmitted through the deformable material comprise shear waves. 20. The non-transitory computer readable storage medium of claim 18, wherein the ultrasonic reflection from the deformable material is received before the ultrasonic reflection from the surface. 21. An electronic device, comprising: a cover surface; a deformable material disposed between the cover surface and a housing of the electronic device; an acoustic transducer coupled to the cover surface and the deformable material and configured to produce a first acoustic wave in the cover surface and a second acoustic wave in the deformable material. 22. The electronic device of claim 21, wherein the deformable material and cover surface are further configured such that the first acoustic wave is capable of being propagated in a first direction and the second acoustic wave is capable of being propagated in a second direction, different from the first direction. 23. The electronic device of claim 22, wherein the first acoustic wave is incident upon a bezel portion of the cover surface in a third direction and reflected by the bezel portion of the cover surface in the first direction, different from the third direction. 24. The electronic device of claim 23, wherein the first and third directions are opposite to one another. 25. The electronic device of claim 23, wherein the first and third directions are orthogonal. 26. The electronic device of claim 21, wherein the deformable material is included in a gasket positioned between the housing and a first side of the cover surface. 27. 
A wearable audio device, comprising: a surface; a plurality of ultrasonic transducers coupled to the surface, the plurality of ultrasonic transducers configured to generate or receive a plurality of ultrasonic waves; and a processor coupled to the plurality of ultrasonic transducers and configured to determine a location of an object contacting the surface based on one or more received ultrasonic waves of the plurality of ultrasonic waves.
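Claims 9, 10, 14, and 17 reduce both sensing problems to time-of-flight arithmetic: distance to the touch from an edge of the surface, and compression of the deformable gasket under applied force. A minimal Python sketch of that arithmetic follows; the wave speeds and the linear stiffness are assumed values for illustration, not figures from the application.

SURFACE_WAVE_SPEED = 3000.0  # m/s, assumed shear-wave speed in the cover surface
GASKET_WAVE_SPEED = 1000.0   # m/s, assumed speed through the deformable material
GASKET_STIFFNESS = 5.0e4     # N/m, assumed linear stiffness of the gasket

def edge_distance(tof: float, speed: float = SURFACE_WAVE_SPEED) -> float:
    # Round trip: the wave travels from the edge to the touch and back,
    # so the one-way distance is half the speed-time product.
    return speed * tof / 2.0

def touch_position(tof_left: float, tof_bottom: float) -> tuple:
    # Distances from two orthogonal edges give leading-edge coordinates;
    # claim 9 repeats this from all four edges to bound the contact.
    return (edge_distance(tof_left), edge_distance(tof_bottom))

def applied_force(tof_unloaded: float, tof_loaded: float) -> float:
    # Force compresses the gasket and shortens the through-thickness round
    # trip; the deflection maps to force through the assumed stiffness.
    deflection = GASKET_WAVE_SPEED * (tof_unloaded - tof_loaded) / 2.0
    return GASKET_STIFFNESS * deflection

For example, a round trip through the gasket that shortens from 4.0 us to 3.8 us corresponds to a 0.1 mm deflection and, under the assumed stiffness, a reported force of 5 N.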
Acoustic touch and/or force sensing system architectures and methods for acoustic touch and/or force sensing can be used to detect a position of an object touching a surface and an amount of force applied to the surface by the object. The position and/or an applied force can be determined using time-of-flight (TOF) techniques, for example. Acoustic touch sensing can utilize transducers (e.g., piezoelectric) to simultaneously transmit ultrasonic waves along a surface and through a thickness of a deformable material. The location of the object and the applied force can be determined based on the amount of time elapsing between the transmission of the waves and receipt of the reflected waves. In some examples, an acoustic touch sensing system can be insensitive to water contact on the device surface, and thus acoustic touch sensing can be used for touch sensing in devices that may become wet or fully submerged in water.1. A touch and force sensitive device, comprising: a surface; a deformable material disposed between the surface and a rigid material, such that force on the surface causes a deformation of the deformable material; one or more transducers coupled to the surface and the deformable material and configured to transmit ultrasonic waves to and receive ultrasonic waves from the surface and the deformable material; and a processor capable of: determining a location of a contact by an object on the surface based on ultrasonic waves propagating in the surface; and determining an applied force by the contact on the surface based on ultrasonic waves propagating in the deformable material. 2. The device of claim 1, wherein the surface comprises a glass or sapphire external surface of the device, the rigid material comprises a portion of a metal housing of the device, and the deformable material forms a gasket between the metal housing and the surface. 3. The device of claim 1, wherein the one or more transducers comprises at least a first transducer coupled to the deformable material and configured to transmit an ultrasonic wave through the thickness of the deformable material. 4. The device of claim 3, wherein the first transducer is also configured to receive one or more ultrasonic reflections from a boundary between the deformable material and the rigid material. 5. The device of claim 3, wherein the one or more transducers comprises at least a second transducer coupled between the deformable material and the rigid material and configured to receive the ultrasonic wave transmitted through the thickness of the deformable material. 6. The device of claim 1, wherein the one or more transducers comprises at least one transducer configured to simultaneously transmit an ultrasonic wave in the surface and an ultrasonic wave through the deformable material. 7. The device of claim 1, wherein the one or more transducers comprises four transducers, wherein each of the four transducers is disposed proximate to a respective edge of the surface. 8. The device of claim 1, further comprising an ultrasonic absorbent material coupled to the deformable material, the ultrasonic absorbent material configured to dampen ultrasonic ringing in the deformable material. 9. 
The device of claim 1, wherein determining the location of the contact by the object on the surface comprises: determining a first time-of-flight of an ultrasonic wave propagating between a first edge the surface and a first leading edge of the object proximate to the first edge; determining a second time-of-flight of an ultrasonic wave propagating between a second edge the surface and a second leading edge of the object proximate to the second edge; determining a third time-of-flight of an ultrasonic wave propagating between a third edge the surface and a third leading edge of the object proximate to the third edge; and determining a fourth time-of-flight of an ultrasonic wave propagating between a fourth edge the surface and a fourth leading edge of the object proximate to the fourth edge. 10. The device of claim 1, wherein determining the applied force by the contact on the surface comprises: determining a time-of-flight of an ultrasonic wave propagating from a first side of the deformable material and reflecting off of a second side, opposite the first side, of the deformable material. 11. A method comprising: transmitting ultrasonic waves in a surface; receiving ultrasonic reflections from the surface; transmitting ultrasonic waves through a deformable material; receiving ultrasonic reflections from the deformable material; determining a position of an object in contact with the surface from the ultrasonic reflections received from the surface; and determining a force applied by the object in contact with the surface from the ultrasonic reflections received from the deformable material. 12. The method of claim 11, wherein at least one of the ultrasonic waves transmitted in the surface and at least one of the ultrasonic waves transmitted in the deformable material are transmitted simultaneously. 13. The method of claim 12, wherein the at least one of the ultrasonic waves transmitted in the surface and the at least one of the ultrasonic waves transmitted in the deformable material are transmitted by a common transducer. 14. The method of claim 11, further comprising: determining a time-of-flight through the deformable material based on a time difference between transmitting an ultrasonic wave through the deformable material and receiving an ultrasonic reflection from the deformable material, wherein the force applied by the object is determined based on the time-of-flight through the deformable material. 15. The method of claim 14, wherein the ultrasonic reflection from the deformable material results from the ultrasonic wave transmitted through the deformable material reaching a boundary between the deformable material and a rigid material. 16. The method of claim 14, wherein the ultrasonic reflection from the deformable material is received before the ultrasonic reflection from the surface. 17. The method of claim 11, further comprising: determining a time-of-flight in the surface based on a time difference between transmitting an ultrasonic wave in the surface and receiving an ultrasonic reflection from the surface corresponding to the object in contact with the surface, wherein determining the position of the object comprises determining a distance from an edge of the surface to a leading edge of the object proximate to the edge of the surface based on the time-of-flight in the surface. 18. 
A non-transitory computer readable storage medium storing instructions, which when executed by a device comprising a surface, a plurality of acoustic transducers coupled to edges of the surface, an acoustic touch and force sensing circuit, and one or more processors, cause the acoustic touch and force sensing circuit and the one or more processors to: for each of the plurality of acoustic transducers: simultaneously transmit an ultrasonic wave in the surface toward an opposite edge of the surface and transmit an ultrasonic wave through a deformable material; receive an ultrasonic reflection from the deformable material in response to the ultrasonic wave transmitted through the deformable material traversing the thickness of the deformable material; receive an ultrasonic reflection from the surface; determine a first time-of-flight between the ultrasonic wave transmitted through the deformable material and the ultrasonic reflection from the deformable material; and determine a second time-of-flight between the ultrasonic wave transmitted in the surface and the ultrasonic reflection from the surface; determine a position of an object on the surface based on respective second time-of-flight measurements corresponding to the plurality of transducers; and determine an amount of applied force by the object on the surface based on respective first time-of-flight measurements corresponding to the plurality of transducers. 19. The non-transitory computer readable storage medium of claim 18, wherein the ultrasonic wave transmitted in the surface and the ultrasonic wave transmitted through the deformable material comprise shear waves. 20. The non-transitory computer readable storage medium of claim 18, wherein the ultrasonic reflection from the deformable material is received before the ultrasonic reflection from the surface. 21. An electronic device, comprising: a cover surface; a deformable material disposed between the cover surface and a housing of the electronic device; an acoustic transducer coupled to the cover surface and the deformable material and configured to produce a first acoustic wave in the cover surface and a second acoustic wave in the deformable material. 22. The electronic device of claim 21, wherein the deformable material and cover surface are further configured such that the first acoustic wave is capable of being propagated in a first direction and the second acoustic wave is capable of being propagated in a second direction, different from the first direction. 23. The electronic device of claim 22, wherein the first acoustic wave is incident upon a bezel portion of the cover glass in a third direction and reflected by the bezel portion of the cover glass in the first direction, different from the third direction. 24. The electronic device of claim 23, wherein the first and third directions are opposite to one another. 25. The electronic device of claim 23, wherein the first and third directions are orthogonal. 26. The electronic device of claim 21, wherein the deformable material is included in a gasket positioned between the housing and a first side of the cover surface. 27. 
A wearable audio device, comprising: a surface; a plurality of ultrasonic transducers coupled to the surface, the plurality of ultrasonic transducers configured to generate or receive a plurality of ultrasonic waves; and a processor coupled to the plurality of ultrasonic transducers and configured to determine a location of an object contacting the surface based on one or more received ultrasonic waves of the plurality of ultrasonic waves.
2,600
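The acoustic record above determines touch position and applied force from ultrasonic time-of-flight (TOF) measurements. A minimal Python sketch of that computation follows; it is only an illustration under assumptions — the wave speeds, gasket thickness, and the linear spring model mapping compression to force are invented, not taken from the record.

# Hypothetical TOF position/force computation; all constants are assumptions.
V_SURFACE = 4000.0             # assumed shear-wave speed in the cover glass (m/s)
V_GASKET = 1000.0              # assumed wave speed through the deformable gasket (m/s)
GASKET_REST_THICKNESS = 0.5e-3 # assumed uncompressed gasket thickness (m)
STIFFNESS = 2.0e4              # assumed linear stiffness (N per m of compression)

def edge_distance(tof_seconds):
    # The wave travels edge -> object -> edge, so halve the round trip.
    return V_SURFACE * tof_seconds / 2.0

def touch_position(tof_left, tof_bottom):
    # Distances from two orthogonal edges give an (x, y) coordinate.
    return edge_distance(tof_left), edge_distance(tof_bottom)

def applied_force(tof_gasket):
    # Round trip through the gasket and back off the rigid boundary:
    # a shorter TOF means a thinner (compressed) gasket, hence more force.
    thickness = V_GASKET * tof_gasket / 2.0
    compression = max(0.0, GASKET_REST_THICKNESS - thickness)
    return compression * STIFFNESS  # simple linear spring model

if __name__ == "__main__":
    x, y = touch_position(10e-6, 15e-6)
    print(f"touch at ({x * 1000:.1f} mm, {y * 1000:.1f} mm), "
          f"force ~ {applied_force(0.9e-6):.2f} N")

With the sample inputs, a 10 microsecond round trip along the glass maps to a touch 20 mm from that edge, and a 0.9 microsecond gasket round trip maps to 0.05 mm of compression, about 1 N under the assumed stiffness.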
10,407
10,407
14,915,224
2,694
A device including a mechanical input and a touch-sensitive surface for detecting one or more touch inputs and an input from the mechanical input. The touch-sensitive surface can include a first portion for detecting at least the touch inputs, and a second portion for detecting at least the mechanical input. The touch-sensitive surface can include a first portion for detecting at least the touch inputs and the mechanical input. The mechanical input can comprise an electrically conductive material, and the mechanical input can be detected based on capacitance measurements between the mechanical input and the touch-sensitive surface. The device can include a sensing element, the mechanical input can comprise an electrically insulating material, and the mechanical input can be detected based on capacitance measurements between the touch-sensitive surface and the sensing element. The device can include logic to differentiate between the touch inputs and the mechanical input.
1. A device comprising: a mechanical input; and a touch-sensitive surface configured to detect one or more touch inputs and an input from the mechanical input. 2. The device of claim 1, wherein the touch-sensitive surface comprises: a first portion for detecting at least the one or more touch inputs; and a second portion for detecting at least the input from the mechanical input. 3. The device of claim 1, wherein the touch-sensitive surface comprises a first portion for detecting at least the one or more touch inputs and the input from the mechanical input. 4. The device of claim 1, wherein the mechanical input comprises an electrically conductive material, and detecting the input from the mechanical input comprises detecting the input from the mechanical input based on one or more capacitance measurements between the mechanical input and the touch-sensitive surface. 5. The device of claim 1, further comprising a sensing element, wherein the mechanical input comprises an electrically insulating material and is disposed between the touch-sensitive surface and the sensing element, and detecting the input from the mechanical input comprises detecting the input from the mechanical input based on one or more capacitance measurements between the touch-sensitive surface and the sensing element. 6. The device of claim 2, further comprising a barrier, wherein the barrier at least partially shields the second portion from the one or more touch inputs. 7. The device of claim 1, further comprising logic, the logic configured to differentiate between the one or more touch inputs and the input from the mechanical input. 8. The device of claim 7, wherein differentiating comprises differentiating between the one or more touch inputs and the input from the mechanical input based on a difference in magnitude of a capacitance measurement associated with the one or more touch inputs and a capacitance measurement associated with the input from the mechanical input. 9. The device of claim 7, wherein differentiating comprises differentiating between the one or more touch inputs and the input from the mechanical input based on a difference in size of a capacitance measurement associated with the one or more touch inputs and a capacitance measurement associated with the input from the mechanical input. 10. The device of claim 1, wherein the mechanical input comprises one or more extensions, and the touch-sensitive surface is further configured to detect a position of each of the one or more extensions. 11. The device of claim 1, further comprising a structure for providing a tactile response for the mechanical input. 12. The device of claim 1, further comprising a removable component, wherein the touch-sensitive surface is further configured to detect a presence of the removable component. 13. The device of claim 1, wherein the touch-sensitive surface is further configured to detect damage to the device. 14. A device comprising: a first electrode; a second electrode; and logic coupled to the first electrode and the second electrode, wherein the logic is configured to detect damage to the device based on a capacitance measurement between the first electrode and the second electrode. 15. A method comprising: detecting an input from a mechanical input using a touch-sensitive surface, the touch-sensitive surface configured to detect the input from the mechanical input and one or more touch inputs. 16. 
The method of claim 15, wherein the touch-sensitive surface comprises: a first portion for detecting at least the one or more touch inputs; and a second portion for detecting at least the input from the mechanical input. 17. The method of claim 15, wherein the touch-sensitive surface comprises a first portion for detecting at least the one or more touch inputs and the input from the mechanical input. 18. The method of claim 15, wherein the mechanical input comprises an electrically conductive material, and detecting the input from the mechanical input comprises detecting the input from the mechanical input based on one or more capacitance measurements between the mechanical input and the touch-sensitive surface. 19. The method of claim 15, wherein the mechanical input comprises an electrically insulating material and is disposed between the touch-sensitive surface and a sensing element, and detecting the input from the mechanical input comprises detecting the input from the mechanical input based on one or more capacitance measurements between the touch-sensitive surface and the sensing element. 20. The method of claim 16, wherein a barrier at least partially shields the second portion from the one or more touch inputs. 21. The method of claim 15, further comprising differentiating between the one or more touch inputs and the input from the mechanical input. 22. The method of claim 21, wherein differentiating comprises differentiating between the one or more touch inputs and the input from the mechanical input based on a difference in magnitude of a capacitance measurement associated with the one or more touch inputs and a capacitance measurement associated with the input from the mechanical input. 23. The method of claim 21, wherein differentiating comprises differentiating between the one or more touch inputs and the input from the mechanical input based on a difference in size of a capacitance measurement associated with the one or more touch inputs and a capacitance measurement associated with the input from the mechanical input. 24. The method of claim 15, wherein the mechanical input comprises one or more extensions, the method further comprising detecting a position of each of the one or more extensions. 25. The method of claim 15, further comprising providing a tactile response for the mechanical input.
A device including a mechanical input and a touch-sensitive surface for detecting one or more touch inputs and an input from the mechanical input. The touch-sensitive surface can include a first portion for detecting at least the touch inputs, and a second portion for detecting at least the mechanical input. The touch-sensitive surface can include a first portion for detecting at least the touch inputs and the mechanical input. The mechanical input can comprise an electrically conductive material, and the mechanical input can be detected based on capacitance measurements between the mechanical input and the touch-sensitive surface. The device can include a sensing element, the mechanical input can comprise an electrically insulating material, and the mechanical input can be detected based on capacitance measurements between the touch-sensitive surface and the sensing element. The device can include logic to differentiate between the touch inputs and the mechanical input.1. A device comprising: a mechanical input; and a touch-sensitive surface configured to detect one or more touch inputs and an input from the mechanical input. 2. The device of claim 1, wherein the touch-sensitive surface comprises: a first portion for detecting at least the one or more touch inputs; and a second portion for detecting at least the input from the mechanical input. 3. The device of claim 1, wherein the touch-sensitive surface comprises a first portion for detecting at least the one or more touch inputs and the input from the mechanical input. 4. The device of claim 1, wherein the mechanical input comprises an electrically conductive material, and detecting the input from the mechanical input comprises detecting the input from the mechanical input based on one or more capacitance measurements between the mechanical input and the touch-sensitive surface. 5. The device of claim 1, further comprising a sensing element, wherein the mechanical input comprises an electrically insulating material and is disposed between the touch-sensitive surface and the sensing element, and detecting the input from the mechanical input comprises detecting the input from the mechanical input based on one or more capacitance measurements between the touch-sensitive surface and the sensing element. 6. The device of claim 2, further comprising a barrier, wherein the barrier at least partially shields the second portion from the one or more touch inputs. 7. The device of claim 1, further comprising logic, the logic configured to differentiate between the one or more touch inputs and the input from the mechanical input. 8. The device of claim 7, wherein differentiating comprises differentiating between the one or more touch inputs and the input from the mechanical input based on a difference in magnitude of a capacitance measurement associated with the one or more touch inputs and a capacitance measurement associated with the input from the mechanical input. 9. The device of claim 7, wherein differentiating comprises differentiating between the one or more touch inputs and the input from the mechanical input based on a difference in size of a capacitance measurement associated with the one or more touch inputs and a capacitance measurement associated with the input from the mechanical input. 10. The device of claim 1, wherein the mechanical input comprises one or more extensions, and the touch-sensitive surface is further configured to detect a position of each of the one or more extensions. 11. 
The device of claim 1, further comprising a structure for providing a tactile response for the mechanical input. 12. The device of claim 1, further comprising a removable component, wherein the touch-sensitive surface is further configured to detect a presence of the removable component. 13. The device of claim 1, wherein the touch-sensitive surface is further configured to detect damage to the device. 14. A device comprising: a first electrode; a second electrode; and logic coupled to the first electrode and the second electrode, wherein the logic is configured to detect damage to the device based on a capacitance measurement between the first electrode and the second electrode. 15. A method comprising: detecting an input from a mechanical input using a touch-sensitive surface, the touch-sensitive surface configured to detect the input from the mechanical input and one or more touch inputs. 16. The method of claim 15, wherein the touch-sensitive surface comprises: a first portion for detecting at least the one or more touch inputs; and a second portion for detecting at least the input from the mechanical input. 17. The method of claim 15, wherein the touch-sensitive surface comprises a first portion for detecting at least the one or more touch inputs and the input from the mechanical input. 18. The method of claim 15, wherein the mechanical input comprises an electrically conductive material, and detecting the input from the mechanical input comprises detecting the input from the mechanical input based on one or more capacitance measurements between the mechanical input and the touch-sensitive surface. 19. The method of claim 15, wherein the mechanical input comprises an electrically insulating material and is disposed between the touch-sensitive surface and a sensing element, and detecting the input from the mechanical input comprises detecting the input from the mechanical input based on one or more capacitance measurements between the touch-sensitive surface and the sensing element. 20. The method of claim 16, wherein a barrier at least partially shields the second portion from the one or more touch inputs. 21. The method of claim 15, further comprising differentiating between the one or more touch inputs and the input from the mechanical input. 22. The method of claim 21, wherein differentiating comprises differentiating between the one or more touch inputs and the input from the mechanical input based on a difference in magnitude of a capacitance measurement associated with the one or more touch inputs and a capacitance measurement associated with the input from the mechanical input. 23. The method of claim 21, wherein differentiating comprises differentiating between the one or more touch inputs and the input from the mechanical input based on a difference in size of a capacitance measurement associated with the one or more touch inputs and a capacitance measurement associated with the input from the mechanical input. 24. The method of claim 15, wherein the mechanical input comprises one or more extensions, the method further comprising detecting a position of each of the one or more extensions. 25. The method of claim 15, further comprising providing a tactile response for the mechanical input.
2,600
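Claims 7-9 of the record above differentiate a finger touch from a mechanical input by the magnitude and size of the capacitance signal. The sketch below illustrates one way such logic could look; the thresholds, the cell-count footprint measure, and the class names are assumptions invented for illustration.

# Hypothetical touch-vs-mechanical-input classifier based on capacitance.
from dataclasses import dataclass

@dataclass
class Measurement:
    delta_capacitance_pf: float  # change in capacitance, in picofarads
    footprint_cells: int         # number of sensor cells above the noise floor

def classify(m: Measurement) -> str:
    # Assumption: a conductive plunger pressed onto a dedicated region gives
    # a larger, more localized signal than a fingertip, which spreads a
    # moderate signal across several adjacent cells.
    if m.delta_capacitance_pf > 2.0 and m.footprint_cells <= 2:
        return "mechanical input"
    if 0.2 < m.delta_capacitance_pf <= 2.0 and m.footprint_cells >= 3:
        return "touch input"
    return "noise"

print(classify(Measurement(3.1, 1)))  # -> mechanical input
print(classify(Measurement(0.8, 6)))  # -> touch input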
10,408
10,408
15,747,225
2,621
An information processing apparatus is provided in which an enlarged image that is updated in conjunction with a motion of a head of a user is displayed on a video display apparatus worn on the head of the user without making the user feel discomfort. Disclosed herein is an information processing apparatus that is connected to a video display apparatus worn on the head of the user and displays a target image to be updated in conjunction with a change in the direction of the video display apparatus onto the video display apparatus and, at the same time, displays an enlarged image obtained by enlarging a part of the target image by superimposing the enlarged image on the target image with a size smaller than a display area of the target image.
1. An information processing apparatus connected to a video display apparatus worn on a head of a user, the information processing apparatus comprising: an acquisition block configured to acquire a direction of the video display apparatus; a target image display control block configured to display a target image to be updated in conjunction with a change in a direction of the video display apparatus onto the video display apparatus; and an enlarged image display control block configured to display an enlarged image obtained by enlarging a part of the target image onto the video display apparatus, wherein the enlarged image display control block displays the enlarged image by superimposing the enlarged image on the target image with a size smaller than that of a display area of the target image. 2. The information processing apparatus according to claim 1, wherein the video display apparatus displays an image for the right eye and an image for the left eye, and the enlarged image display control block provides control such that a parallax in the enlarged image between the image for the right eye and the image for the left eye gets greater than a parallax in the target image. 3. The information processing apparatus according to claim 1, wherein the target image display control block executes, while the enlarged image is displayed, blurring processing on the target image before being displayed. 4. The information processing apparatus according to claim 1, wherein the enlarged image display control block displays, as the enlarged image, an image obtained by enlarging an enlarged target area that is a part of the target image, if a direction of the video display apparatus matches a predetermined reference direction, the enlarged image display control block matches a center of the enlarged target area with a center of the target image and, if a direction of the video display apparatus shifts from the reference direction, shifts the center of the enlarged target area to a position near an outer periphery of the target image. 5. The information processing apparatus according to claim 1, wherein if a direction of the video display apparatus matches a predetermined reference direction, the enlarged image display control block displays the enlarged image at a center of a display area of the target image and, if a direction of the video display apparatus shifts from the predetermined reference direction, shifts a display position of the enlarged image to a position near an outer periphery of the target image. 6. A video display system comprising: a video display apparatus worn on a head of a user; and an information processing apparatus connected to the video display apparatus, wherein the information processing apparatus includes an acquisition block configured to acquire a direction of the video display apparatus, a target image display control block configured to display a target image to be updated in conjunction with a change in a direction of the video display apparatus onto the video display apparatus, and an enlarged image display control block configured to display an enlarged image obtained by enlarging a part of the target image onto the video display apparatus, wherein the enlarged image display control block displays the enlarged image by superimposing the enlarged image on the target image with a size smaller than that of a display area of the target image. 7. 
A control method for an information processing apparatus connected to a video display apparatus worn on a head of a user, the control method comprising: acquiring a direction of the video display apparatus; displaying a target image to be updated in conjunction with a change in a direction of the video display apparatus onto the video display apparatus; and displaying an enlarged image obtained by enlarging a part of the target image onto the video display apparatus, wherein the enlarged image displaying displays the enlarged image by superimposing the enlarged image on the target image with a size smaller than that of a display area of the target image. 8. A non-transitory, computer readable storage medium containing a computer program, which when executed by a computer connected to a video display apparatus worn on a head of a user, causes the computer to execute actions, comprising: acquiring a direction of the video display apparatus; displaying a target image to be updated in conjunction with a change in a direction of the video display apparatus onto the video display apparatus; and displaying an enlarged image obtained by enlarging a part of the target image onto the video display apparatus, wherein the enlarged image display control means displays the enlarged image by superimposing the enlarged image on the target image with a size smaller than that of a display area of the target image. 9. An apparatus, comprising: a non-transitory, computer readable storage medium containing a computer program; a video display apparatus worn on a head of a user; a computer connected to the video display apparatus; wherein the computer program, when executed by the computer, causes the computer to carry out actions, including: acquiring a direction of the video display apparatus; displaying a target image to be updated in conjunction with a change in a direction of the video display apparatus onto the video display apparatus; and displaying an enlarged image obtained by enlarging a part of the target image onto the video display apparatus, wherein the enlarged image display control means displays the enlarged image by superimposing the enlarged image on the target image with a size smaller than that of a display area of the target image.
An information processing apparatus is provided in which an enlarged image that is updated in conjunction with a motion of a head of a user is displayed on a video display apparatus worn on the head of the user without making the user feel discomfort. Disclosed herein is an information processing apparatus that is connected to a video display apparatus worn on the head of the user and displays a target image to be updated in conjunction with a change in the direction of the video display apparatus onto the video display apparatus and, at the same time, displays an enlarged image obtained by enlarging a part of the target image by superimposing the enlarged image on the target image with a size smaller than a display area of the target image.1. An information processing apparatus connected to a video display apparatus worn on a head of a user, the information processing apparatus comprising: an acquisition block configured to acquire a direction of the video display apparatus; a target image display control block configured to display a target image to be updated in conjunction with a change in a direction of the video display apparatus onto the video display apparatus; and an enlarged image display control block configured to display an enlarged image obtained by enlarging a part of the target image onto the video display apparatus, wherein the enlarged image display control block displays the enlarged image by superimposing the enlarged image on the target image with a size smaller than that of a display area of the target image. 2. The information processing apparatus according to claim 1, wherein the video display apparatus displays an image for the right eye and an image for the left eye, and the enlarged image display control block provides control such that a parallax in the enlarged image between the image for the right eye and the image for the left eye gets greater than a parallax in the target image. 3. The information processing apparatus according to claim 1, wherein the target image display control block executes, while the enlarged image is displayed, blurring processing on the target image before being displayed. 4. The information processing apparatus according to claim 1, wherein the enlarged image display control block displays, as the enlarged image, an image obtained by enlarging an enlarged target area that is a part of the target image, if a direction of the video display apparatus matches a predetermined reference direction, the enlarged image display control block matches a center of the enlarged target area with a center of the target image and, if a direction of the video display apparatus shifts from the reference direction, shifts the center of the enlarged target area to a position near an outer periphery of the target image. 5. The information processing apparatus according to claim 1, wherein if a direction of the video display apparatus matches a predetermined reference direction, the enlarged image display control block displays the enlarged image at a center of a display area of the target image and, if a direction of the video display apparatus shifts from the predetermined reference direction, shifts a display position of the enlarged image to a position near an outer periphery of the target image. 6. 
A video display system comprising: a video display apparatus worn on a head of a user; and an information processing apparatus connected to the video display apparatus, wherein the information processing apparatus includes an acquisition block configured to acquire a direction of the video display apparatus, a target image display control block configured to display a target image to be updated in conjunction with a change in a direction of the video display apparatus onto the video display apparatus, and an enlarged image display control block configured to display an enlarged image obtained by enlarging a part of the target image onto the video display apparatus, wherein the enlarged image display control block displays the enlarged image by superimposing the enlarged image on the target image with a size smaller than that of a display area of the target image. 7. A control method for an information processing apparatus connected to a video display apparatus worn on a head of a user, the control method comprising: acquiring a direction of the video display apparatus; displaying a target image to be updated in conjunction with a change in a direction of the video display apparatus onto the video display apparatus; and displaying an enlarged image obtained by enlarging a part of the target image onto the video display apparatus, wherein the enlarged image displaying displays the enlarged image by superimposing the enlarged image on the target image with a size smaller than that of a display area of the target image. 8. A non-transitory, computer readable storage medium containing a computer program, which when executed by a computer connected to a video display apparatus worn on a head of a user, causes the computer to execute actions, comprising: acquiring a direction of the video display apparatus; displaying a target image to be updated in conjunction with a change in a direction of the video display apparatus onto the video display apparatus; and displaying an enlarged image obtained by enlarging a part of the target image onto the video display apparatus, wherein the enlarged image display control means displays the enlarged image by superimposing the enlarged image on the target image with a size smaller than that of a display area of the target image. 9. An apparatus, comprising: a non-transitory, computer readable storage medium containing a computer program; a video display apparatus worn on a head of a user; a computer connected to the video display apparatus; wherein the computer program, when executed by the computer, causes the computer to carry out actions, including: acquiring a direction of the video display apparatus; displaying a target image to be updated in conjunction with a change in a direction of the video display apparatus onto the video display apparatus; and displaying an enlarged image obtained by enlarging a part of the target image onto the video display apparatus, wherein the enlarged image display control means displays the enlarged image by superimposing the enlarged image on the target image with a size smaller than that of a display area of the target image.
2,600
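Claim 4 of the head-mounted display record above slides the magnified region from the image center toward the periphery as the headset turns away from a reference direction. The sketch below illustrates that mapping; the angle range, the 0.8 periphery factor, and the linear interpolation are assumptions for illustration, not values from the record.

# Hypothetical mapping from headset direction to enlarged-area center.
def enlarged_area_center(yaw_deg, pitch_deg, image_w, image_h,
                         max_angle_deg=30.0):
    # Normalize the offset from the reference direction to [-1, 1].
    nx = max(-1.0, min(1.0, yaw_deg / max_angle_deg))
    ny = max(-1.0, min(1.0, pitch_deg / max_angle_deg))
    # Zero offset -> image center; full offset -> near the outer periphery.
    cx = image_w / 2 + nx * (image_w / 2 * 0.8)
    cy = image_h / 2 + ny * (image_h / 2 * 0.8)
    return cx, cy

print(enlarged_area_center(0, 0, 1920, 1080))   # (960.0, 540.0): centered
print(enlarged_area_center(30, 0, 1920, 1080))  # (1728.0, 540.0): near right edge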
10,409
10,409
15,787,090
2,649
In one or more implementations, a request is received at a client device to initiate a communication session with a selected contact using a communication service. One of a first communication network or a second communication network for the communication session with the selected contact is selected at the client device. The selection is based on the selected contact and user preferences. Next, the communication session is established with the selected contact using the selected first communication network or second communication network.
1. A method comprising: receiving, at a client device, a request to initiate a communication session with a selected contact using a communication service; selecting, at the client device, one of a first communication network or a second communication network for the communication session with the selected contact based on the selected contact and user preferences; and establishing the communication session with the selected contact using the selected first communication network or second communication network. 2. The method of claim 1, wherein the user preferences map two or more groups of particular types of contacts to one of the first communication network or the second communication network. 3. The method of claim 2, wherein the selecting one of the first communication network or the second communication network further comprises selecting the first communication network or the second communication network based on whether the selected contact comprises the particular type of contact mapped to the first communication network or the particular type of contact mapped to the second communication network. 4. The method of claim 1, wherein the first communication network comprises a packet data network. 5. The method of claim 1, wherein the second communication network comprises a public switched telephone network. 6. The method of claim 1, wherein the client device comprises a base station, and wherein the request is received from a handset that is wirelessly coupled to the base station. 7. The method of claim 1, wherein the receiving a request further comprises receiving a selection of the selected contact from a contact list received from one or more server devices of the communication service. 8. A computing device comprising: at least a memory and a processor configured to implement operations comprising: rendering a user interface for selecting contacts to initiate communication sessions using a communication service; receiving a selection of a contact via the user interface; and establishing a communication session with the selected contact over one of a first communication network or a second communication network, the first or second communication network selected based on the selected contact and user preferences. 9. The computing device of claim 8, wherein the user preferences map two or more groups of particular types of contacts to one of the first communication network or the second communication network, and wherein the first communication network or the second communication network is selected based on whether the selected contact comprises the particular type of contact mapped to the first communication network or the particular type of contact mapped to the second communication network. 10. The computing device of claim 8, wherein the first communication network comprises a packet data network. 11. The computing device of claim 8, wherein the second communication network comprises a public switched telephone network. 12. The computing device of claim 8, wherein the computing device comprises a handset. 13. The computing device of claim 8, wherein the user interface displays contacts from a contact list. 14. The computing device of claim 13, wherein the contact list is stored at the computing device. 15. The computing device of claim 13, wherein the contact list is received from a server device. 16. 
A method implemented at one or more server devices, the method comprising: maintaining, at one or more server devices, contact lists for users of a communication service, each contact list comprising identifiers of contacts and contact information for each respective contact, the contact information mapping one or more contacts to both a first communication network and a second communication network over which communication sessions can be established via the communication service; receiving a request, from a client device associated with a user of the communication service, for the respective contact list of the user; and communicating the respective contact list of the user to the client device associated with the user, the client device configured to automatically select one of the first communication network or the second communication network for a communication session with a selected contact based on the selected contact and user preferences mapping two or more groups of particular types of contacts to one of the first communication network or the second communication network. 17. The method of claim 16, wherein the communication service enables synchronization of the contact list with multiple devices of the user. 18. The method of claim 16, wherein the request is received in response to the user logging into the communication service at the client device for a first time. 19. The method of claim 16, wherein communicating the respective contact list causes the respective contact list to be cached at the client device. 20. The method of claim 16, wherein the first communication network comprises a packet data network, and wherein the second communication network comprises a public switched telephone network.
In one or more implementations, a request is received at a client device to initiate a communication session with a selected contact using a communication service. One of a first communication network or a second communication network for the communication session with the selected contact is selected at the client device. The selection is based on the selected contact and user preferences. Next, the communication session is established with the selected contact using the selected first communication network or second communication network.1. A method comprising: receiving, at a client device, a request to initiate a communication session with a selected contact using a communication service; selecting, at the client device, one of a first communication network or a second communication network for the communication session with the selected contact based on the selected contact and user preferences; and establishing the communication session with the selected contact using the selected first communication network or second communication network. 2. The method of claim 1, wherein the user preferences map two or more groups of particular types of contacts to one of the first communication network or the second communication network. 3. The method of claim 2, wherein the selecting one of the first communication network or the second communication network further comprises selecting the first communication network or the second communication network based on whether the selected contact comprises the particular type of contact mapped to the first communication network or the particular type of contact mapped to the second communication network. 4. The method of claim 1, wherein the first communication network comprises a packet data network. 5. The method of claim 1, wherein the second communication network comprises a public switched telephone network. 6. The method of claim 1, wherein the client device comprises a base station, and wherein the request is received from a handset that is wirelessly coupled to the base station. 7. The method of claim 1, wherein the receiving a request further comprises receiving a selection of the selected contact from a contact list received from one or more server devices of the communication service. 8. A computing device comprising: at least a memory and a processor configured to implement operations comprising: rendering a user interface for selecting contacts to initiate communication sessions using a communication service; receiving a selection of a contact via the user interface; and establishing a communication session with the selected contact over one of a first communication network or a second communication network, the first or second communication network selected based on the selected contact and user preferences. 9. The computing device of claim 8, wherein the user preferences map two or more groups of particular types of contacts to one of the first communication network or the second communication network, and wherein the first communication network or the second communication network is selected based on whether the selected contact comprises the particular type of contact mapped to the first communication network or the particular type of contact mapped to the second communication network. 10. The computing device of claim 8, wherein the first communication network comprises a packet data network. 11. The computing device of claim 8, wherein the second communication network comprises a public switched telephone network. 12. 
The computing device of claim 8, wherein the computing device comprises a handset. 13. The computing device of claim 8, wherein the user interface displays contacts from a contact list. 14. The computing device of claim 13, wherein the contact list is stored at the computing device. 15. The computing device of claim 13, wherein the contact list is received from a server device. 16. A method implemented at one or more server devices, the method comprising: maintaining, at one or more server devices, contact lists for users of a communication service, each contact list comprising identifiers of contacts and contact information for each respective contact, the contact information mapping one or more contacts to both a first communication network and a second communication network over which communication sessions can be established via the communication service; receiving a request, from a client device associated with a user of the communication service, for the respective contact list of the user; and communicating the respective contact list of the user to the client device associated with the user, the client device configured to automatically select one of the first communication network or the second communication network for a communication session with a selected contact based on the selected contact and user preferences mapping two or more groups of particular types of contacts to one of the first communication network or the second communication network. 17. The method of claim 16, wherein the communication service enables synchronization of the contact list with multiple devices of the user. 18. The method of claim 16, wherein the request is received in response to the user logging into the communication service at the client device for a first time. 19. The method of claim 16, wherein communicating the respective contact list causes the respective contact list to be cached at the client device. 20. The method of claim 16, wherein the first communication network comprises a packet data network, and wherein the second communication network comprises a public switched telephone network.
2,600
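The selection rule in claims 1-3 of the record above maps contact types to one of two networks via user preferences. A minimal sketch of that rule follows; the contact types, the network labels, and the dict-based preference store are hypothetical illustrations.

# Hypothetical contact-type -> network selection, per the user preferences.
PREFERENCES = {
    "work": "packet_data",  # e.g., VoIP over a packet data network
    "family": "pstn",       # e.g., public switched telephone network
}
DEFAULT_NETWORK = "pstn"    # assumed fallback when no preference matches

def select_network(contact: dict) -> str:
    return PREFERENCES.get(contact.get("type"), DEFAULT_NETWORK)

def establish_session(contact: dict) -> str:
    network = select_network(contact)
    # A real client would hand off to the matching signaling stack here.
    return f"calling {contact['name']} via {network}"

print(establish_session({"name": "Alice", "type": "work"}))   # packet_data
print(establish_session({"name": "Bob", "type": "family"}))   # pstn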
10,410
10,410
15,561,373
2,645
A method is presented for determining whether a portable key device is located in an active area in relation to a barrier. The method is performed in an access control device and comprises the steps of: detecting an angle of arrival of a wireless signal from the portable key device using a pair of antennas provided such that a line between the pair of antennas crosses the barrier; and determining whether the portable key device is located within the active area based on the angle of arrival.
1. A method for determining whether a portable key device is located in an active area in relation to a barrier, the method being performed in an access control device and comprising the steps of: detecting an angle of arrival of a wireless signal from the portable key device using a pair of antennas provided such that a line between the pair of antennas crosses the barrier; and determining whether the portable key device is located within the active area based on the angle of arrival. 2. The method according to claim 1, wherein the active area is on the outside of the barrier. 3. The method according to claim 1, wherein the pair of antennas is provided such that the line between the pair of antennas is perpendicular, within a margin of error, to the barrier. 4. The method according to claim 1, further comprising the step of: determining a range of possible positions of the portable key device, based on the angle of arrival; and wherein the step of determining whether the portable key device is located within the active area comprises determining whether the range of possible positions is within the active area. 5. The method according to claim 4, further comprising the step of: determining a distance to the portable key device in relation to the antennas; and wherein the step of determining a range of possible positions comprises determining a range of possible positions based on the distance. 6. The method according to claim 5, wherein the step of determining a range of possible positions comprises determining positions in three dimensions. 7. An access control device arranged to determine whether a portable key device is located in an active area in relation to a barrier, the access control device comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the access control device to: detect an angle of arrival of a wireless signal from the portable key device using a pair of antennas provided such that a line between the pair of antennas crosses the barrier; and determine whether the portable key device is located within the active area based on the angle of arrival. 8. The access control device according to claim 7, wherein the active area is on the outside of the barrier. 9. The access control device according to claim 7, wherein the pair of antennas is provided such that the line between the pair of antennas is perpendicular, within a margin of error, to the barrier. 10. The access control device according to claim 7, further comprising instructions that, when executed by the processor, cause the access control device to: determine a range of possible positions of the portable key device, based on the angle of arrival; and wherein the instructions to determine whether the portable key device is located within the active area comprise instructions that, when executed by the processor, cause the access control device to determine whether the range of possible positions is within the active area. 11. 
A computer program for determining whether a portable key device is located in an active area in relation to a barrier, the computer program comprising computer program code which, when run on an access control device, causes the access control device to: detect an angle of arrival of a wireless signal from the portable key device using a pair of antennas provided such that a line between the pair of antennas crosses the barrier; and determine whether the portable key device is located within the active area based on the angle of arrival. 13. A computer program product comprising a computer program according to claim 12 and a computer readable means on which the computer program is stored.
A method is presented for determining whether a portable key device is located in an active area in relation to a barrier. The method is performed in an access control device and comprises the steps of: detecting an angle of arrival of a wireless signal from the portable key device using a pair of antennas provided such that a line between the pair of antennas crosses the barrier; and determining whether the portable key device is located within the active area based on the angle of arrival.1. A method for determining whether a portable key device is located in an active area in relation to a barrier, the method being performed in an access control device and comprising the steps of: detecting an angle of arrival of a wireless signal from the portable key device using a pair of antennas provided such that a line between the pair of antennas crosses the barrier; and determining whether the portable key device is located within the active area based on the angle of arrival. 2. The method according to claim 1, wherein the active area is on the outside of the barrier. 3. The method according to claim 1, wherein the pair of antennas is provided such that the line between the pair of antennas is perpendicular, within a margin of error, to the barrier. 4. The method according to claim 1, further comprising the step of: determining a range of possible positions of the portable key device, based on the angle of arrival; and wherein the step of determining whether the portable key device is located within the active area comprises determining whether the range of possible positions is within the active area. 5. The method according to claim 4, further comprising the step of: determining a distance to the portable key device in relation to the antennas; and wherein the step of determining a range of possible positions comprises determining a range of possible positions based on the distance. 6. The method according to claim 5, wherein the step of determining a range of possible positions comprises determining positions in three dimensions. 7. An access control device arranged to determine whether a portable key device is located in an active area in relation to a barrier, the access control device comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the access control device to: detect an angle of arrival of a wireless signal from the portable key device using a pair of antennas provided such that a line between the pair of antennas crosses the barrier; and determine whether the portable key device is located within the active area based on the angle of arrival. 8. The access control device according to claim 7, wherein the active area is on the outside of the barrier. 9. The access control device according to claim 7, wherein the pair of antennas is provided such that the line between the pair of antennas is perpendicular, within a margin of error, to the barrier. 10. The access control device according to claim 7, further comprising instructions that, when executed by the processor, cause the access control device to: determine a range of possible positions of the portable key device, based on the angle of arrival; and wherein the instructions to determine whether the portable key device is located within the active area comprise instructions that, when executed by the processor, cause the access control device to determine whether the range of possible positions is within the active area. 11. 
The access control device according to claim 10, further comprising instructions that, when executed by the processor, cause the access control device to determine a distance to the portable key device in relation to the antennas; and wherein the instructions to determine a range of possible positions comprise instructions that, when executed by the processor, cause the access control device to determine a range of possible positions based on the distance. 12. A computer program for determining whether a portable key device is located in an active area in relation to a barrier, the computer program comprising computer program code which, when run on an access control device, causes the access control device to: detect an angle of arrival of a wireless signal from the portable key device using a pair of antennas provided such that a line between the pair of antennas crosses the barrier; and determine whether the portable key device is located within the active area based on the angle of arrival. 13. A computer program product comprising a computer program according to claim 12 and a computer readable means on which the computer program is stored.
2,600
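The record above decides whether the key is in the active area from the angle of arrival at an antenna pair straddling the barrier: with the antennas on a line crossing the barrier, a signal from outside reaches the outer antenna first, and the time (or phase) difference of arrival gives the angle. A minimal sketch of that geometry follows; the antenna spacing, the time difference, and the angle threshold are assumptions for illustration.

# Hypothetical angle-of-arrival test for the active-area decision.
import math

SPEED_OF_LIGHT = 3.0e8  # m/s

def angle_of_arrival(time_diff_s, antenna_spacing_m):
    # time_diff_s > 0 means the wave reached the outer antenna first.
    ratio = SPEED_OF_LIGHT * time_diff_s / antenna_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical noise into asin's domain
    return math.degrees(math.asin(ratio))

def in_active_area(time_diff_s, antenna_spacing_m, min_angle_deg=20.0):
    # Require the key to be clearly on the outside of the barrier.
    return angle_of_arrival(time_diff_s, antenna_spacing_m) > min_angle_deg

spacing = 0.10  # assumed 10 cm between the antenna pair
dt = 0.2e-9     # assumed 0.2 ns time difference of arrival
print(angle_of_arrival(dt, spacing))  # ~36.9 degrees
print(in_active_area(dt, spacing))    # True

With these sample numbers the path-length difference is 6 cm over a 10 cm baseline, so asin(0.6) puts the key about 37 degrees off the barrier plane, comfortably past the assumed 20 degree threshold.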
10,411
10,411
15,140,083
2,692
A memo and photo authenticating system comprises a multiuser touch input device configured for input by a plurality of users. The memo and photo authenticating system comprises at least one camera communicatively coupled to the multiuser touch input device. The memo and photo authenticating system comprises at least one processor. The memo and photo authenticating system comprises a non-transitory tangible machine readable medium. The non-transitory tangible machine readable medium comprises instructions configured to cause the at least one processor to capture a freehand memo employing the multiuser touch input device, store the freehand memo in a data storage device, capture at least one image of at least one user employing the at least one camera, store the at least one image in the data storage device, and associate the freehand memo with the at least one image of the at least one user in the data storage device.
1. A memo and photo authentication system comprising: a) a multiuser touch input device configured for input by a plurality of users; b) at least one camera communicatively coupled to the multiuser touch input device; c) at least one processor; and d) a non-transitory tangible machine readable medium comprising instructions configured to cause the at least one processor to: i) capture a freehand memo employing the multiuser touch input device; ii) store the freehand memo in a data storage device; iii) capture at least one image of at least one user employing the at least one camera; iv) store the at least one image in the data storage device; and v) associate the freehand memo with the at least one image of the at least one user in the data storage device. 2. The system according to claim 1, further comprising a mount configurable to position the multiuser touch input device for multiple user viewing. 3. The system according to claim 1, further comprising at least one remote camera mounted to view the multiuser touch input device. 4. The system according to claim 3, wherein the instructions are further configured to cause the at least one processor to: a) capture at least one remote image of at least one user employing the at least one remote camera; b) store the at least one remote image in the data storage device; and c) associate the at least one remote image with the freehand memo in the data storage device. 5. The system according to claim 1, further comprising a communications interface configured to communicate with at least a part of a distribution system comprising a plurality of display devices. 6. The memo and photo capturing system according to claim 1, wherein the at least one camera is integrated with at least one of the following: a) the multiuser touch input device; and b) a display device. 7. The system according to claim 1, further comprising a voice recognition module. 8. The system according to claim 1, further comprising a voice synthesis module. 9. The system according to claim 1, wherein the instructions are further configured to cause the at least one processor to produce a digital photo album comprising the freehand memo and the associated at least one image. 10. The system according to claim 1, wherein the at least one image comprises at least one video. 11. The system according to claim 1, further comprising an advertisement module. 12. A memo and photo authenticating method comprising: a) capturing a freehand memo employing a multiuser touch input device; b) storing the freehand memo, employing at least one processor, in a data storage device; c) capturing at least one image of at least one user employing at least one camera; d) storing the at least one image in the data storage device; and e) associating the freehand memo with the at least one image of the at least one user in the data storage device. 13. The method according to claim 12, further comprising: a) capturing at least one remote image of the at least one user employing at least one remote camera; b) storing the at least one remote image in the data storage device; and c) associating the at least one remote image with the freehand memo in the data storage device. 14. The method according to claim 12, further comprising communicating with at least a part of a distribution system comprising a plurality of display devices. 15. 
The method according to claim 12, wherein the freehand memo is addressed to at least one of the following: a) an honoree; b) a host of a party or event; c) a bride; d) a groom; e) a retiree; f) an employee; g) a guest of honor; h) a patient; i) a donor; j) a member of a military unit; k) a competitor; and l) a contestant. 16. The memo and photo capturing method according to claim 12, further comprising prompting at least one of the plurality of users for at least one of the following actions: a) erase a freehand memo written on the multiuser touch input device; b) view at least one freehand memo written on the multiuser touch input device; c) save a freehand memo written on the multiuser touch input device to the data storage device; and d) erase at least one freehand memo from the data storage device. 17. The memo and photo capturing method according to claim 12, further comprising prompting at least one of the plurality of users for at least one of the following actions: a) erase an image taken by at least one of the at least one camera; b) view at least one image taken by at least one of the at least one camera; c) save an image taken by at least one of the at least one camera to the data storage device; and d) erase at least one image from the data storage device. 18. The method according to claim 12, further comprising prompting the at least one user to prepare themselves for an image type, the image type comprising at least one of the following: a) a sequence of images; b) a full body image; c) a close up image; and d) a funny image. 19. The method according to claim 12, further comprising accepting vocal commands employing a voice recognition module. 20. The method according to claim 12, further comprising producing a digital photo album comprising the freehand memo and the associated at least one image. 21. The method according to claim 12, wherein the at least one remote image comprises at least one remote video.
2,600
10,412
10,412
15,654,187
2,685
A movable barrier operator transmits a message to a remote peripheral platform and, upon determining that the remote peripheral platform is presently able to carry out a given functionality, responsively permits a particular function to be carried out by the movable barrier operator. Conversely, upon determining that it cannot be ascertained whether the remote peripheral platform is presently able to carry out the given functionality, the movable barrier operator responsively prevents the movable barrier operator from carrying out the particular function. Also, upon detecting that a targeted remote platform does not acknowledge a previously re-transmitted message and further upon detecting that this same remote platform has also not acknowledged a subsequent wirelessly-transmitted second message, the system can switch to automatically retransmitting that second message a lesser number of times than would otherwise be required.
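The permit/prevent decision in the first two sentences reduces to an acknowledgement check with a timeout. A minimal sketch, assuming a transport object with hypothetical send/poll methods and an arbitrary 500 ms timeout (the claims only require "a predetermined period of time"):

```python
import time

ACK_TIMEOUT_S = 0.5  # assumed value; the claims only require "a predetermined period of time"

def peripheral_is_able(transport, message: bytes) -> bool:
    """A timely acknowledgement shows the remote peripheral platform can
    carry out its function; silence means ability cannot be ascertained
    (which is not the same as proven inability)."""
    transport.send(message)
    deadline = time.monotonic() + ACK_TIMEOUT_S
    while time.monotonic() < deadline:
        if transport.poll() == b"ACK":   # poll() is a hypothetical non-blocking receive
            return True
    return False

def maybe_close_barrier(transport, close_barrier):
    """Permit the timer-to-close / remote-close function only when the
    peripheral has acknowledged the warning message."""
    if peripheral_is_able(transport, b"ANNOUNCE_WARNING"):
        close_barrier()
    # Otherwise the close is prevented: ability could not be ascertained.

class LoopbackTransport:
    """Trivial stand-in that acknowledges every message immediately."""
    def send(self, message):
        self._pending = b"ACK"
    def poll(self):
        return getattr(self, "_pending", None)

maybe_close_barrier(LoopbackTransport(), lambda: print("closing barrier"))
```

Note the asymmetry the abstract describes: a timely ACK permits the close, while silence does not prove inability, only that ability cannot be ascertained, so the operator fails safe and refuses to close.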
1. A method comprising: at a movable barrier operator: transmitting a message to a remote peripheral platform; upon determining that the remote peripheral platform is presently able to carry out a given functionality, responsively permitting a particular function to be carried out by the movable barrier operator; upon determining that it cannot be ascertained whether the remote peripheral platform is presently able to carry out the given functionality, responsively preventing the movable barrier operator from carrying out the particular function. 2. The method of claim 1 wherein the particular function comprises at least one of a timer-to-close function and a remote-close function. 3. The method of claim 2 wherein the remote peripheral platform comprises an announcing device. 4. The method of claim 3 wherein the announcing device comprises a light fixture. 5. The method of claim 4 wherein the given functionality comprises, at least in part, having the light fixture flash a light as a visual warning that the movable barrier operator will imminently carry out the particular function. 6. The method of claim 3 wherein the announcing device comprises a sound producing device. 7. The method of claim 6 wherein the given functionality comprises, at least in part, having the sound producing device produce a sound to announce a warning that the movable barrier operator will imminently carry out the particular function. 8. The method of claim 1 wherein determining that the remote peripheral platform is presently able to carry out the given functionality comprises, at least in part, the movable barrier operator receiving an acknowledgement transmission from the remote peripheral platform in response to the message. 9. The method of claim 8 wherein determining that it cannot be ascertained whether the remote peripheral platform is presently able to carry out the given functionality comprises, at least in part, determining that the movable barrier operator has not received the acknowledgement transmission from the remote peripheral platform. 10. The method of claim 9 wherein determining that the movable barrier operator has not received the acknowledgement transmission from the remote peripheral platform comprises, at least in part, determining that the movable barrier operator has not received the acknowledgement transmission from the remote peripheral platform within a predetermined period of time. 11. The method of claim 1 further comprising: transmitting another message to a second remote peripheral platform; and wherein responsively permitting the particular function to be carried out by the movable barrier operator upon determining that the remote peripheral platform is presently able to carry out the given functionality comprises permitting the movable barrier operator to carry out the particular function regardless of whether it can be ascertained that the second remote peripheral platform is presently able to carry out another given functionality. 12. The method of claim 1 wherein transmitting the message comprises wirelessly transmitting the message. 13. 
A movable barrier operator comprising: a transceiver; a control circuit operably coupled to the transceiver and configured to: transmit a message to a remote peripheral platform; upon determining that the remote peripheral platform is presently able to carry out a given functionality, responsively permit a particular function to be carried out by the movable barrier operator; upon determining that it cannot be ascertained whether the remote peripheral platform is presently able to carry out the given functionality, responsively prevent the movable barrier operator from carrying out the particular function. 14. The movable barrier operator of claim 13 wherein the particular function comprises at least one of a timer-to-close function and a remote-close function. 15. The movable barrier operator of claim 14 wherein the remote peripheral platform comprises an announcing device. 16. The movable barrier operator of claim 15 wherein the announcing device comprises a light fixture. 17. The movable barrier operator of claim 16 wherein the given functionality comprises, at least in part, having the light fixture flash a light as a visual warning that the movable barrier operator will imminently carry out the particular function. 18. The movable barrier operator of claim 15 wherein the announcing device comprises a sound producing device. 19. The movable barrier operator of claim 18 wherein the given functionality comprises, at least in part, having the sound producing device produce a sound to announce a warning that the movable barrier operator will imminently carry out the particular function. 20. The movable barrier operator of claim 13 wherein the control circuit is configured to determine that the remote peripheral platform is presently able to carry out the given functionality by, at least in part, receiving an acknowledgement transmission from the remote peripheral platform in response to the message. 21. The movable barrier operator of claim 20 wherein the control circuit is configured to determine that it cannot be ascertained whether the remote peripheral platform is presently able to carry out the given functionality by, at least in part, determining that the movable barrier operator has not received the acknowledgement transmission from the remote peripheral platform. 22. The movable barrier operator of claim 21 wherein the control circuit is configured to determine that the movable barrier operator has not received the acknowledgement transmission from the remote peripheral platform by, at least in part, determining that the movable barrier operator has not received the acknowledgement transmission from the remote peripheral platform within a predetermined period of time. 23. 
A method comprising: determining whether an announcing device is presently able to announce a warning in response to a command to do so; receiving a timer-to-close signal or a remote-close signal to close a barrier; effecting closing the barrier in response to receiving the timer-to-close signal or the remote-close signal to close the barrier where the announcing device is presently able to announce the warning in response to the command to do so; not effecting closing the barrier in response to receiving the timer-to-close signal or the remote-close signal to close the barrier where it cannot be ascertained whether the announcing device is presently able to announce the warning in response to the command to do so. 26. The method of claim 25 further comprising: transmitting a message to the announcing device. 27. The method of claim 26 wherein transmitting the message to the announcing device comprises wirelessly transmitting the message to the announcing device. 28. The method of claim 25 further comprising: receiving an acknowledgement transmission from the announcing device. 29. The method of claim 28 wherein receiving the acknowledgement transmission from the announcing device further comprises wirelessly receiving the acknowledgement transmission from the announcing device. 30. The method of claim 25 further comprising: determining an acknowledgement transmission has not been received from the announcing device. 31. The method of claim 25 further comprising: determining an acknowledgement transmission has not been received from the announcing device within a predetermined period of time. 32. The method of claim 25, wherein the announcing device comprises at least one of a light fixture and an audible-announcing fixture, and wherein announcing the warning comprises at least one of the light fixture flashing a light and the audible-announcing fixture generating an audible warning. 33. A movable barrier operator comprising: a transceiver configured to receive a timer-to-close signal or a remote-close signal to close a barrier; a control circuit configured to: determine whether an announcing device is presently able to announce a warning in response to a command to do so; in response to the transceiver receiving the timer-to-close signal or the remote-close signal to close the barrier: effect closing the barrier where the announcing device is presently able to announce the warning in response to the command to do so; not effect closing the barrier where the announcing device is presently not able to announce the warning in response to the command to do so. 34. The movable barrier operator of claim 33 wherein the control circuit is further configured to effect transmission of a message to the announcing device. 35. The movable barrier operator of claim 34 wherein the transmission of the message to the announcing device is a wireless transmission. 36. 
The movable barrier operator of claim 33 wherein the control circuit is further configured to determine an acknowledgement transmission has been received from the announcing device. 37. The movable barrier operator of claim 36 wherein the acknowledgement transmission is a wireless acknowledgement transmission. 38. The movable barrier operator of claim 33 wherein the control circuit is further configured to determine an acknowledgement transmission has not been received from the announcing device. 39. The movable barrier operator of claim 33 wherein the control circuit is further configured to determine an acknowledgement transmission has not been received from the announcing device within a predetermined period of time. 40. The movable barrier operator of claim 33 wherein the announcing device comprises at least one of a light fixture and an audible-announcing fixture, and wherein announcing the warning comprises at least one of the light fixture flashing a light and the audible-announcing fixture generating an audible warning.
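The abstract's final sentence describes an adaptive retry policy: once a platform has ignored both a retried message and the first transmission of the next one, that next message is retransmitted fewer times. A sketch under assumed retry counts (the abstract only says "a lesser number of times") and a hypothetical blocking `wait_ack` transport method:

```python
FULL_RETRIES = 3      # assumed defaults; the abstract only requires "a lesser number"
REDUCED_RETRIES = 1

class Retransmitter:
    """Backs off the retry count for a message once the same platform has
    ignored both a previously retried message and this message's first send."""

    def __init__(self, transport):
        self.transport = transport      # assumed to offer send() and wait_ack()
        self.prior_retry_unacked = False

    def send_with_retries(self, message: bytes) -> bool:
        self.transport.send(message)
        if self.transport.wait_ack():
            self.prior_retry_unacked = False
            return True
        # First transmission went unanswered; if an earlier retried message
        # was also never acknowledged, retransmit fewer times this round.
        retries = REDUCED_RETRIES if self.prior_retry_unacked else FULL_RETRIES
        for _ in range(retries):
            self.transport.send(message)
            if self.transport.wait_ack():
                self.prior_retry_unacked = False
                return True
        self.prior_retry_unacked = True
        return False
```

The point of the reduced count is to stop wasting airtime on a platform that has already shown itself to be unreachable, while still probing it occasionally.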
2,600
10,413
10,413
15,409,444
2,641
According to one embodiment, a system comprises a first mobile device comprising a processor, and a computer usable medium having computer usable program code embodied therewith. When executed by the processor, the program code causes the processor to: send a request for a location of a closest person to the first mobile device; determine a first location of the first mobile device; receive, in response to the request, a plurality of potential matches, where each of the potential matches includes a name and location of a person; receive a selection of one of the plurality of potential matches; obtain location coordinates of the selected potential match; establish a link with a global positioning service (GPS) system; transmit geographical data, including the location coordinates of the selected potential match, to the GPS system; and output a geographic solution on the first mobile device. The geographic solution includes a calculated route between the first location of the first mobile device and the location coordinates of the selected potential match, and information associated with the calculated route, the information including one or more of an estimated travel time, an average travelling speed, and an elapsed travel time.
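The claims leave the "closest person" ranking unspecified; a great-circle (haversine) distance over reported coordinates is one plausible realization of how the plurality of potential matches could be ordered. The names and coordinates below are invented for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    r = 6371.0  # mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def closest_matches(device_loc, people, n=5):
    """Rank known (name, lat, lon) entries by distance to the device; the n
    nearest form the claimed 'plurality of potential matches'."""
    lat0, lon0 = device_loc
    return sorted(people, key=lambda p: haversine_km(lat0, lon0, p[1], p[2]))[:n]

# Invented data for illustration only.
people = [("Alice", 40.7128, -74.0060), ("Bob", 40.7306, -73.9352)]
print(closest_matches((40.7200, -74.0000), people, n=2))
```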
1. A system, comprising: a first mobile device comprising a processor; and a computer usable medium, the computer usable medium having computer usable program code embodied therewith, which when executed by the processor causes the processor to: send a request for a location of a closest person to the first mobile device; determine a first location of the first mobile device; receive, in response to the request, a plurality of potential matches, where each of the potential matches includes a name and location of a person; receive a selection of one of the plurality of potential matches; obtain location coordinates of the selected potential match; establish a link with a global positioning service (GPS) system; transmit geographical data to the GPS system, wherein the geographical data includes the location coordinates of the selected potential match; and output a geographic solution on the first mobile device, wherein the geographic solution includes: a calculated route between the first location of the first mobile device and the location coordinates of the selected potential match, and information associated with the calculated route, the information including one or more of an estimated travel time, an average travelling speed, and an elapsed travel time. 2. A system as recited in claim 1, wherein the calculated route is calculated based in part on one or more of: whether the calculated route is to be traveled by automobile or foot, available modes of transit, fees associated with the calculated route, traffic conditions on the calculated route, and restrictions on the calculated route. 3. A system as recited in claim 1, wherein the location coordinates of the selected potential match are received at the first mobile device in a text message. 4. A system as recited in claim 1, wherein the geographic solution further comprises a calculated route between the first mobile device and the location coordinates of the selected potential match. 5. A system as recited in claim 1, wherein an amount of the geographical data transmitted to the GPS system is based, at least in part, on the link established between the first mobile device and the GPS system. 6. 
A method, comprising: sending a request for a location of a closest person to a first mobile device; determining a first location of the first mobile device; receiving, in response to the request, a plurality of potential matches, where each of the potential matches includes a name and location of a person; receiving a selection of one of the plurality of potential matches; obtaining location coordinates of the selected potential match; establishing a link with a global positioning service (GPS) system; transmitting geographical data to the GPS system, wherein the geographical data includes the location coordinates of the selected potential match; and outputting a geographic solution on the first mobile device, wherein the geographic solution includes: a calculated route between the first location of the first mobile device and the location coordinates of the selected potential match, and information associated with the calculated route, the information including one or more of an estimated travel time, an average travelling speed, and an elapsed travel time. 10. A method, comprising: sending a request for a location of a closest person to a first mobile device; determining a first location of the first mobile device; receiving, in response to the request, a plurality of potential matches, where each of the potential matches includes a name and location of a person; receiving a selection of one of the plurality of potential matches; obtaining location coordinates of the selected potential match; establishing a link with a global positioning service (GPS) system; transmitting geographical data to the GPS system, wherein the geographical data includes the location coordinates of the selected potential match; and outputting a geographic solution on the first mobile device, wherein the geographic solution includes: a calculated route between the first location of the first mobile device and the location coordinates of the selected potential match. 11. A method as recited in claim 10, wherein the first mobile device is a mobile phone. 12. A method as recited in claim 10, further comprising sending a request to the first mobile device to resend the geographical data to the GPS system in response to determining that the geographical data was not received by the GPS system within a predetermined time period. 13. A system, comprising: a server including a processor for: receiving a request for a location of a closest person to a first mobile device; determining a plurality of potential matches, where each of the potential matches includes a name and location of a person; sending to the first mobile device the plurality of potential matches; identifying a selection of one of the plurality of potential matches; and sending to the first mobile device location coordinates of the selected potential match; wherein the first mobile device establishes a link with a global positioning service (GPS) system; wherein the first mobile device transmits geographical data to the GPS system, wherein the geographical data includes the location coordinates of the selected potential match; wherein the first mobile device outputs a geographic solution, wherein the geographic solution includes: a calculated route between a location of the first mobile device and the location coordinates of the selected potential match, and information associated with the calculated route, the information including one or more of an estimated travel time, an average travelling speed, and an elapsed travel time. 14. 
A system as recited in claim 13, wherein the first mobile device is a mobile phone. 15. A system as recited in claim 13, wherein the processor further sends a confirmation to the first mobile device that the geographical data was successfully received at the GPS system. 16. A method, comprising: receiving at a server a request for a location of a closest person to a first mobile device; determining at the server a plurality of potential matches, where each of the potential matches includes a name and location of a person; sending from the server to the first mobile device the plurality of potential matches; identifying at the server a selection of one of the plurality of potential matches; and sending from the server to the first mobile device location coordinates of the selected potential match; wherein the first mobile device establishes a link with a global positioning service (GPS) system; wherein the first mobile device transmits geographical data to the GPS system, wherein the geographical data includes the location coordinates of the selected potential match; wherein the first mobile device outputs a geographic solution, wherein the geographic solution includes: a calculated route between a location of the first mobile device and the location coordinates of the selected potential match, and information associated with the calculated route, the information including one or more of an estimated travel time, an average travelling speed, and an elapsed travel time. 17. A method as recited in claim 16, further comprising transmitting all geographical data stored on the first mobile device to the GPS system, the geographical data being selected from the group consisting of: waypoints, destinations, origins, routes, distances, travel times, and combinations thereof. 18. A method, comprising: receiving at a server a request for a location of a closest person to a first mobile device; determining at the server a plurality of potential matches, where each of the potential matches includes a name and location of a person; sending from the server to the first mobile device the plurality of potential matches; identifying at the server a selection of one of the plurality of potential matches; and sending from the server to the first mobile device location coordinates of the selected potential match; wherein the first mobile device establishes a link with a global positioning service (GPS) system; wherein the first mobile device transmits geographical data to the GPS system, wherein the geographical data includes the location coordinates of the selected potential match; wherein the first mobile device outputs a geographic solution, wherein the geographic solution includes: a calculated route between a location of the first mobile device and the location coordinates of the selected potential match. 19. A method as recited in claim 18, wherein the first mobile device is in direct communication with the GPS system via at least one of a hardwired connection, a local area network only, and a direct wireless connection. 20. A method as recited in claim 18, further comprising waiting for a confirmation that the geographical data stored on the first mobile device was successfully received at the GPS system, wherein the geographical data stored on the first mobile device is resent to the GPS system in response to determining that the confirmation has not been received at the first mobile device before a predetermined time period elapses.
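Claim 20's confirmation-and-resend behavior is a classic timeout loop. A minimal sketch, assuming a link object with hypothetical send/poll methods, an arbitrary 2 s window for the claimed "predetermined time period", and a resend cap the claims do not specify:

```python
import time

CONFIRM_TIMEOUT_S = 2.0  # assumed; the claim only requires "a predetermined time period"
MAX_RESENDS = 3          # assumed cap; the claims do not state one

def send_until_confirmed(gps_link, geodata: bytes) -> bool:
    """Resend the stored geographical data whenever the GPS system's
    confirmation fails to arrive before the timeout elapses (claim 20)."""
    for _ in range(MAX_RESENDS):
        gps_link.send(geodata)                    # send() is assumed on the link object
        deadline = time.monotonic() + CONFIRM_TIMEOUT_S
        while time.monotonic() < deadline:
            if gps_link.poll() == b"CONFIRMED":   # hypothetical reply framing
                return True
        # Timed out: loop around and resend the same payload.
    return False
```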
2,600
10,414
10,414
15,303,218
2,685
A wireless transmitter adapter (206) can provide wireless data transmission capability to a battery-operated biosensor meter (100), such as a blood glucose meter, originally configured for hardwired data downloads. In some embodiments, the wireless transmitter adapter (206) can be configured to replace a biosensor meter's battery cover (106). The wireless transmitter adapter (206) can include wireless transmitter circuitry (700), a connector (218) configured to be received in a biosensor meter's communications port (112) to electrically couple to the meter's processor circuitry (107), and one or more electrical contacts (625a, 625b) configured to electrically couple to the biosensor meter's one or more batteries (102a, 102b) to power the wireless transmitter circuitry (700). In other embodiments, the wireless transmitter adapter (206) can be configured to surround at least a portion of a biosensor meter (100) and to include its own battery compartment (804) to separately provide power to the wireless transmitter circuitry (700). Methods of providing a wireless transmitter adapter (206) for battery-operated biosensor meters (100) are also provided, as are numerous other aspects.
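Functionally, the adapter is a wired-to-wireless relay: it reads whatever the meter already exposes on its communications port and retransmits it. A sketch of that data path, assuming a line-framed serial protocol and a hypothetical radio wrapper (real meters and wireless stacks vary, so this is illustrative only):

```python
class WirelessTransmitterAdapter:
    """Sketch of the adapter's data path: read the records the meter already
    exposes on its wired communications port, then relay them wirelessly."""

    def __init__(self, meter_port, radio):
        self.meter_port = meter_port  # e.g. a pyserial-style object with readline()
        self.radio = radio            # hypothetical wrapper around a BLE transmitter

    def forward_readings(self):
        """Drain stored readings (e.g. blood glucose values) and transmit each."""
        while True:
            line = self.meter_port.readline()  # assumes one reading per line
            if not line:
                break                          # no more stored readings
            self.radio.transmit(line.strip())
```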
1. A wireless transmitter adapter configured for a biosensor meter, the wireless transmitter adapter comprising: a body configured to be disposed about and conform to at least a portion of the biosensor meter; a connector configured to be received in a communications port of the biosensor meter to electrically couple to circuitry in the biosensor meter via the communications port; and wireless transmitter circuitry housed within the body, the wireless transmitter circuitry electrically coupled to the connector and configured to wirelessly transmit data from the biosensor meter. 2. The wireless transmitter adapter of claim 1, wherein the body is configured to replace a battery cover of the biosensor meter. 3. The wireless transmitter adapter of claim 2, further comprising one or more electrical contacts configured to electrically couple to one or more batteries of the biosensor meter to power the wireless transmitter circuitry. 4. The wireless transmitter adapter of claim 1, wherein the body is configured to surround at least a portion of the biosensor meter. 5. The wireless transmitter adapter of claim 4, wherein the body comprises a battery compartment. 6. The wireless transmitter adapter of claim 1, wherein the connector extends from the body and comprises one of an elastomeric cord connector arm, a rigid arm, a flexible arm, or a slidable arm. 7. The wireless transmitter adapter of claim 1, wherein the connector comprises one of a stereo plug, an RS232 plug, or a USB plug. 8. The wireless transmitter adapter of claim 1, wherein the wireless transmitter circuitry comprises Bluetooth® low energy technology. 9. The wireless transmitter adapter of claim 1, wherein the biosensor meter comprises a blood glucose meter and the data comprises blood glucose measurement readings. 10. A biosensor meter, comprising: a battery holder configured to receive one or more batteries; a microcontroller configured to be powered by the one or more batteries and to determine a property of an analyte in a fluid; a memory configured to be powered by the one or more batteries and coupled to the microcontroller to store data including a determined property of an analyte in a fluid; a housing configured to house the battery holder, the microcontroller, and the memory; a communications port disposed in the housing and configured to electrically couple a cable received in the communications port to the microcontroller; and a wireless transmitter adapter comprising: a body disposed about at least a portion of the housing; a connector configured to be received in the communications port to electrically couple to the microcontroller; and wireless transmitter circuitry located within the body and electrically coupled to the connector to wirelessly transmit data from the biosensor meter. 11. The biosensor meter of claim 10, wherein the wireless transmitter circuitry is configured to be powered by the one or more batteries. 12. The biosensor meter of claim 10, wherein the body is configured to conform to and surround most of the housing. 13. The biosensor meter of claim 10, wherein the body is configured to replace a battery cover of the biosensor meter. 14. The biosensor meter of claim 10, wherein the biosensor meter comprises a blood glucose meter and the data includes blood glucose measurement readings. 15. 
A method of providing a wireless transmitter adapter configured for a biosensor meter capable of hardwired downloading of data, the method comprising: configuring a body of the wireless transmitter adapter to be disposed about and conform to at least a portion of the biosensor meter; providing a connector configured to be received in a communications port of the biosensor meter to electrically couple to circuitry of the biosensor meter via the communications port; and providing wireless transmitter circuitry within the body, the wireless transmitter circuitry electrically coupled to the connector and configured to wirelessly transmit data from the biosensor meter. 16. The method of claim 15, wherein configuring the body comprises configuring the body to replace a battery cover of the biosensor meter. 17. The method of claim 16, further comprising providing one or more electrical contacts configured to electrically couple to one or more batteries of the biosensor meter. 18. The method of claim 17, further comprising: replacing the battery cover of the biosensor meter with the body of the wireless transmitter adapter such that the one or more electrical contacts electrically couple to the one or more batteries to provide power to the wireless transmitter circuitry; and inserting the connector in the communications port to electrically couple to circuitry in the biosensor meter. 19. The method of claim 15, wherein configuring the body comprises configuring the body to conform to and surround most of the biosensor meter. 20. The method of claim 15, wherein the biosensor meter comprises a blood glucose meter and the data comprises blood glucose measurement readings.
A wireless transmitter adapter ( 206 ) can provide wireless data transmission capability to a battery-operated biosensor meter ( 100 ), such as a blood glucose meter, originally configured for hardwired data downloads. In some embodiments, the wireless transmitter adapter ( 206 ) can be configured to replace a biosensor meter's battery cover ( 106 ). The wireless transmitter adapter ( 206 ) can include wireless transmitter circuitry ( 700 ), a connector ( 218 ) configured to be received in a biosensor meter's communications port ( 112 ) to electrically couple to the meter's processor circuitry ( 107 ), and one or more electrical contacts ( 625 a , 625 b ) configured to electrically couple to the biosensor meter's one or more batteries ( 102 a , 102 b ) to power the wireless transmitter circuitry ( 700 ). In other embodiments, the wireless transmitter adapter ( 206 ) can be configured to surround at least a portion of a biosensor meter ( 100 ) and to include its own battery compartment ( 804 ) to separately provide power to the wireless transmitter circuitry ( 700 ). Methods of providing a wireless transmitter adapter ( 206 ) for battery-operated biosensor meters ( 100 ) are also provided, as are numerous other aspects.
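At its core, the adapter bridges the meter's wired communications port to a radio. As a rough illustration of that data-forwarding idea only, and not the patent's firmware, the sketch below packs one glucose reading into a checksummed packet that could be handed to wireless transmitter circuitry; the packet layout and the `frame_reading`/`read_reading` names are assumptions made for the example.

```python
import struct
import time

def checksum(payload: bytes) -> int:
    """8-bit additive checksum (illustrative only)."""
    return sum(payload) & 0xFF

def frame_reading(glucose_mg_dl: int, timestamp: float) -> bytes:
    """Pack one reading into a hypothetical wire format:
    1-byte sync marker, 4-byte big-endian timestamp,
    2-byte reading in mg/dL, 1-byte checksum."""
    payload = struct.pack(">IH", int(timestamp), glucose_mg_dl)
    return b"\xA5" + payload + bytes([checksum(payload)])

def read_reading() -> int:
    """Stand-in for a value fetched over the meter's communications port."""
    return 104  # mg/dL

packet = frame_reading(read_reading(), time.time())
print(packet.hex())  # 8 bytes, ready to hand to a radio driver
```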
2,600
10,415
10,415
14,683,100
2,616
An animation framework for animating arbitrary changes in a visualization via morphing of geometries is provided. Geometry is captured from the visualization before and after a change, and the captured geometry is used to generate a series of frames that provide a smooth morphing animation of the change. Transitional geometry representing a merged state between the initial geometry and the final geometry of the visualization is generated to build frames between the initial frame and the final frame. The morphing animation may be governed by a timing curve and may be built according to a display rate to ensure a smooth animation.
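Concretely, the claims that follow bound the frame count by the animation duration times the playback rate (a 2-second animation at 30 FPS yields at most 60 frames) and let a timing curve set how fast the geometry progresses. Below is a minimal sketch of that tweening step, assuming geometries reduce to index-paired point lists and defaulting to a linear timing curve:

```python
from typing import Callable, List, Tuple

Point = Tuple[float, float]

def tween(initial: List[Point], final: List[Point], t: float) -> List[Point]:
    """Merged geometry at progress t in [0, 1]; points are paired by index."""
    return [(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
            for (x0, y0), (x1, y1) in zip(initial, final)]

def synthesize_frames(initial: List[Point], final: List[Point],
                      duration_s: float, fps: float,
                      curve: Callable[[float], float] = lambda t: t) -> List[List[Point]]:
    """Never synthesizes more than duration_s * fps frames; the timing
    curve maps elapsed fraction to geometric progress (linear by default)."""
    n = max(1, int(duration_s * fps))
    return [tween(initial, final, curve(i / n)) for i in range(1, n + 1)]

frames = synthesize_frames([(0, 0), (1, 0)], [(0, 1), (2, 3)], duration_s=0.1, fps=30)
print(len(frames), frames[-1])  # 3 frames; the last equals the final snapshot
```

Consistent with claim 3 below, the loop starts at i = 1, so no frame is emitted for the initial snapshot and the last frame coincides with the final snapshot.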
1. A method for creating a morphing animation of a change to a visualization, comprising: taking an initial snapshot of geometries comprising the visualization before the change; taking a final snapshot of the geometries comprising the visualization after the change; caching the initial snapshot and the final snapshot; interpreting the cached snapshots to create merged geometries, wherein the merged geometries represent transitional states between the cached snapshots; synthesizing a plurality of frames renderable as static images to comprise the morphing animation, wherein the frames are generated based on the merged geometries; and transmitting the plurality of frames to a client to be rendered in the visualization, thereby providing the morphing animation of the change to the visualization. 2. The method of claim 1, further comprising: taking an intermediate snapshot of the geometries comprising the visualization during the change; and caching the intermediate snapshot. 3. The method of claim 1, wherein a final frame of the plurality of frames corresponds to the final snapshot, and the plurality of frames does not include a frame corresponding to the initial snapshot. 4. The method of claim 1, wherein generating the plurality of frames further comprises: receiving an animation loop, the animation loop including a duration and a Frames per Second (FPS) rate for playback of the morphing animation, wherein a number of frames of the plurality of frames synthesized does not exceed a number based on the duration and the FPS rate. 5. The method of claim 4, wherein the animation loop further includes a timing curve, wherein the timing curve specifies a rate at which the merged geometries illustrate the change from the initial geometry to the final geometry in the morphing animation. 6. The method of claim 1, wherein the snapshots and the plurality of frames are cached in a storyboard object, wherein the storyboard object is operable to provide repeated playback of the plurality of frames according to a timeline. 7. The method of claim 1, wherein an element of the geometries comprising the initial snapshot is associated with an element of the geometries comprising the final snapshot. 8. The method of claim 7, wherein an element of the geometries comprising the initial snapshot is not present in the final snapshot, further comprising: associating the element with an arbitrary geometry in the final snapshot; and wherein creating the merged geometries for the arbitrary geometry and the element includes at least one of: fading the element out as the morphing animation progresses; shrinking the element to the arbitrary geometry, wherein the arbitrary geometry is a point; and merging the element into the arbitrary geometry, wherein the arbitrary geometry is a neighboring element sharing common geometry with the element in the initial snapshot. 9. The method of claim 7, wherein an element of the geometries comprising the final snapshot is not present in the initial snapshot, further comprising: associating the element with an arbitrary geometry in the initial snapshot; and wherein creating the merged geometries for the arbitrary geometry and the element includes at least one of: fading the element in as the morphing animation progresses; growing the element from the arbitrary geometry, wherein the arbitrary geometry is a point; and splitting the element from the arbitrary geometry, wherein the arbitrary geometry is a neighboring element sharing common geometry with the element in the initial snapshot. 
10. A system for creating a morphing animation of a change to a visualization, comprising: a processor; and a memory storage including instructions, which when executed by the processor are operable to provide: an animation engine operable to provide a morphing animation of a change to a visualization of data, the animation engine including: a snapshot module, operable to respond to the change to the visualization by taking an initial snapshot of geometries comprising the visualization before the change and a final snapshot of the geometries comprising the visualization after the change; a tweener module, operable to receive the initial snapshot and the final snapshot from the snapshot module, and interpret the snapshots to create merged geometries, wherein the merged geometries represent transitional states between the initial snapshot and the final snapshot; a framing module, operable to receive the merged geometries from the tweener module to generate a plurality of frames renderable by a client as static images to comprise the morphing animation, wherein the frames are generated based on the merged geometries; and a buffer module, operable to receive the plurality of frames from the framing module to store the plurality of frames and transmit the plurality of frames to the client to be rendered in the visualization, thereby providing the morphing animation of the change. 11. The system of claim 10, wherein the snapshot module is further operable to take an intermediate snapshot of the geometries comprising the visualization during the change. 12. The system of claim 10, wherein the tweener module is operable to synthesize a number of the merged geometries based on an animation duration and a Frames per Second (FPS) rate specified by the client, wherein the number of the merged geometries does not exceed a number of frames corresponding to the animation duration at the FPS rate. 13. The system of claim 12, wherein the tweener module is operable to synthesize the merged geometries at a highest steady rate that does not exceed the FPS rate. 14. The system of claim 10, wherein the buffer module includes multiple sub-buffers, wherein the sub-buffers enable the framing module to generate the plurality of frames independently from the rendering of the visualization. 15. The system of claim 10, wherein the tweener module is further operable to receive a timing curve, wherein the tweener module applies the timing curve to set a rate of change from the initial geometries to the final geometries in the transitional states. 16. The system of claim 15, wherein the timing curve is linear. 17. The system of claim 10, wherein the buffer module is further operable to store the plurality of frames in sequential order as a storyboard object, wherein the storyboard object is operable to provide repeated playback of the morphing animation of the change. 18. 
A computing device for creating a morphing animation, comprising: a processor; and a memory storage including instructions, which when executed by the processor are operable to: receive a timing curve, the timing curve including a duration of the morphing animation; receive a Frames per Second (FPS) rate at which a client will render the morphing animation; take an initial snapshot of geometries comprising the visualization before the change; take a final snapshot of the geometries comprising the visualization after the change; cache the initial snapshot and the final snapshot as key frames within a storyboard object; interpret the cached snapshots to create merged geometries, wherein the merged geometries represent transitional states between the cached snapshots, wherein the transitional states are determined according to the timing curve; generate a plurality of frames renderable as static images to comprise the morphing animation, wherein the frames are generated based on the merged geometries, and wherein a number of the plurality of frames does not exceed a number based on the duration and the FPS rate; cache the plurality of frames within the storyboard object according to a timeline; and transmit the storyboard object to the client to be rendered in the visualization, wherein the storyboard object is operable to provide repeated playback of the morphing animation of the change. 19. The computing device of claim 18, wherein an element of the geometries comprising the initial snapshot is not present in the final snapshot, further comprising: associating the element with an arbitrary geometry in the final snapshot; and wherein creating the merged geometries for the arbitrary geometry and the element includes at least one of: fading the element out as the morphing animation progresses; shrinking the element to the arbitrary geometry, wherein the arbitrary geometry is a point; and merging the element into the arbitrary geometry, wherein the arbitrary geometry is a neighboring element sharing common geometry with the element in the initial snapshot. 20. The computing device of claim 18, wherein an element of the geometries comprising the final snapshot is not present in the initial snapshot, further comprising: associating the element with an arbitrary geometry in the initial snapshot; and wherein creating the merged geometries for the arbitrary geometry and the element includes at least one of: fading the element into the geometries comprising the final snapshot as the morphing animation progresses; growing the element from the arbitrary geometry, wherein the arbitrary geometry is a point; and splitting the element from the arbitrary geometry, wherein the arbitrary geometry is a neighboring element sharing common geometry with the element in the initial snapshot.
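The storyboard object of claims 6, 17, and 18 is essentially a cache of key frames plus synthesized frames that can be replayed along a timeline. A minimal container sketch under that reading, with frames treated as opaque values:

```python
from typing import Any, Iterator, List

class Storyboard:
    """Caches key frames plus synthesized frames for repeated playback."""

    def __init__(self, key_frames: List[Any], frames: List[Any]):
        self.key_frames = key_frames  # e.g. the initial and final snapshots
        self.frames = frames          # sequential frames along the timeline

    def play(self, loops: int = 1) -> Iterator[Any]:
        """Yield frames in timeline order; repeat for looped playback."""
        for _ in range(loops):
            yield from self.frames

sb = Storyboard(key_frames=["initial", "final"], frames=["f1", "f2", "f3"])
print(list(sb.play(loops=2)))  # ['f1', 'f2', 'f3', 'f1', 'f2', 'f3']
```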
2,600
10,416
10,416
13,943,974
2,622
An aspect provides a method, including: accepting, at a touch surface of an information handling device, one or more touch inputs; providing, at the touch surface of the information handling device, one or more visual renderings corresponding to the one or more touch inputs; identifying, using at least one processor, a character included in the one or more touch inputs; and rendering, on a separate display device of the information handling device, the character identified. Other aspects are described and claimed.
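The claimed flow is a short pipeline: accept strokes on the touch surface, echo ink locally, recognize a character, and render it on the separate display. A skeleton of that pipeline follows; the `recognize_character` stub is a placeholder assumption, since a real device would invoke a handwriting-recognition engine:

```python
from typing import List, Tuple

Stroke = List[Tuple[float, float]]  # ordered (x, y) touch samples

def echo_on_touch_surface(strokes: List[Stroke]) -> None:
    """Stand-in for drawing ink on the touch surface's visual layer."""
    print(f"inked {len(strokes)} stroke(s) on touch surface")

def recognize_character(strokes: List[Stroke]) -> str:
    """Hypothetical recognizer stub; a real one would classify the strokes."""
    return "A"

def render_on_display(char: str) -> None:
    """Stand-in for rendering the recognized character on the LCD panel."""
    print(f"display shows: {char}")

strokes = [[(0.0, 0.0), (0.5, 1.0)], [(0.5, 1.0), (1.0, 0.0)], [(0.25, 0.5), (0.75, 0.5)]]
echo_on_touch_surface(strokes)
render_on_display(recognize_character(strokes))
```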
1. A method, comprising: accepting, at a touch surface of an information handling device, one or more touch inputs; providing, at the touch surface of the information handling device, one or more visual renderings corresponding to the one or more touch inputs; identifying, using at least one processor, a character included in the one or more touch inputs; and rendering, on a separate display device of the information handling device, the character identified. 2. The method of claim 1, wherein the touch surface comprises a visual layer. 3. The method of claim 2, wherein the visual layer comprises a cholesteric layer. 4. The method of claim 3, wherein the touch surface comprises a touch input layer. 5. The method of claim 4, wherein the touch input layer comprises a flexible input layer overlaying the cholesteric layer. 6. The method of claim 4, wherein the touch input layer comprises a stylus sensing layer underlying the cholesteric layer. 7. The method of claim 3, further comprising erasing the cholesteric layer following the identifying of the character included in the one or more touch inputs. 8. The method of claim 1, wherein the touch surface is selected from the group of touch surfaces consisting of a touch pad and a digitizer. 9. The method of claim 1, wherein the separate display device comprises an LCD panel. 10. The method of claim 1, wherein the information handling device comprises a clamshell style laptop computer. 11. An information handling device, comprising: a touch surface; a display device; one or more processors; a memory device accessible to the one or more processors and storing code executable by the one or more processors to: accept, at the touch surface, one or more touch inputs; provide, at the touch surface, one or more visual renderings corresponding to the one or more touch inputs; identify, using at least one processor, a character included in the one or more touch inputs; and render, on the display device, the character identified. 12. The information handling device of claim 11, wherein the touch surface comprises a visual layer. 13. The information handling device of claim 12, wherein the visual layer comprises a cholesteric layer. 14. The information handling device of claim 13, wherein the touch surface comprises a touch input layer. 15. The information handling device of claim 14, wherein the touch input layer comprises a flexible input layer overlaying the cholesteric layer. 16. The information handling device of claim 14, wherein the touch input layer comprises a stylus sensing layer underlying the cholesteric layer. 17. The information handling device of claim 13, wherein the code is further executable by the one or more processors to erase the cholesteric layer following identification of the character included in the one or more touch inputs. 18. The information handling device of claim 11, wherein the touch surface is selected from the group of touch surfaces consisting of a touch pad and a digitizer. 19. The information handling device of claim 11, wherein the display device comprises an LCD panel. 20. 
A program product, comprising: a storage device having computer readable program code stored therewith, the computer readable program code comprising: computer readable program code configured to accept, at a touch surface of an information handling device, one or more touch inputs; computer readable program code configured to provide, at the touch surface of the information handling device, one or more visual renderings corresponding to the one or more touch inputs; computer readable program code configured to identify, using at least one processor, a character included in the one or more touch inputs; and computer readable program code configured to render, on a separate display device of the information handling device, the character identified.
2,600
10,417
10,417
14,741,391
2,636
An optical link transmits light between a transmitter and a receiver. The transmitter includes a laser cavity that outputs a laser light signal. The laser cavity is configured such that the mode of the laser light signal hops during operation of the optical link. The transmitter outputs an output light signal that includes light from the laser light signal. The output light signal travels a data travel distance before being received at the receiver. The data travel distance is greater than 0 m and less than 1 km, and the optical link has a Bit Error Rate less than 10⁻¹². In some instances, the laser cavity is an external cavity laser.
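Two of the figures above lend themselves to a mechanical sanity check: the 10⁻¹² Bit Error Rate bound and the per-hop wavelength-error window recited in the claims below. In this sketch the thresholds come from the text, while the measurement values are invented for illustration:

```python
def meets_ber(bit_errors: int, bits_transmitted: int, limit: float = 1e-12) -> bool:
    """True if the observed error ratio stays under the BER limit."""
    return bit_errors / bits_transmitted < limit

def hop_in_window(wavelength_error_nm: float,
                  low_nm: float = 0.15, high_nm: float = 0.25) -> bool:
    """Check a mode hop's wavelength error against the claimed window."""
    return low_nm < wavelength_error_nm < high_nm

print(meets_ber(bit_errors=0, bits_transmitted=10**13))  # True: 0 < 1e-12
print(hop_in_window(0.20))                               # True: inside the window
print(hop_in_window(0.30))                               # False: outside the window
```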
1. An optical system, comprising: a laser cavity on a substrate, the laser cavity outputting a laser light signal that exhibits one or more longitudinal mode hops; the laser cavity having a SideMode Suppression Ratio (SMSR) that is less than 100 dB and greater than 40 dB; the laser cavity having a wavelength error greater than 0.15 nm and less than 0.25 nm for one or more of the mode hops; the laser cavity having a power variation greater than −4 dBm and less than 0.2 dBm for one or more of the mode hops. 2. The system of claim 1, wherein the wavelength error for a first one of the mode hops is greater than 0.178 nm and less than 0.239 nm and the power variation for the first mode hop is greater than −3 dBm and less than 0.1 dBm. 3. The system of claim 1, wherein an optical link includes a transmitter that includes the laser cavity, the optical link has a receiver that receives an output light signal from the transmitter, the output light signal including light from the laser light signal. 4. The system of claim 3, wherein the output light signal travels a data travel distance between the transmitter and the receiver, the data travel distance being greater than 0.5 m and less than 1 km. 5. The system of claim 4, wherein a Bit Error Rate for the system is less than 10⁻¹². 6. The system of claim 4, wherein the data travel distance is less than 500 m. 7. The system of claim 4, wherein the laser cavity includes a cavity waveguide guiding a laser light signal between a gain medium and a partial return device, the partial return device positioned to receive the laser light signal from the cavity waveguide and to return a first portion of the laser light signal to the cavity waveguide and to transmit a second portion of the laser light signal onto an output waveguide. 8. The system of claim 7, wherein the cavity waveguide guides the laser light signal through a medium that is different from the gain medium. 9. The system of claim 7, wherein the partial return device is a Bragg grating. 10. The system of claim 1, wherein each of the one or more mode hops occurs when operating the laser cavity in a functional operating range, the functional operating range occurring at temperatures greater than 55° C. and less than 65° C. and at an applied current greater than 75 mA and less than 100 mA, where the applied current is an amount of electrical current that flows through a gain medium included in the laser cavity. 11. The system of claim 1, wherein the substrate is included in a silicon-on-insulator wafer. 12. The system of claim 1, wherein the laser cavity is an External Cavity Laser. 13. An optical link, comprising: a transmitter that outputs a laser light signal, the transmitter including a laser cavity configured to longitudinally mode hop during operation of the optical link; and a receiver that receives an output light signal from the transmitter, the output light signal including light from the laser light signal, the output light signal traveling a data travel distance between the transmitter and the receiver, the data travel distance being greater than 0.1 m and less than 1 km; and the optical link having a Bit Error Rate less than 10⁻¹². 14. 
The link of claim 13, wherein the laser cavity has at least one condition selected from a group consisting of a wavelength error that is greater than 0.15 nm and less than 0.30 nm for at least one of the mode hops, a power variation that is greater than −10 dBm and less than 0.6 dBm, and a SideMode Suppression Ratio (SMSR) that is less than 100 dB and greater than 30 dB. 15. The link of claim 14, wherein the laser cavity has three of the conditions. 16. The link of claim 13, wherein the laser cavity is included in an external cavity laser. 17. The link of claim 13, wherein the laser cavity includes a cavity waveguide guiding a laser light signal between a gain medium and a partial return device, the partial return device positioned to receive the laser light signal from the cavity waveguide and to return a first portion of the laser light signal to the cavity waveguide and to transmit a second portion of the laser light signal onto an output waveguide. 18. The link of claim 17, wherein the cavity waveguide guides the laser light signal through a medium that is different from the gain medium. 19. The link of claim 18, wherein the partial return device is a Bragg grating. 20. The link of claim 13, wherein each of the one or more mode hops occurs when operating the laser cavity in a functional operating range, the functional operating range occurring at temperatures greater than 55° C. and less than 65° C. and at an applied current greater than 75 mA and less than 100 mA, where the applied current is an amount of electrical current that flows through a gain medium included in the laser cavity.
2,600
10,418
10,418
14,214,346
2,625
A method for assembling a head mounted display includes providing a rigid structural frame, and forming an inner optical assembly by assembling optical components to the structural frame including at least one micro-display configured to generate an image, and at least one reflective optical component configured to direct the image to a user's eye. The method includes assembling an outer frame to the inner optical assembly to provide protection for the optical components and customization of the head-mounted display for the user.
1. A method for assembling a head mounted display, comprising: providing a rigid structural frame; forming an inner optical assembly by assembling optical components to the structural frame including at least one micro-display configured to generate an image, and at least one reflective optical component configured to direct the image to a user's eye; and assembling an outer frame to the inner optical assembly to provide protection for the optical components and customization of the head-mounted display for the user. 2. The method of claim 1, wherein the structural frame maintains the optical components in alignment to define an optical light path for reflectively guiding a light ray bundle from the at least one micro-display to a user's eyes. 3. The method of claim 1, wherein the structural frame includes opposing upper and lower rims between which are defined first and second openings, and wherein the method further comprises: inserting the at least one micro-display into at least one pocket formed in the upper rim. 4. The method of claim 3, and further comprising: incorporating the at least one micro-display into at least one micro-display mechanism that includes mechanisms for user adjustment of focus and interpupillary distance; and wherein the inserting the at least one micro-display into at least one pocket formed in the upper rim comprises inserting the at least one micro-display mechanism with the at least one micro-display incorporated therein into the at least one pocket. 5. The method of claim 3, and further comprising: attaching at least one second mirror to a mount formed on the at least one pocket. 6. The method of claim 1, and further comprising: attaching a compound optical element to the structural frame on a front side of the at least one micro-display. 7. The method of claim 6, wherein the compound optical element includes a first mirror and a third mirror. 8. The method of claim 1, and further comprising: attaching temple arms to the outer frame for securing the head-mounted display to a user's head. 9. The method of claim 1, and further comprising: attaching at least one lens to a front side of the outer frame to provide a front dust covering for protecting the optical components of the inner optical assembly. 10. The method of claim 9, and further comprising: attaching at least one dust cover to a rear side of the outer frame to provide a rear dust covering for protecting the optical components of the inner optical assembly. 11. The method of claim 1, wherein the rigid structural frame defines datums and wherein assembling the optical components to the rigid structural frame includes engaging portions of the optical components to the datums in order to facilitate accurate alignment of the optical components during assembly. 12. 
A head mounted display device comprising: a structural frame including opposing upper and lower rims between which are defined left and right openings; left and right micro-displays coupled to the structural frame, and configured to project visual content in a substantially forward direction toward the left and right openings, respectively, and away from a user; a plurality of optical elements coupled to the structural frame, wherein the structural frame maintains the micro-displays and the optical elements in alignment to define an optical light path for reflectively guiding a light ray bundle from the micro-displays to a user's eyes; and an outer frame coupled to the structural frame, wherein the outer frame provides protection for the micro-displays and optical elements and includes at least one mechanism for securing the head mounted display to a user's head. 13. The head mounted display device of claim 12, wherein the at least one mechanism comprises temple arms. 14. The head mounted display device of claim 12, wherein the structural frame has a higher elastic modulus than the outer frame. 15. The head mounted display device of claim 12, wherein the outer frame comprises a polymer and the structural frame comprises an injection molded or cast magnesium alloy. 16. The head mounted display device of claim 12, wherein the outer frame includes left and right lenses on a front side of the outer frame to provide a front dust covering. 17. The head mounted display device of claim 12, wherein the outer frame includes at least one rear dust cover on a rear side of the outer frame. 18. The head mounted display device of claim 12, wherein the plurality of optical elements includes left and right compound optical elements coupled to the structural frame and respectively positioned over the left and right openings on a front side of the micro-displays. 19. The head mounted display device of claim 18, wherein each of the compound optical elements includes a plurality of mirrors. 20. The head mounted display device of claim 12, wherein the plurality of optical elements includes left and right second mirrors coupled to the structural frame with reflective optical surfaces facing in a substantially forward direction. 21. A method for assembling a head mounted display comprising: providing a structural frame and an outer frame, wherein the structural frame has a higher elastic modulus than the outer frame; attaching optical components to the structural frame including at least one micro-display configured to generate an image, and at least one reflective optical component configured to direct the image to a user's eye; attaching temple arms and at least one lens to the outer frame; and assembling the outer frame with the temple arms and the at least one lens attached thereto to the structural frame with the optical components attached thereto.
2,600
10,419
10,419
15,689,134
2,677
A computer system for processing unstructured data, the computer system comprising a computer processor, a computer memory operatively coupled to the computer processor and the computer memory having disposed within it computer program instructions that, when executed by the processor, cause the computer system to carry out the steps of receiving unstructured data input from a client device, analyzing the unstructured data for features that satisfy logical segment criteria by using natural language processing (NLP), and partitioning the unstructured data into logical segments based on satisfaction of the logical segment criteria.
1. A method, in a data processing system comprising a processor and a memory, for processing unstructured data, the method comprising: receiving, by the data processing system, unstructured data input from a client device; analyzing, by the data processing system, the unstructured data for features that satisfy logical segment criteria by using natural language processing (NLP); and partitioning, by the data processing system, the unstructured data into logical segments based on satisfaction of the logical segment criteria. 2. The method of claim 1 wherein the unstructured data comprise text including a variety of topics or content. 3. The method of claim 1 wherein analyzing the unstructured data for features further comprises using the NLP to identify text that satisfies the logical segment criteria. 4. The method of claim 1 wherein the unstructured data includes compliance obligations. 5. The method of claim 1 wherein the logical segment criteria include features associated with a plurality of industries or companies. 6. The method of claim 1 wherein the logical segment criteria include features associated with importance, priority, or risk. 7. A computer system for processing unstructured data, the computer system comprising a computer processor, a computer memory operatively coupled to the computer processor and the computer memory having disposed within it computer program instructions that, when executed by the processor, cause the computer system to carry out the steps of: receiving unstructured data input from a client device; analyzing the unstructured data for features that satisfy logical segment criteria by using natural language processing (NLP); partitioning the unstructured data into logical segments based on satisfaction of the logical segment criteria; and incorporating the logical segments into a repository. 8. The computer system of claim 7 further comprising the processor linking one or more files from the repository to the logical segments. 9. The computer system of claim 7 further comprising the processor generating files, documents, records, or data entries using the logical segments. 10. The computer system of claim 7 wherein the logical segments comprise one or more pointers, references, linked lists, or data structures. 11. A computer system for natural language processing (NLP), the computer system comprising a computer processor, a computer memory operatively coupled to the computer processor and the computer memory having disposed within it computer program instructions that, when executed by the processor, cause the computer system to carry out the steps of: receiving unstructured data from a user input; decomposing the unstructured data into text fragments; receiving logical segment evaluation criteria from the user input; identifying features of the text fragments; and assigning a score to the text fragments for one or more logical segments. 12. The computer system of claim 11 wherein decomposing the unstructured data into text fragments further comprises the processor grouping text fragments based on logical operators, formatting codes, and punctuation. 13. The computer system of claim 11 further comprising the processor comparing the unstructured data to the logical segment evaluation criteria. 14. The computer system of claim 11 wherein the logical segment evaluation criteria define how the unstructured data is divided into logical segments. 15. 
The computer system of claim 14 wherein the logical segments represent topics, topic types, target audiences, and degrees of importance. 16. The computer system of claim 11 wherein identifying features of the text fragments further comprises the processor using NLP to determine that the text fragments satisfy the logical segment evaluation criteria. 17. The computer system of claim 11 wherein assigning the score to the text fragments further comprises the processor evaluating the text fragments in accordance with the logical segment evaluation criteria. 18. The computer system of claim 11 wherein the score comprises a value that indicates a degree to which the text matches a logical segment based on the logical segment evaluation criteria. 19. A computer program product for processing unstructured data, said computer program product comprising: a computer readable storage medium having stored thereon: program instructions executable by a computer to cause the computer to receive unstructured data input from a client device; program instructions executable by the computer to cause the computer to analyze the unstructured data for features that satisfy logical segment criteria by using natural language processing (NLP); and program instructions executable by the computer to cause the computer to partition the unstructured data into logical segments based on satisfaction of the logical segment criteria. 20. The computer program product of claim 19 wherein the unstructured data comprise text including a variety of topics or content. 21. The computer program product of claim 19 further comprising program instructions executable by the computer to cause the computer to use the NLP to identify text that satisfies the logical segment criteria. 22. The computer program product of claim 19 wherein the unstructured data includes compliance obligations. 23. The computer program product of claim 19 wherein the logical segment criteria include features associated with a plurality of industries or companies. 24. The computer program product of claim 19 wherein the logical segment criteria include features associated with importance, priority, or risk. 25. A computer program product for processing unstructured data, said computer program product comprising: a computer readable storage medium having stored thereon: program instructions executable by a computer to cause the computer to receive unstructured data input from a client device; program instructions executable by the computer to cause the computer to analyze the unstructured data for features that satisfy logical segment criteria by using natural language processing (NLP); program instructions executable by the computer to cause the computer to partition the unstructured data into logical segments based on satisfaction of the logical segment criteria; and program instructions executable by the computer to cause the computer to incorporate the logical segments into a repository.
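To make the claimed flow concrete, here is a minimal Python sketch of the decompose-score-partition loop from claims 11 through 18. It is an illustration under stated assumptions, not the claimed system: the logical segment criteria are reduced to keyword lists, the fragment splitter is simple punctuation-based, and all names (LOGICAL_SEGMENTS, split_fragments, score_fragment, partition) are hypothetical.

import re

# Hypothetical logical segment evaluation criteria: each segment is
# defined by keywords whose presence raises a fragment's score.
LOGICAL_SEGMENTS = {
    "compliance": ["shall", "must", "required", "obligation"],
    "risk": ["risk", "penalty", "liability"],
}

def split_fragments(unstructured_text):
    # Decompose the unstructured data into text fragments on punctuation,
    # a stand-in for the logical operators and formatting codes of claim 12.
    return [f.strip() for f in re.split(r"[.;\n]", unstructured_text) if f.strip()]

def score_fragment(fragment, keywords):
    # Score = fraction of segment keywords present in the fragment, a crude
    # proxy for the degree-of-match value described in claim 18.
    words = set(fragment.lower().split())
    return sum(k in words for k in keywords) / len(keywords)

def partition(unstructured_text, threshold=0.25):
    # Assign each fragment to every logical segment whose score satisfies
    # the criteria, mirroring the partitioning step of claim 1.
    segments = {name: [] for name in LOGICAL_SEGMENTS}
    for fragment in split_fragments(unstructured_text):
        for name, keywords in LOGICAL_SEGMENTS.items():
            if score_fragment(fragment, keywords) >= threshold:
                segments[name].append(fragment)
    return segments

print(partition("Vendors must retain records; fines create liability."))

A real implementation would replace the keyword matching with an NLP model, but the control flow (decompose, score against criteria, partition) is the same.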
2,600
10,420
10,420
14,938,878
2,659
A method for voice triggering, the method may include coupling, by an interface of a voice trigger sensor, the voice trigger sensor to a computer; receiving, by the voice trigger sensor, configuration information from the computer; configuring the voice trigger sensor by using the configuration information; coupling, by the interface, the voice trigger sensor to a target device during a voice activation period; receiving, by a processor of the voice trigger sensor, during the voice activation period, input signals; applying, by the processor, a voice activation process on the input signals to detect a voice command; and at least partially participating in an execution of the voice command.
1. A method for voice triggering, the method comprising: coupling, by an interface of a voice trigger sensor, the voice trigger sensor to a computer; receiving, by the voice trigger sensor, configuration information from the computer; configuring the voice trigger sensor by using the configuration information; coupling, by the interface, the voice trigger sensor to a target device during a voice activation period; receiving, by a processor of the voice trigger sensor, during the voice activation period, input signals; applying, by the processor, a voice activation process on the input signals to detect a voice command; and at least partially participating in an execution of the voice command. 2. The method according to claim 1 wherein the applying of the voice activation process comprises applying user-independent voice activation. 3. The method according to claim 1 wherein the coupling of the voice trigger sensor to the computer occurs during a training period; wherein the configuration information comprises a training result that is generated by the computer during the training period; and wherein the applying of the voice activation process comprises applying, by the processor, a training-based voice activation process on the input signals while using the training result to detect the voice command. 4. The method according to claim 3, comprising generating, by a microphone of the voice trigger sensor, first detection signals, during the training period, in response to first audio signals outputted by a user; sending, by the interface, the detection signals to the computer; and generating, by the microphone, the input signals during the voice activation period. 5. The method according to claim 1, comprising receiving the input signals from the target device. 6. The method according to claim 1, comprising wirelessly coupling the voice trigger sensor to at least one of the computer and the target device. 7. The method according to claim 1, comprising detachably connecting the voice trigger sensor to at least one of the computer and the target device. 8. The method according to claim 1, comprising operating the voice trigger sensor in a first power consuming mode before detecting the voice command and operating the voice trigger sensor in a second power consuming mode in response to the detection of the voice command; wherein a power consumption related to the second power consuming mode exceeds the power consumption related to the first power consuming mode. 9. A voice trigger sensor comprising an interface, a memory module, a power supply module, and a processor; wherein the interface is adapted to couple the voice trigger sensor to a computer and to receive configuration information; wherein the voice trigger sensor is adapted to be configured in response to the configuration information; wherein the interface is adapted to couple the voice trigger sensor to a target device during a voice activation period; and wherein the processor is configured to: (i) receive, during the voice activation period, input signals; (ii) apply a voice activation process on the input signals while using the configuration information to detect a voice command; and (iii) at least partially participate in an execution of the voice command. 10. The voice trigger sensor according to claim 9 wherein the processor is adapted to apply a user-independent voice recognition process on the input signals. 11. 
The voice trigger sensor according to claim 9, wherein the configuration information comprises a training result; wherein the training result is obtained during a training period and while the voice trigger sensor is coupled by the interface to the computer. 12. The voice trigger sensor according to claim 11, wherein the processor is adapted to apply a user-dependent voice recognition process on the input signals while using the training result. 13. The voice trigger sensor according to claim 11, comprising a microphone; wherein the microphone is configured to generate first detection signals, during the training period, in response to first audio signals outputted by a user; wherein the interface is configured to send the detection signals to the computer; and wherein the microphone is configured to generate the input signals during the voice activation period. 14. The voice trigger sensor according to claim 9, wherein the voice trigger sensor does not include a microphone; and wherein the interface is configured to receive the input signals from the target device. 15. The voice trigger sensor according to claim 9, wherein the interface is configured to wirelessly couple the voice trigger sensor to at least one of the computer and the target device. 16. The voice trigger sensor according to claim 9, wherein the interface is configured to be detachably connected to at least one of the computer and the target device. 17. The voice trigger sensor according to claim 9, wherein the voice trigger sensor is configured to operate in a first power consuming mode before the processor detects the voice command and to operate in a second power consuming mode in response to the detection of the voice command; and wherein a power consumption related to the second power consuming mode exceeds the power consumption related to the first power consuming mode. 18. The voice trigger sensor according to claim 9, wherein the interface is configured to receive configuration information from the computer; and wherein the processor is configured to configure the training-based voice activation process in response to the configuration information. 19. The voice trigger sensor according to claim 9, wherein the interface is configured to receive configuration information from the computer; wherein the voice trigger sensor comprises a microphone; and wherein the voice trigger sensor is configured to configure the microphone of the voice trigger sensor in response to the configuration information. 20. A non-transitory computer readable medium that stores instructions that, once executed by a voice trigger sensor, cause the voice trigger sensor to: couple, by an interface of the voice trigger sensor, the voice trigger sensor to a computer; receive, by the voice trigger sensor, configuration information from the computer; configure the voice trigger sensor by using the configuration information; couple, by the interface, the voice trigger sensor to a target device during a voice activation period; receive, by a processor of the voice trigger sensor, during the voice activation period, input signals; apply, by the processor, a voice activation process on the input signals to detect a voice command; and at least partially participate in an execution of the voice command.
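The configure-then-detect lifecycle recited in claims 1 and 8 can be sketched briefly. This is a hedged illustration only: the voice activation process is reduced to a substring match against a computer-supplied training result, and the class and member names (VoiceTriggerSensor, power_mode, configure, process) are hypothetical, not taken from the application.

class VoiceTriggerSensor:
    def __init__(self):
        self.training_result = None
        self.power_mode = "low"          # first power consuming mode

    def configure(self, configuration_information):
        # Configuration information received from the computer over the
        # interface during the training period.
        self.training_result = configuration_information["training_result"]

    def process(self, input_signal):
        # Stand-in for the voice activation process applied to input
        # signals received during the voice activation period.
        if self.training_result and self.training_result in input_signal:
            self.power_mode = "high"     # second power consuming mode
            return "execute: " + input_signal
        return None

sensor = VoiceTriggerSensor()
sensor.configure({"training_result": "hello device"})
print(sensor.process("hello device turn on"))  # detected; mode switches
print(sensor.power_mode)                       # "high"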
2,600
10,421
10,421
15,811,581
2,647
A facility for determining the location of a mobile device when a location determination of a desired accuracy is needed. If available, the facility determines the location of the mobile device using a device-based technique or using a location determination technique that is accessible over a macronetwork. Macronetworks are networks that are designed to cover relatively large areas. If a location determination technique of the desired accuracy is not available on the device or over a macronetwork, the facility attempts to determine the location of the mobile device using a location determination technique that is accessible over a micronetwork. Micronetworks are networks that are designed to cover smaller areas. By forcing a switch from a macronetwork-based location determination technique to a micronetwork-based location determination technique, the facility ensures that a location determination of a desired accuracy, time to fix (TTF), and/or yield is made for the mobile device.
1. At least one non-transitory, computer-readable medium carrying instructions which, when executed by at least one data processor, perform operations to determine a location of a mobile device, the operations comprising: following an evaluation that a geographic location of the mobile device provided via a macronetwork would fail a location determination criterion, obtaining data related to a micronetwork after the mobile device has broadcast a distress message to any micronetworks, wherein location information is obtainable regardless of whether a two-way communication session is established between the mobile device and the micronetwork, and wherein the obtaining of data related to the micronetwork is performed while the mobile device has maintained a communication session with the macronetwork; and determining updated location information for the mobile device based on the data related to the micronetwork, wherein the data related to the micronetwork is provided by the mobile device via the macronetwork. 2. The computer-readable medium of claim 1, wherein the location determination criterion fails when at least one of an accuracy, a time-to-fix, and a yield associated with a macronetwork-obtained location fails to satisfy a desired accuracy, a desired time-to-fix, or a desired yield associated with the mobile device. 3. The computer-readable medium of claim 1, wherein the desired accuracy is an accuracy within 20 meters of an actual location of the mobile device. 4. The computer-readable medium of claim 1, wherein the micronetwork is distinct from the macronetwork. 5. The computer-readable medium of claim 1, wherein the operations interoperate with a switch or a Secure User Plane Location (SUPL). 6. The computer-readable medium of claim 1, wherein the geographic location of the mobile device provided via the macronetwork is determined by a macronetwork-based location determination technique that includes a Global Positioning System, a Time Delay On Arrival (TDOA), an Assisted Global Positioning System (AGPS), or a Round Trip Time (RTT) technique. 7. The computer-readable medium of claim 1, further comprising providing retrieved location information as an indication of the location of the mobile device to a Public Safety Answering Point (PSAP). 8. The computer-readable medium of claim 1, wherein the communication session with the macronetwork comprises communicating via Global System for Mobile Communication (GSM), Time Division Multiple Access (TDMA), Universal Mobile Telecommunication System (UMTS), Evolution-Data Optimized (EVDO), Long-Term Evolution (LTE), Code Division Multiple Access (CDMA), Orthogonal Frequency-Division Multiplexing (OFDM), General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Advanced Mobile Phone System (AMPS), Worldwide Interoperability for Microwave Access (WiMax), or Ultra Mobile Broadband (UMB) protocols, and wherein the broadcasting to any micronetworks comprises communicating via Wireless Fidelity (WiFi), General Access Network (GAN), Unlicensed Mobile Access (UMA), Wireless Universal Serial Bus (WUSB), or ZigBee protocols. 9. 
A computer-implemented method for determining a location of a mobile device, the method comprising: following an evaluation that a geographic location of the mobile device provided via a macronetwork would fail a location determination criterion, obtaining data related to a micronetwork after the mobile device has broadcast a distress message to any micronetworks, wherein location information is obtainable regardless of whether a two-way communication session is established between the mobile device and the micronetwork, and wherein the obtaining of data related to the micronetwork is performed while the mobile device has maintained a communication session with the macronetwork; and determining updated location information for the mobile device based on the data related to the micronetwork, wherein the data related to the micronetwork is provided by the mobile device via the macronetwork. 10. The method of claim 9, wherein the location determination criterion fails when at least one of an accuracy, a time-to-fix, and a yield associated with a macronetwork-obtained location fails to satisfy a desired accuracy, a desired time-to-fix, or a desired yield associated with the mobile device. 11. The method of claim 9, wherein the desired accuracy is an accuracy within 20 meters of an actual location of the mobile device. 12. The method of claim 9, wherein the micronetwork is distinct from the macronetwork. 13. The method of claim 9, wherein the method interoperates with a switch or a Secure User Plane Location (SUPL). 14. The method of claim 9, wherein the geographic location of the mobile device provided via the macronetwork is determined by a macronetwork-based location determination technique that includes a Global Positioning System, a Time Delay On Arrival (TDOA), an Assisted Global Positioning System (AGPS), or a Round Trip Time (RTT) technique. 15. The method of claim 9, further comprising providing retrieved location information as an indication of the location of the mobile device to a Public Safety Answering Point (PSAP). 16. The method of claim 9, wherein the communication session with the macronetwork comprises communicating via Global System for Mobile Communication (GSM), Time Division Multiple Access (TDMA), Universal Mobile Telecommunication System (UMTS), Evolution-Data Optimized (EVDO), Long-Term Evolution (LTE), Code Division Multiple Access (CDMA), Orthogonal Frequency-Division Multiplexing (OFDM), General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Advanced Mobile Phone System (AMPS), Worldwide Interoperability for Microwave Access (WiMax), or Ultra Mobile Broadband (UMB) protocols, and wherein the broadcasting to any micronetworks comprises communicating via Wireless Fidelity (WiFi), General Access Network (GAN), Unlicensed Mobile Access (UMA), Wireless Universal Serial Bus (WUSB), or ZigBee protocols. 17. 
A system for determining a location of a mobile device, the system comprising: at least one hardware processor; at least one non-transitory memory, coupled to the at least one hardware processor and storing instructions which, when executed by the at least one hardware processor, perform a process, the process comprising: following an evaluation that a geographic location of the mobile device provided via a macronetwork would fail a location determination criterion, obtaining data related to a micronetwork after the mobile device has broadcast a distress message to any micronetworks, wherein location information is obtainable regardless of whether a two-way communication session is established between the mobile device and the micronetwork, and wherein the obtaining of data related to the micronetwork is performed while the mobile device has maintained a communication session with the macronetwork; and determining updated location information for the mobile device based on the data related to the micronetwork, wherein the data related to the micronetwork is provided by the mobile device via the macronetwork. 18. The system of claim 17, wherein the location determination criterion fails when at least one of an accuracy, a time-to-fix, and a yield associated with a macronetwork-obtained location fails to satisfy a desired accuracy, a desired time-to-fix, or a desired yield associated with the mobile device. 19. The system of claim 17, wherein the desired accuracy is an accuracy within 20 meters of an actual location of the mobile device. 20. The system of claim 17, wherein the process interoperates with a switch or a Secure User Plane Location (SUPL).
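The macronetwork-to-micronetwork fallback can be sketched as a simple decision procedure. This is an illustrative sketch under assumptions: the fix and scan functions below return canned values, only the accuracy criterion is checked (not time-to-fix or yield), and every name (macronetwork_fix, broadcast_distress_and_collect, determine_location) is hypothetical rather than from the application.

DESIRED_ACCURACY_M = 20.0

def macronetwork_fix():
    # Pretend AGPS/RTT result: (lat, lon, estimated accuracy in meters).
    return (47.61, -122.33, 150.0)

def broadcast_distress_and_collect():
    # Pretend micronetwork nodes heard after the distress broadcast.
    return [{"id": "ap-1", "lat": 47.6101, "lon": -122.3299}]

def determine_location():
    lat, lon, accuracy = macronetwork_fix()
    if accuracy <= DESIRED_ACCURACY_M:
        return (lat, lon, accuracy, "macronetwork")
    # Criterion failed: fall back to micronetwork-derived data while the
    # macronetwork session is maintained for reporting the result.
    nodes = broadcast_distress_and_collect()
    if nodes:
        # A real system would look the node up in a location database.
        node = nodes[0]
        return (node["lat"], node["lon"], DESIRED_ACCURACY_M, "micronetwork")
    return (lat, lon, accuracy, "macronetwork-fallback")

print(determine_location())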
2,600
10,422
10,422
16,006,163
2,648
Provided are communication devices having adaptable features and methods for implementation. One device includes at least one adaptable component and a processor configured to detect an external cue relevant to operation of the at least one adaptable component, to determine a desired state for the at least one adaptable component corresponding to the external cue, and then to dynamically adapt the at least one adaptable component to substantially produce the desired state. One adaptable component comprises at least one adaptable speaker system. Another adaptable component comprises at least one adaptable antenna.
1. A speaker system, comprising: a speaker; and circuitry configured to: determine a displacement of a component of the speaker by sensing changes in a current draw of the speaker, and determine whether the displacement of the component of the speaker is caused by an external cue. 2. The speaker system of claim 1, wherein the circuitry is further configured to detect a presence of noise in response to determining that the displacement of the component of the speaker is caused by the external cue, and to implement a noise cancelation strategy to cancel the noise. 3. The speaker system according to claim 1, wherein the circuitry is further configured to detect a proximity between the speaker and an object in response to determining that the displacement of the component of the speaker is caused by the external cue. 4. The speaker system according to claim 1, wherein the circuitry is further configured to determine if a user is holding a device that includes the speaker system in response to determining that the displacement of the component of the speaker is caused by the external cue. 5. The speaker system according to claim 1, wherein the circuitry is further configured to detect distortion in response to determining that the displacement of the component of the speaker is caused by the external cue. 6. The speaker system according to claim 5, wherein the circuitry is further configured to apply a frequency-dependent filter to an input signal to the speaker in order to reduce the distortion. 7. The speaker system according to claim 6, wherein the frequency-dependent filter does not affect frequencies of the input signal that do not contribute to the distortion. 8. The speaker system according to claim 1, wherein the speaker system is included in a headset. 9. The speaker system according to claim 1, wherein the speaker system is included in a mobile phone. 10. The speaker system according to claim 1, wherein the circuitry is further configured to determine resonant frequencies of the speaker. 11. The speaker system according to claim 10, wherein the circuitry is further configured to reduce power levels of a subset of frequencies in an input signal to the speaker in order to avoid a resonance event in the speaker. 12. The speaker system according to claim 11, wherein the subset of frequencies includes the resonant frequencies of the speaker. 13. A method for a speaker system, comprising: sensing, with circuitry, changes in current draw of a speaker of the speaker system; determining, with the circuitry, a displacement of a component of the speaker based on the changes in the current draw of the speaker; and determining, with the circuitry, whether the displacement of the component of the speaker is caused by an external cue. 14. The method according to claim 13, further comprising: detecting a presence of noise in response to determining that the displacement of the component of the speaker is caused by the external cue; and implementing a noise cancelation strategy to cancel the noise. 15. The method according to claim 13, further comprising: detecting a proximity between the speaker and an object in response to determining that the displacement of the component of the speaker is caused by the external cue. 16. The method according to claim 13, further comprising: determining if a user is holding a device that includes the speaker system in response to determining that the displacement of the component of the speaker is caused by the external cue. 17. 
The method according to claim 13, further comprising: detecting distortion in response to determining that the displacement of the component of the speaker is caused by the external cue. 18. The method according to claim 13, further comprising: determining resonant frequencies of the speaker. 19. The method according to claim 18, further comprising: reducing power levels of a subset of frequencies in an input signal to the speaker in order to avoid a resonance event in the speaker, the subset of frequencies corresponding to the resonant frequencies of the speaker. 20. A non-transitory computer-readable medium encoded with computer-readable instructions that, when executed by processing circuitry, cause the processing circuitry to perform a method comprising: sensing changes in current draw of a speaker of a speaker system; determining a displacement of a component of the speaker based on the changes in the current draw of the speaker; and determining whether the displacement of the component of the speaker is caused by an external cue.
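Claim 13's sensing chain (current draw to displacement to external-cue decision) can be illustrated with a toy model. The linear current-to-displacement relation and its constants are assumptions made purely for illustration; a real driver needs a calibrated electromechanical model, and the function names here are hypothetical.

def displacement_from_current(current_a, k=0.002):
    # Assume displacement (meters) scales linearly with current draw (amps).
    return k * current_a

def external_cue(measured_current_a, expected_current_a, tolerance=0.1):
    # Compare the displacement inferred from the measured current draw with
    # the displacement the input signal alone should produce; a large
    # mismatch is attributed to an external cue (e.g., a nearby object).
    measured = displacement_from_current(measured_current_a)
    expected = displacement_from_current(expected_current_a)
    return abs(measured - expected) > tolerance * max(expected, 1e-9)

print(external_cue(measured_current_a=0.30, expected_current_a=0.20))   # True
print(external_cue(measured_current_a=0.205, expected_current_a=0.20))  # False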
2,600
10,423
10,423
15,269,129
2,698
In general, techniques are described that facilitate processing of color image data using both monochrome image data and color image data. A device comprising a monochrome camera, a color camera, and a processor may be configured to perform the techniques. The monochrome camera may be configured to capture monochrome image data of a scene. The color camera may be configured to capture color image data of the scene. The processor may be configured to match features of the color image data to features of the monochrome image data, and compute a finite number of shift values based on the matched features of the color image data and the monochrome image data. The processor may further be configured to shift the color image data based on the finite number of shift values to generate enhanced color image data.
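The shift-value computation described here (and detailed in claims 1 and 2 below) can be sketched in a few lines: histogram the distances between matched features and keep the most heavily weighted distances as the finite set of shifts. This is a hedged sketch, not the claimed method: feature matching is replaced by precomputed point pairs, only horizontal shifts are modeled, and the names (shift_values, shift_row) are hypothetical.

import collections

def shift_values(matches, k=2):
    # matches: list of ((x_color, y), (x_mono, y), weight) feature pairs.
    # Build a weighted histogram of horizontal distances and keep the k
    # largest bins as the finite number of shift values.
    histogram = collections.Counter()
    for (xc, _), (xm, _), weight in matches:
        histogram[xm - xc] += weight
    return [shift for shift, _ in histogram.most_common(k)]

def shift_row(row, shift):
    # Shift one row of color pixels by a single non-negative disparity,
    # zero-padding the vacated pixels.
    if shift <= 0:
        return row
    return [0] * shift + row[:-shift]

matches = [((10, 5), (14, 5), 2.0), ((40, 9), (44, 9), 1.5), ((7, 2), (9, 2), 0.5)]
print(shift_values(matches))          # [4, 2]
print(shift_row([1, 2, 3, 4, 5], 2))  # [0, 0, 1, 2, 3]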
1. A method of capturing color image data, the method comprising: capturing, by a monochrome camera of a device, monochrome image data of a scene; capturing, by a color camera of the device, color image data of the scene; matching, by a processor of the device, features of the color image data to features of the monochrome image data; computing, by the processor, a finite number of shift values based on the matched features of the color image data and the monochrome image data, the finite number of shift values mapping pixels of the color image data to pixels of the monochrome image data; and shifting, by the processor, the color image data based on the finite number of shift values to generate enhanced color image data. 2. The method of claim 1, wherein computing the finite number of shift values comprises: computing a weighted histogram of distances between the matched features of the color image data and the monochrome image data; and selecting the finite number of shift values as a threshold number of largest values of the weighted histogram of distances. 3. The method of claim 1, further comprising performing, prior to matching the features of the color image data to the features of the monochrome image data, intensity equalization with respect to a luma component of the color image data and a luma component of the monochrome image data to correct for differences in intensity between the color camera and the monochrome camera. 4. The method of claim 1, further comprising, prior to matching the features of the color image data to the features of the monochrome image data, determining a parallax value indicative of a level of parallax between the monochrome image data and the color image data, and determining whether the parallax value is greater than a parallax threshold, wherein matching the features of the color image data to the features of the monochrome image data comprises matching, in response to the determination that the parallax value is greater than the parallax threshold, the features of the color image data to the features of the monochrome image data. 5. The method of claim 4, wherein performing intensity equalization comprises applying a trained regressor to the luma component of the color image data or the shifted color image data and the luma component of the monochrome image data to adapt the luma component of the monochrome image data or the luma component of the color image data. 6. The method of claim 5, wherein the regressor comprises a ridge regressor. 7. The method of claim 1, wherein capturing the monochrome image data comprises capturing two or more different monochrome image data over a period of time, wherein the method further comprises processing the two or more different monochrome image data to generate a single combined monochrome image data, and wherein matching the features of the color image data to the features of the monochrome image data comprises matching the features of the color image data to the features of the single combined monochrome image data. 8. The method of claim 1, wherein capturing the color image data comprises capturing two or more sets of color image data over a period of time, wherein the method further comprises processing the two or more sets of color image data to generate a single combined color image data, and wherein matching the features of the color image data to the features of the monochrome image data comprises matching the features of the single combined color image data to the features of the monochrome image data. 
9. The method of claim 1, wherein capturing the monochrome image data comprises capturing two or more different monochrome image data over a period of time, wherein capturing the color image data comprises capturing two or more sets of color image data over the period of time, and wherein the method further comprises: processing the two or more different monochrome image data to generate a single combined monochrome image data; and processing the two or more sets of color image data to generate a single combined color image data, and wherein matching the features of the color image data to the features of the monochrome image data comprises matching the features of the single combined color image data to the features of the single combined monochrome image data. 10. The method of claim 1, further comprising setting, in response to a determination that a parallax value indicative of a level of parallax between the monochrome image data and the color image data is not greater than a parallax threshold, a chroma component of the enhanced image data equal to a chroma component of the color image data. 11. A device configured to capture color image data, the device comprising: a monochrome camera configured to capture monochrome image data of a scene; a color camera configured to capture color image data of the scene; and a processor configured to: match features of the color image data to features of the monochrome image data, compute a finite number of shift values based on the matched features of the color image data and the monochrome image data, the finite number of shift values mapping pixels of the color image data to pixels of the monochrome image data; and shift the color image data based on the finite number of shift values to generate enhanced color image data. 12. The device of claim 11, wherein the processor is configured to: compute a weighted histogram of distances between the matched features of the color image data and the monochrome image data; and select the finite number of shift values as a threshold number of largest values of the weighted histogram of distances. 13. The device of claim 11, wherein the processor is further configured to perform, prior to matching the features of the color image data to the features of the monochrome image data, intensity equalization with respect to a luma component of the color image data and a luma component of the monochrome image data to correct for differences in intensity between the color camera and the monochrome camera. 14. The device of claim 11, wherein the processor is further configured to, prior to matching the features of the color image data to the features of the monochrome image data, determine a parallax value indicative of a level of parallax between the monochrome image data and the color image data, and determine whether the parallax value is greater than a parallax threshold, wherein the processor is configured to match, in response to the determination that the parallax value is greater than the parallax threshold, the features of the color image data to the features of the monochrome image data. 15. The device of claim 14, wherein the processor is configured to apply a trained regressor to the luma component of the color image data or the shifted color image data and the luma component of the monochrome image data to adapt the luma component of the monochrome image data. 16. The device of claim 15, wherein the regressor comprises a ridge regressor. 17. 
The device of claim 11, wherein the monochrome camera is configured to capture two or more different monochrome image data over a period of time, wherein the processor is further configured to process the two or more different monochrome image data to generate a single combined monochrome image data, and wherein the processor is configured to match the features of the color image data to the features of the single combined monochrome image data. 18. The device of claim 11, wherein the color camera is configured to capture two or more sets of color image data over a period of time, wherein the processor is further configured to process the two or more sets of color image data to generate a single combined color image data, and wherein the processor is configured to match the features of the single combined color image data to the features of the monochrome image data. 19. The device of claim 11, wherein the monochrome camera is configured to capture two or more different monochrome image data over a period of time, wherein the color camera is configured to capture two or more sets of color image data over the period of time, and wherein the processor is further configured to: process the two or more different monochrome image data to generate a single combined monochrome image data; and process the two or more sets of color image data to generate a single combined color image data, and wherein the processor is configured to match the features of the single combined color image data to the features of the single combined monochrome image data. 20. The device of claim 11, wherein the processor is further configured to set, in response to a determination that a parallax value indicative of a level of parallax between the color image data and the monochrome image data is not greater than a parallax threshold, a chroma component of the enhanced image data equal to a chroma component of the color image data. 21. A device configured to capture color image data, the device comprising: means for capturing monochrome image data of a scene; means for capturing color image data of the scene; means for matching features of the color image data to features of the monochrome image data; means for computing a finite number of shift values based on the matched features of the color image data and the monochrome image data, the finite number of shift values mapping pixels of the color image data to pixels of the monochrome image data; and means for shifting the color image data based on the finite number of shift values to generate enhanced color image data. 22. A non-transitory computer-readable medium having stored thereon instructions that, when executed, cause one or more processors of a device to: interface with a monochrome camera to capture monochrome image data of a scene; interface with a color camera to capture color image data of the scene; match features of the color image data to features of the monochrome image data; compute a finite number of shift values based on the matched features of the color image data and the monochrome image data, the finite number of shift values mapping pixels of the color image data to pixels of the monochrome image data; and shift the color image data based on the finite number of shift values to generate enhanced color image data.
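Claims 1-2 and 11-12 describe the core computation: histogram the displacements between matched monochrome/color features, keep the largest (weighted) bins as the finite set of shift values, and shift the color data accordingly. A minimal numpy sketch of that idea follows; it assumes the features are already matched into two arrays of pixel coordinates, and np.roll stands in for whatever per-region shifting the application actually performs:

    import numpy as np

    def finite_shift_values(mono_pts, color_pts, num_shifts=3, weights=None):
        # Per-match displacement (dy, dx) between matched feature locations.
        shifts = np.round(np.asarray(mono_pts, float)
                          - np.asarray(color_pts, float)).astype(int)
        uniq, inverse = np.unique(shifts, axis=0, return_inverse=True)
        inverse = inverse.ravel()  # guard against numpy versions returning 2-D
        counts = np.bincount(inverse, weights=weights, minlength=len(uniq))
        order = np.argsort(counts)[::-1]  # largest (weighted) bins first
        return uniq[order[:num_shifts]]   # the finite set of shift values

    def shift_color(color, shift):
        # Stand-in for shifting the color data by one selected shift value.
        return np.roll(color, shift=(int(shift[0]), int(shift[1])), axis=(0, 1))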
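Claims 5-6 and 15-16 mention a trained (ridge) regressor for intensity equalization between the two luma components. A sketch of one plausible form, a per-pixel affine map fitted with L2 regularization in closed form, is below; the affine parameterization and the regularization weight are assumptions for illustration, not the application's actual regressor:

    import numpy as np

    def fit_ridge(luma_color, luma_mono, lam=1.0):
        # Closed-form ridge fit of an affine map luma_color -> luma_mono:
        # w = (X^T X + lam * I)^-1 X^T y. The affine form and lam are
        # assumptions for the sketch.
        x = np.asarray(luma_color, float).ravel()
        y = np.asarray(luma_mono, float).ravel()
        X = np.stack([x, np.ones_like(x)], axis=1)  # slope and offset terms
        return np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

    def equalize(luma_color, w):
        # Adapt one luma component toward the other before feature matching.
        return w[0] * np.asarray(luma_color, float) + w[1]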
2,600
10,424
10,424
14,706,838
2,623
An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems.
1. An augmented reality display system, comprising: an optical apparatus to project light associated with one or more virtual objects to a user, wherein the one or more virtual objects include a virtual user interface; a user interface component to receive user input in response to an interaction of the user with at least a component of the virtual user interface; and a processor to receive the user input and to determine an action to be performed based at least in part on the received user input. 2. The augmented reality display system of claim 1, wherein the user interface component comprises a tracking module to track at least one characteristic of the user. 3. The augmented reality display system of claim 2, wherein the at least one characteristic pertains to the user's eyes. 4. The augmented reality display system of claim 2, wherein the at least one characteristic pertains to the user's hands. 5. The augmented reality display system of claim 2, wherein the at least one characteristic pertains to a totem of the user. 6. The augmented reality display system of claim 2, wherein the at least one characteristic pertains to a head pose of the user. 7. The augmented reality display system of claim 2, wherein the at least one characteristic pertains to a natural feature pose of the user. 8. The augmented reality display system of claim 1, wherein the virtual user interface is rendered relative to a predetermined reference frame. 9. The augmented reality display system of claim 8, wherein the predetermined reference frame is head-centered. 10. The augmented reality display system of claim 8, wherein the predetermined reference frame is body-centered. 11. The augmented reality display system of claim 8, wherein the predetermined reference frame is world-centered. 12. The augmented reality display system of claim 8, wherein the predetermined reference frame is hand-centered. 13. The augmented reality display system of claim 1, wherein the projection of the virtual user interface is based at least in part on environmental data. 14. The augmented reality display system of claim 1, further comprising a database to store a map of the real world, wherein the map comprises coordinates of real objects of the real world, and wherein the projection of the virtual user interface is based at least in part on the stored map. 15. The augmented reality display system of claim 1, wherein the user interface component comprises one or more sensors. 16. The augmented reality display system of claim 15, wherein the one or more sensors include a camera. 17. The augmented reality display system of claim 15, wherein the one or more sensors include a haptic sensor. 18. The augmented reality display system of claim 15, wherein the one or more sensors include a motion-based sensor. 19. The augmented reality display system of claim 15, wherein the one or more sensors include a voice-based sensor. 20. The augmented reality display system of claim 1, wherein the user interface component comprises a gesture detector.
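Claims 8-12 turn on rendering the virtual user interface relative to a predetermined reference frame (head-, body-, world-, or hand-centered). A minimal sketch of how such a choice might be applied is below; the 4x4 pose matrices and the dictionary layout are placeholders for whatever tracking data the system actually supplies, not details from the application:

    import numpy as np

    # Hypothetical frame -> world pose matrices; identity placeholders where
    # a real system would substitute live head/body/hand tracking data.
    FRAMES = {
        "world": np.eye(4),
        "head": np.eye(4),
        "body": np.eye(4),
        "hand": np.eye(4),
    }

    def ui_to_world(ui_offset, reference="head"):
        # Compose the UI's local offset with the chosen reference frame's
        # pose, yielding the world-space transform at which to render it.
        local = np.eye(4)
        local[:3, 3] = ui_offset
        return FRAMES[reference] @ local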
2,600
10,425
10,425
15,482,941
2,622
A method of operating a plurality of electrodes, and a related processing system and input device are disclosed. The method comprises driving, within a first time period, a plurality of sensor electrodes with a first signal. A first portion of the plurality of sensor electrodes defines a first sensing region within a first region, and a second portion of the plurality of sensor electrodes defines a first border region within the first region. The method further comprises driving a plurality of mitigation electrodes with a second signal of opposite polarity to mitigate electromagnetic emissions resulting from driving the plurality of sensor electrodes. The plurality of mitigation electrodes defines a second region adjacent to the first border region. The method further comprises acquiring, responsive to driving the plurality of sensor electrodes, first capacitive measurements with the first portion of the sensor electrodes.
1. An input device comprising: a first plurality of sensor electrodes defining a first region, wherein a first portion of the first plurality of sensor electrodes defines a sensing region within the first region, and wherein a second portion of the first plurality of sensor electrodes defines a border region within the first region; a plurality of mitigation electrodes defining a second region that is adjacent to the border region; and a processing system configured to: drive, while driving the first plurality of sensor electrodes with a first signal, the plurality of mitigation electrodes with a second signal having an opposite polarity to the first signal, to mitigate electromagnetic emissions resulting from driving the first plurality of sensor electrodes; and acquire, responsive to driving the first plurality of sensor electrodes with the first signal, capacitive measurements with the first portion of the first plurality of sensor electrodes. 2. The input device of claim 1, wherein the second signal has an amplitude that is selected to provide a desired mitigation of electromagnetic emissions. 3. The input device of claim 2, wherein the first region corresponds to a first area and the second region corresponds to a second area that is less than the first area, and wherein the amplitude is greater than an amplitude of the first signal. 4. The input device of claim 1, wherein the sensing region includes a third portion of the first plurality of sensor electrodes, and wherein the sensor electrodes of the third portion are configured to operate as guarding sensor electrodes for sensor electrodes of the first portion. 5. The input device of claim 1, wherein the plurality of mitigation electrodes comprise a second plurality of sensor electrodes. 6. The input device of claim 5, wherein the first plurality of sensor electrodes are arranged in a repeating grid pattern defining a plurality of rows and a plurality of columns, and wherein, during a first time period: the first portion of the first plurality of sensor electrodes includes at least a first row of the plurality of rows, the second portion of the first plurality of sensor electrodes includes at least a second row of the plurality of rows, and the second plurality of sensor electrodes includes at least a third row of the plurality of rows. 7. The input device of claim 6, wherein sensor electrodes included in the plurality of rows are distinct from sensor electrodes included in the plurality of columns, and wherein, during the first time period: the sensor electrodes included in the plurality of columns are configured to operate as guarding sensor electrodes. 8. The input device of claim 6, wherein, during a different second time period: the first portion of the first plurality of sensor electrodes includes at least a first column of the plurality of columns, the second portion of the first plurality of sensor electrodes includes at least a second column of the plurality of columns, and the second plurality of sensor electrodes includes at least a third column of the plurality of columns. 9. The input device of claim 1, wherein at least one of (i) a first number of sensor electrodes included in the second portion and (ii) a second number of the plurality of mitigation electrodes is selected to provide a desired mitigation of electromagnetic emissions resulting from driving the first plurality of sensor electrodes. 10. 
The input device of claim 1, wherein the capacitive measurements comprise absolute capacitive sensing measurements for sensor electrodes of the first portion. 11. A processing system comprising: sensor circuitry for operating a plurality of electrodes, the sensor circuitry configured to: drive, within a first time period, a first plurality of sensor electrodes of the plurality of electrodes with a first signal, the first plurality of sensor electrodes defining a first region, wherein a first portion of the first plurality of sensor electrodes defines a first sensing region within the first region, and wherein a second portion of the first plurality of sensor electrodes defines a first border region within the first region; drive, while driving the first plurality of sensor electrodes within the first time period, a plurality of mitigation electrodes of the plurality of electrodes with a second signal having an opposite polarity to the first signal, the plurality of mitigation electrodes defining a second region that is adjacent to the first border region, to mitigate electromagnetic emissions resulting from driving the first plurality of sensor electrodes; and acquire, within the first time period and responsive to driving the first plurality of sensor electrodes, first capacitive measurements with the first portion of the first plurality of sensor electrodes. 12. The processing system of claim 11, wherein during the first time period the plurality of mitigation electrodes comprise a second plurality of sensor electrodes of the plurality of electrodes, wherein the sensor circuitry is further configured to: drive, within a different second time period, a third plurality of sensor electrodes of the plurality of electrodes with a third signal, the third plurality of sensor electrodes defining a third region different from the first region, wherein a third portion of the third plurality of sensor electrodes defines a second sensing region within the third region, and wherein a fourth portion of the third plurality of sensor electrodes defines a second border region within the third region; drive, while driving the third plurality of sensor electrodes within the second time period, a fourth plurality of sensor electrodes of the plurality of electrodes with a fourth signal having an opposite polarity to the third signal, the fourth plurality of sensor electrodes defining a fourth region that is adjacent to the second border region, to mitigate electromagnetic emissions resulting from driving the third plurality of sensor electrodes; and acquire, within the second time period and responsive to driving the third plurality of sensor electrodes, second capacitive measurements with the third portion of the third plurality of sensor electrodes. 13. The processing system of claim 12, wherein the first sensing region corresponds to a first sensing axis, and wherein the second sensing region corresponds to the first sensing axis or to a second sensing axis substantially orthogonal to the first sensing axis. 14. 
The processing system of claim 11, wherein the plurality of electrodes are arranged in a repeating grid pattern defining a plurality of rows and a plurality of columns, wherein the first portion of the first plurality of sensor electrodes includes at least a first row of the plurality of rows, wherein the second portion of the first plurality of sensor electrodes includes at least a second row of the plurality of rows, and wherein the plurality of mitigation electrodes includes at least a third row of the plurality of rows. 15. The processing system of claim 14, wherein sensor electrodes included in the plurality of rows are distinct from sensor electrodes included in the plurality of columns, and wherein, within the first time period, sensor electrodes included in the plurality of columns are configured to be operated as guarding sensor electrodes. 16. The processing system of claim 11, wherein the sensor circuitry is further configured to select at least one of (i) a first number of sensor electrodes included in the second portion, (ii) a second number of the plurality of mitigation electrodes, and (iii) an amplitude of the second signal to provide a desired mitigation of electromagnetic emissions resulting from driving the first plurality of sensor electrodes. 17. A method of operating a plurality of electrodes, the method comprising: driving, within a first time period, a first plurality of sensor electrodes of the plurality of electrodes with a first signal, the first plurality of sensor electrodes defining a first region, wherein a first portion of the first plurality of sensor electrodes defines a first sensing region within the first region, and wherein a second portion of the first plurality of sensor electrodes defines a first border region within the first region; driving, while driving the first plurality of sensor electrodes within the first time period, a plurality of mitigation electrodes of the plurality of electrodes with a second signal having an opposite polarity to the first signal, the plurality of mitigation electrodes defining a second region that is adjacent to the first border region, to mitigate electromagnetic emissions resulting from driving the first plurality of sensor electrodes; and acquiring, responsive to driving the first plurality of sensor electrodes, first capacitive measurements with the first portion of the first plurality of sensor electrodes. 18. 
The method of claim 17, wherein during the first time period the plurality of mitigation electrodes comprise a second plurality of sensor electrodes of the plurality of electrodes, the method further comprising: driving, within a different second time period, a third plurality of sensor electrodes of the plurality of electrodes with a third signal, the third plurality of sensor electrodes defining a third region different from the first region, wherein a third portion of the third plurality of sensor electrodes defines a second sensing region within the third region, and wherein a fourth portion of the third plurality of sensor electrodes defines a second border region within the third region; driving, while driving the third plurality of sensor electrodes within the second time period, a fourth plurality of sensor electrodes of the plurality of electrodes with a fourth signal having an opposite polarity to the third signal, the fourth plurality of sensor electrodes defining a fourth region that is adjacent to the second border region, wherein driving the fourth plurality of sensor electrodes provides a desired mitigation of electromagnetic emissions resulting from driving the third plurality of sensor electrodes; and acquiring, responsive to driving the third plurality of sensor electrodes, second capacitive measurements with the third portion of the third plurality of sensor electrodes. 19. The method of claim 18, wherein the first sensing region corresponds to a first sensing axis, and wherein the second sensing region corresponds to the first sensing axis or to a second sensing axis substantially orthogonal to the first sensing axis. 20. The method of claim 17, further comprising: selecting at least one of (i) a first number of sensor electrodes included in the second portion, (ii) a second number of the plurality of mitigation electrodes, and (iii) an amplitude of the second signal to provide a desired mitigation of electromagnetic emissions resulting from driving the first plurality of sensor electrodes.
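Claim 3 (together with claims 2, 9, 16, and 20) ties the mitigation signal's amplitude to the relative sizes of the driven and mitigation regions: a smaller mitigation region needs a larger opposite-polarity amplitude. A toy calculation consistent with that relationship is sketched below; the emission-proportional-to-amplitude-times-area model is an assumption for illustration, not the application's actual electromagnetic analysis:

    def mitigation_amplitude(drive_amplitude, driven_area, mitigation_area):
        # Toy model: net far-field emission ~ amplitude * electrode area, so
        # nulling it requires the opposite-polarity signal to satisfy
        # A_mit * mitigation_area = A_drive * driven_area.
        return drive_amplitude * driven_area / mitigation_area

    # A mitigation region half the size of the driven region needs twice the
    # amplitude, matching the relationship in claim 3.
    assert mitigation_amplitude(1.0, 40.0, 20.0) == 2.0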
2,600
10,426
10,426
14,665,413
2,621
Techniques for using ink for interaction are described. According to various embodiments, ink and touch input may be combined to provide diverse input scenarios. According to various embodiments, ink can be used to reconfigure a document. According to various embodiments, ink can be employed to interact with a map in various ways.
1. A system comprising: one or more processors; and one or more computer-readable storage media storing computer-executable instructions that, responsive to execution by the one or more processors, cause the system to perform operations including: detecting that freehand ink content is applied to a display via a pen; identifying a touch gesture that is applied to the display; and modifying the ink content based on the touch gesture by mapping the touch gesture to a particular operation to be performed on the ink content. 2. The system as described in claim 1, wherein the touch gesture is identified as touch input via at least one finger while the pen is in contact with the display. 3. The system as described in claim 1, wherein said modifying comprises converting the freehand ink content into a machine-encoded shape. 4. The system as described in claim 1, wherein the touch gesture comprises one or more fingers in contact with the display, and wherein touch gestures with different numbers of fingers in contact with the display are mapped to different respective operations to be performed on the ink content. 5. The system as described in claim 1, wherein the touch gesture comprises a finger motion on the display, and wherein touch gestures with different finger motions are mapped to different respective operations to be performed on the ink content. 6. The system as described in claim 1, wherein the freehand ink content comprises a freehand line with at least some curvature, and wherein said modifying comprises converting the freehand line to a straight line. 7. The system as described in claim 1, wherein the touch gesture comprises a finger in contact with the freehand line, wherein said modifying comprises converting the freehand line to a straight line, and wherein the operations further include causing the straight line to pivot about a point on the display with which the finger is in contact in response to user input to the straight line. 8. The system as described in claim 1, wherein said modifying comprises converting the freehand ink content into a machine-encoded shape, and wherein the operations further include: identifying a further touch gesture that is applied to the display; and performing an operation on the machine-encoded shape based on the further touch gesture. 9. A system comprising: one or more processors; and one or more computer-readable storage media storing computer-executable instructions that, responsive to execution by the one or more processors, cause the system to perform operations including: detecting that an ink line is applied to a document displayed on a display device via a pen; converting, by the system, the ink line into a document divider that divides the document into a first portion on a first side of the document divider, and a second portion on a second side of the document divider; and receiving input to one or more of the first portion or the second portion to cause a space to be inserted between the first portion and the second portion. 10. A system as recited in claim 9, wherein the operations further include causing a visual affordance to be displayed that indicates that one or more of the first portion or the second portion of the document are manipulable to cause the space to be inserted. 11. 
A system as recited in claim 9, wherein said converting is performed in response to a recognition operation that recognizes the ink line as a command to generate the document divider and that is performed independent of user input after applying the ink line. 12. A system as recited in claim 9, wherein the input comprises a drag operation to drag the one or more of the first portion or the second portion within the display device, and wherein a size of the space is proportional to a size of the drag operation. 13. A system comprising: one or more processors; and one or more computer-readable storage media storing computer-executable instructions that, responsive to execution by the one or more processors, cause the system to perform operations including: receiving ink input via a pen to trace a travel route within a map displayed on a display device; retrieving information about the travel route; and presenting the information on the display device. 14. A system as recited in claim 13, wherein the ink input traces the travel route over one or more travel paths displayed within the map, and wherein the information includes information about the one or more travel paths. 15. A system as recited in claim 13, wherein said retrieving and said presenting are performed automatically in response to said receiving and independent of user input after tracing the travel route. 16. A system as recited in claim 13, wherein said retrieving comprises: recognizing, independent of user input after tracing the travel route, that the travel route occurs between two locations, wherein the information identifies the two locations and includes information about navigating the travel route between the two locations. 17. A system as recited in claim 13, wherein the operations further include, prior to said receiving ink input: receiving a user selection of a travel path within the map; and highlighting the travel path for ink input within the map. 18. A system as recited in claim 13, wherein the operations further include, prior to said receiving ink input: receiving a user selection of a travel path within the map; and highlighting the travel path for ink input within the map, wherein the travel route is traced over less than a highlighted portion of the travel path. 19. A system as recited in claim 13, wherein the operations further include, prior to said receiving ink input: receiving user selections of different travel paths within the map; and highlighting the different travel paths for ink input within the map, wherein the travel route is traced over portions of the different travel paths. 20. A system as recited in claim 13, wherein the operations further include: propagating the ink input tracing the travel route to a transient ink layer; and enabling the transient ink layer to be accessible to retrieve the ink input.
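Claims 4-6 describe mapping touch gestures, distinguished by finger count, to operations on freehand ink, including straightening a curved line. A minimal Python sketch of that dispatch follows; the least-squares straightening and the specific gesture-to-operation table are illustrative assumptions, not the application's mapping:

    import numpy as np

    def straighten(stroke):
        # Claim 6: replace a freehand line having some curvature with its
        # degree-1 least-squares fit, i.e. a straight line.
        pts = np.asarray(stroke, dtype=float)
        t = np.linspace(0.0, 1.0, len(pts))
        fx = np.polyfit(t, pts[:, 0], 1)
        fy = np.polyfit(t, pts[:, 1], 1)
        return np.stack([np.polyval(fx, t), np.polyval(fy, t)], axis=1)

    # Claim 4: different numbers of fingers map to different operations.
    # The table entries here are placeholders.
    GESTURE_OPS = {
        1: straighten,
        2: lambda stroke: np.asarray(stroke)[::-1],  # e.g. reverse the stroke
    }

    def modify_ink(stroke, fingers_down):
        op = GESTURE_OPS.get(fingers_down)
        return op(stroke) if op is not None else stroke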
2,600
10,427
10,427
15,830,802
2,622
A system and method for generating motion commands based on detected motion of a user, the system including: a processing circuitry; an orientation sensor; a communication interface; and a housing having the processing circuitry, the orientation sensor, and the communication interface disposed therein, the housing further configured to be securely fastened to a body of a user; wherein the orientation sensor is configured to provide sensor readings indicative of a three-dimensional motion of a user; wherein the processing circuitry is configured to receive the sensor readings from the orientation sensor; determine an initial reference position based on the sensor readings; determine a current user position based on the sensor readings; and determine the motion commands based on an angle between the current user position and the initial reference position; and wherein the communication interface is configured to relay the motion commands to a VR device.
1. A virtual reality (VR) motion controller, comprising: a processing circuitry; an orientation sensor; a communication interface; and a housing having the processing circuitry, the orientation sensor, and the communication interface disposed therein, the housing further configured to be securely fastened to a body of a user; wherein the orientation sensor is configured to provide sensor readings indicative of a three-dimensional motion of a user; wherein the processing circuitry is configured to: receive the sensor readings from the orientation sensor; determine an initial reference position based on the sensor readings; determine a current user position based on the sensor readings; and translate the initial reference position and the current user position into motion commands; and wherein the communication interface is configured to relay the motion commands to a VR device. 2. The VR motion controller of claim 1, further comprising: a fastener attached to the housing and configured to secure the housing to the chest of the user. 3. The VR motion controller of claim 1, further comprising: a power source configured to power the processing circuitry, the orientation sensor, and the communication interface, wherein the power source is disposed in the housing. 4. The VR motion controller of claim 1, wherein the orientation sensor is further configured to: determine a current user position relative to an initial reference position, wherein the position of a user is determined based on the rotation of the orientation sensor along at least one axis. 5. The VR motion controller of claim 1, wherein the current position of a user is determined using quaternion values representing rotation from the initial reference position. 6. The VR motion controller of claim 5, wherein at least one of the quaternion values is computed along a single axis. 7. (canceled) 8. The VR motion controller of claim 1, wherein the motion commands include at least one of: forward walk, forward run, backward walk, backward run, strafe left walk, strafe left run, strafe right walk, strafe right run, rotate left walk, rotate left run, rotate right walk, and rotate right run. 9. The VR motion controller of claim 1, wherein the processing circuitry is further configured to: recalibrate the initial reference position based on comparing a new reference position to the initial reference position, such that any subsequent determination of user position is calculated based on the new reference position. 10. The VR motion controller of claim 9, wherein the new reference position is determined by converting quaternion values of the initial reference position to quaternion values of the new reference position. 11. The VR motion controller of claim 1, wherein the processing circuitry is further configured to: determine the motion commands based on the motion of the user when the sensor readings exceed a predetermined threshold. 12. The VR motion controller of claim 11, wherein the threshold is adjustable. 13. A method for generating motion commands based on detected motion of a user, comprising: determining an initial reference position; determining a current user position based on the initial reference position; translating the initial reference position and the current user position into motion commands; and sending the motion commands to a VR device. 14. 
The method of claim 13, wherein the motion commands include at least one of: forward walk, forward run, backward walk, backward run, strafe left walk, strafe left run, strafe right walk, strafe right run, rotate left walk, rotate left run, rotate right walk, and rotate right run. 15. The method of claim 13, further comprising: computing the difference between the current user position and the initial reference position using quaternion values to determine the current user position. 16. The method of claim 15, wherein at least one of the quaternion values is calculated along a single axis. 17. The method of claim 15, further comprising: recalibrating the initial reference position based on comparing a new reference position to the initial reference position, such that any subsequent determination of user position is calculated based on the new reference position. 18. The method of claim 17, wherein the new reference position is determined by converting quaternion values of the initial reference position to quaternion values of the new reference position. 19. The method of claim 13, further comprising: determining the motion commands based on the motion of the user when the sensor readings exceed a predetermined threshold. 20. The method of claim 19, wherein the threshold is adjustable. 21. A non-transitory computer readable medium having stored thereon instructions for causing one or more processing units to execute a process for generating motion commands based on detected motion of a user, the process comprising: determining an initial reference position; determining a current user position based on the initial reference position; translating the initial reference position and the current user position into motion commands; and sending the motion commands to a VR device.
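The abstract's angle-based mapping can be sketched with unit quaternions. The following is a minimal illustration, assuming quaternions in (w, x, y, z) order; the threshold values and command strings are invented for the example and do not come from the claims.

```python
import math

def quat_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_multiply(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def motion_command(q_ref, q_cur, walk_deg=10.0, run_deg=25.0):
    """Map the angle between current and reference orientation to a command."""
    q_rel = quat_multiply(q_cur, quat_conjugate(q_ref))
    w = max(-1.0, min(1.0, abs(q_rel[0])))
    angle = math.degrees(2.0 * math.acos(w))
    if angle < walk_deg:  # below the (adjustable) threshold: stay idle
        return "idle"
    return "forward walk" if angle < run_deg else "forward run"
```

Recalibration as in claims 9 and 17 would then amount to replacing q_ref with a newly captured orientation, after which all subsequent angles are computed against the new reference.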
2,600
10,428
10,428
15,542,457
2,689
The disclosure relates to a method for detecting the passing of a motor vehicle through a road sign gantry, having the steps: receiving information on the surroundings, detecting road signs in the information on the surroundings, selecting a first road sign and a second road sign which together form a road sign gantry, acquiring position data for the first road sign and for the second road sign from the information on the surroundings, determining a gantry width between the first road sign and the second road sign, determining a first distance of the motor vehicle from the first road sign, determining a second distance of the motor vehicle from the second road sign, and detecting the passing through of the vehicle as a function of the gantry width, the first distance and the second distance.
1. A method for detecting the passage of a motor vehicle through a road-sign gantry, the method comprising: receiving environmental information; recognizing road signs in the environmental information; selecting a first road sign and a second road sign of the recognized road signs which together constitute the road-sign gantry; ascertaining position data for the first road sign and for the second road sign from the environmental information; determining a gantry width between the first road sign and the second road sign; determining a first distance of the motor vehicle from the first road sign; determining a second distance of the motor vehicle from the second road sign; and detecting the passage of the motor vehicle through the road-sign gantry as a function of the gantry width, the first distance and the second distance. 2. The method according to claim 1, further comprising: checking the gantry width for plausibility; and discarding the road-sign gantry in response to the gantry width being implausible. 3. The method according to claim 1, further comprising: ascertaining an angle between a straight line defined by the road-sign gantry and an axis of the motor vehicle; checking the angle for plausibility; and discarding the road-sign gantry in response to the angle being implausible. 4. The method according to claim 1, the detecting of the passage further comprising: detecting the passage of the motor vehicle through the road-sign gantry as a function of whether a sum of the first distance and the second distance corresponds to the gantry width. 5. The method according to claim 1, the detecting of the passage further comprising: detecting the passage of the motor vehicle through the road-sign gantry in response to at least one of a minimum and a point of inflection being reached in a temporal progression of a sum of the first distance and the second distance. 6. The method according to claim 1, further comprising: determining the position data as a function of a trajectory of the motor vehicle. 7. The method according to claim 6, further comprising: ascertaining the trajectory of the motor vehicle using inertial sensors. 8. The method according to claim 1, further comprising: predicting a future trajectory of the motor vehicle; and determining a future passage of the motor vehicle through the road-sign gantry as a function of the future trajectory. 9. A control-and-evaluation unit for detecting a passage of a motor vehicle through a road-sign gantry, the control-and-evaluation unit being configured to: receive environmental information; recognize road signs in the environmental information; select a first road sign and a second road sign of the recognized road signs which together constitute the road-sign gantry; ascertain position data for the first road sign and for the second road sign from the environmental information; determine a gantry width between the first road sign and the second road sign; determine a first distance of the motor vehicle from the first road sign; determine a second distance of the motor vehicle from the second road sign; and detect the passage of the motor vehicle through the road-sign gantry as a function of the gantry width, the first distance, and the second distance. 10.
A non-transitory computer program product that, when executed by a control-and-evaluation unit, is configured to cause the control-and-evaluation unit to: receive environmental information; recognize road signs in the environmental information; select a first road sign and a second road sign of the recognized road signs which together constitute a road-sign gantry; ascertain position data for the first road sign and for the second road sign from the environmental information; determine a gantry width between the first road sign and the second road sign; determine a first distance of a motor vehicle from the first road sign; determine a second distance of the motor vehicle from the second road sign; and detect a passage of the motor vehicle through the road-sign gantry as a function of the gantry width, the first distance, and the second distance. 11. The non-transitory computer program product of claim 10, wherein the computer program product is stored on a machine-readable storage medium.
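Claims 4 and 5 describe the passage test concretely: the sum of the two sign distances approaches the gantry width and passes through a minimum as the vehicle drives through. A hedged sketch of that test follows; the tolerance parameter is an assumption, not a value from the application.

```python
def detect_passage(d1_series, d2_series, gantry_width, tolerance=0.5):
    """Return the sample index at which the vehicle passed the gantry, else None.

    d1_series and d2_series are time-ordered distances (e.g. in meters) to
    the first and second road sign.
    """
    sums = [d1 + d2 for d1, d2 in zip(d1_series, d2_series)]
    for i in range(1, len(sums) - 1):
        # Claim 5: a minimum in the temporal progression of the sum ...
        is_local_minimum = sums[i] <= sums[i - 1] and sums[i] <= sums[i + 1]
        # ... combined with claim 4: the sum corresponds to the gantry width.
        if is_local_minimum and abs(sums[i] - gantry_width) <= tolerance:
            return i
    return None
```

A plausibility check in the spirit of claim 2 would discard candidate gantries whose computed width falls outside typical roadway geometries before this test runs.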
2,600
10,429
10,429
15,937,150
2,613
Systems and methods are disclosed for secret sharing for secure collaborative graphical design. Graphical secret shares are generated from a three-dimensional graphical design and distributed to one or more contributor devices. Contributor graphical designs modifying graphical secret shares may be received from contributor devices. Various corresponding and related systems, methods, and software are described.
1. A system for secure collaborative graphical design using secret sharing, the system comprising: a secret owner device; a memory, operatively connected to the secret owner device, the memory configured to store a three-dimensional graphical design including: a first three-dimensional form including a first three-dimensional shape and a first dimension set in three dimensions; and at least a local geometric feature; a secret share generator executing on the secret owner device, the secret share generator designed and configured to generate at least a graphical secret share, wherein generating the at least a graphical secret share further comprises: generating a three-dimensional geometric primitive; replacing the first three-dimensional form with the three-dimensional geometric primitive; generating at least a dummy feature, wherein the at least a dummy feature further comprises at least a duplicate of the at least a local geometric feature; and combining the three-dimensional geometric primitive with the at least a local geometric feature and the at least a dummy feature, wherein the at least a local geometric feature and the at least a dummy feature display concurrently in the at least a graphical secret share; and a contributor interface executing on the secret owner device, the contributor interface designed and configured to provide the at least a graphical secret share to at least a contributor device. 2. (canceled) 3. The system of claim 1, wherein the secret share generator is further designed and configured to generate a secret share key as a function of the at least a graphical secret share and the three-dimensional graphical design. 4. The system of claim 3, wherein the secret share generator is further designed and configured to generate a plurality of key shares using a secret sharing protocol. 5. The system of claim 1, wherein the contributor interface further comprises a graphical user interface. 6. The system of claim 1 further comprising an interrogation engine executing on the secret owner device, the interrogation engine designed and configured to extract the at least a local geometric feature from the three-dimensional graphical design. 7. The system of claim 6, wherein the interrogation engine is further designed and configured to evaluate a contributor graphical design received from the at least a contributor device for manufacturing feasibility. 8. The system of claim 6, wherein the three-dimensional graphical design further comprises at least a global constraint. 9. The system of claim 8, wherein the interrogation engine is further designed and configured to evaluate a contributor graphical design received from the at least a contributor device for compliance with the at least a global constraint. 10. The system of claim 8, wherein the secret share generator is further designed and configured to generate at least a secret share constraint as a function of the at least a global constraint. 11. The system of claim 10, wherein the interrogation engine is further designed and configured to evaluate a contributor graphical design received from the at least a contributor device for compliance with the at least a secret share constraint. 12. The system of claim 1 further comprising a merge engine executing on the secret owner device, the merge engine designed and configured to generate at least a combined three-dimensional graphical design as a function of the three-dimensional graphical design and at least a contributor graphical design. 13.
The system of claim 12 further comprising an interrogation engine designed and configured to evaluate the combined three-dimensional graphical design for manufacturing feasibility. 14. The system of claim 12, wherein the three-dimensional graphical design further comprises at least a global constraint, and further comprising an interrogation engine designed and configured to evaluate the combined three-dimensional graphical design for compliance with the at least a global constraint. 15. The system of claim 1 wherein the secret owner device is incorporated in an automated manufacturing system. 16. The system of claim 1 wherein the secret owner device is an automated manufacturing device. 17. The system of claim 1, wherein the memory is provided according to a cloud storage protocol.
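The share-generation step in claim 1 is essentially obfuscation by substitution and padding: swap the protected form for a bland primitive, then hide the genuine local features among displaced duplicates. A toy sketch under heavy assumptions; the dict-based design representation, the field names, and the jitter scheme are all invented for illustration and are not a real CAD format.

```python
import copy
import random

def bounding_box(form):
    """Stand-in geometric primitive covering the original form's extent."""
    return {"type": "box", "dimensions": form["extent"]}

def make_graphical_share(design, dummies_per_feature=2, jitter=5.0):
    share = {"form": bounding_box(design["form"]), "features": []}
    for feature in design["features"]:
        share["features"].append(feature)  # the genuine local feature
        for _ in range(dummies_per_feature):
            # Dummy feature: a displaced duplicate, per claim 1.
            dummy = copy.deepcopy(feature)
            dummy["origin"] = tuple(c + random.uniform(-jitter, jitter)
                                    for c in feature["origin"])
            share["features"].append(dummy)
    random.shuffle(share["features"])  # genuine and dummy display together
    return share
```

A secret share key as in claim 3 could then record which entries are genuine, and claim 4's secret sharing protocol (for example, Shamir's scheme) would split that key among the contributors.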
2,600
10,430
10,430
15,685,453
2,642
Techniques for managing power utilization on a mobile platform, comprising a power application. The power application may include, among other components, a power monitoring component to monitor at least one component and/or at least one application of a mobile device to determine device profile information, and a power management component to provide the determined device profile information of a mobile device to a server device and receive predicted information representative of forecasted power utilization of the mobile device and/or context sensitive recommendation information representative of one or more context sensitive recommendations for the mobile device.
1. A computer-implemented method, comprising: monitoring one or more of a component or an application of a mobile device to determine device profile information of the mobile device; transmitting the determined device profile information to a server device; and receiving predicted information representative of one or more of forecasted power utilization of the mobile device or context sensitive recommendation information representative of one or more context sensitive recommendations for the mobile device, wherein the predicted information or the context sensitive recommendation information is determined based at least partially on the device profile information of the mobile device. 2. The computer-implemented method of claim 1, wherein the device profile information further comprises one or more of device application information, device component information, device event information, or device location information, and the predicted information further comprises predicted context information and predicted power event information. 3. The computer-implemented method of claim 1, wherein the predicted information comprises predicted power curve information generated based at least partially on one or more of future power curve information, past power curve information, or analytics power curve information. 4. The computer-implemented method of claim 3, wherein the future power curve information is generated based at least partially on corresponding future location context information representing one or more of scheduled locations associated with the mobile device or corresponding future event context information representative of future event contexts that identify at least one scheduled device event. 5. The computer-implemented method of claim 1, further comprising: presenting context sensitive recommendation information representative of at least one context sensitive recommendation on a display screen of the mobile device, the at least one context sensitive recommendation comprising a first context sensitive recommendation for reducing power utilization of the mobile device based at least partially on one or more of a corresponding analytics context or a future context. 6. The computer-implemented method of claim 5, wherein the at least one context sensitive recommendation comprises one or more of a context sensitive application recommendation, a context sensitive component recommendation, or a context sensitive power recommendation. 7. The computer-implemented method of claim 1, wherein the device profile information is provided to the server device on a predefined interval. 8. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to: monitor one or more of a component or an application of a mobile device to determine device profile information of the mobile device; transmit the determined device profile information to a server device; and receive predicted information representative of one or more of forecasted power utilization of the mobile device or context sensitive recommendation information representative of one or more context sensitive recommendations for the mobile device, wherein the predicted information or the context sensitive recommendation information is determined based at least partially on the device profile information of the mobile device. 9.
The medium of claim 8, wherein the device profile information further comprises one or more of device application information, device component information, device event information, or device location information, and the predicted information further comprises predicted context information and predicted power event information. 10. The medium of claim 8, wherein the predicted information comprises predicted power curve information generated based at least partially on one or more of future power curve information, past power curve information, or analytics power curve information. 11. The medium of claim 10, wherein the future power curve information is generated based at least partially on corresponding future location context information representing one or more of scheduled locations associated with the mobile device or corresponding future event context information representative of future event contexts that identify at least one scheduled device event. 12. The medium of claim 8, further storing instructions for: presenting context sensitive recommendation information representative of at least one context sensitive recommendation on a display screen of the mobile device, the at least one context sensitive recommendation comprising a first context sensitive recommendation for reducing power utilization of the mobile device based at least partially on one or more of a corresponding analytics context or a future context. 13. The medium of claim 12, wherein the at least one context sensitive recommendation comprises one or more of a context sensitive application recommendation, a context sensitive component recommendation, or a context sensitive power recommendation. 14. An apparatus, comprising: a processor circuit; memory operatively coupled to the processor circuit, the memory to store a mobile power application for execution by the processor circuit, the mobile power application comprising: a power monitoring component to monitor one or more of a component or an application of a mobile device to determine device profile information of the mobile device; and a power management component configured to provide the determined device profile information to a server device and receive predicted information representative of one or more of forecasted power utilization of the mobile device or context sensitive recommendation information representative of one or more context sensitive recommendations for the mobile device, wherein the predicted information or the context sensitive recommendation information is determined based at least partially on the device profile information of the mobile device. 15. The apparatus of claim 14, wherein the device profile information further comprises one or more of device application information, device component information, device event information, or device location information, and the predicted information comprises predicted context information and predicted power event information. 16. The apparatus of claim 14, wherein the predicted information comprises predicted power curve information generated based at least partially on one or more of past power curve information or analytics power curve information. 17. The apparatus of claim 14, wherein the predicted information comprises future power curve information generated based at least partially on device application event information. 18.
The apparatus of claim 14, wherein the predicted information comprises future power curve information generated based at least partially on analytics model information for other mobile devices, and the application has not been previously executed by the mobile device. 19. The apparatus of claim 18, wherein the mobile device is associated with a user, and the processor circuit is configured to socially connect the user to other users of a social networking system based at least partially on user profile information associated with the user and the other users. 20. The apparatus of claim 14, further comprising a display configured to present, visually, context sensitive recommendation information representative of at least one context sensitive recommendation, the at least one context sensitive recommendation comprising a recommendation for reducing power utilization of the mobile device based at least partially on one or more of a corresponding analytics context or future context.
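One way to read the analytics power curve claims is a per-context average of historical curves. The sketch below is an assumption-laden illustration; the context keying, the averaging model, and the recommendation text are invented, not drawn from the application.

```python
import numpy as np

def predict_power_curve(past_curves, context_key):
    """Average the equal-length power curves recorded under the same context.

    past_curves maps a context key (e.g. a location plus a scheduled event)
    to a list of equal-length sample arrays of power draw over time.
    """
    history = past_curves.get(context_key)
    if not history:
        return None  # no analytics available for this context yet
    return np.mean(np.asarray(history, dtype=float), axis=0)

def recommend(predicted_curve, battery_mwh, threshold=0.9):
    """Emit a context sensitive recommendation if the forecast drains the battery."""
    if predicted_curve is not None and predicted_curve.sum() > threshold * battery_mwh:
        return "Disable background sync before the next scheduled event."
    return None
```

With past_curves keyed by, say, ("office", "video call"), a device could fetch the forecast for its next calendar entry and surface the recommendation on screen, as in claim 5.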
2,600
10,431
10,431
15,783,822
2,616
In accordance with one implementation, a system for rendering dimensional surface content in a low-memory environment includes a dimensional surface content rendering tool to generate an animation object file defining inputs to a particle system, and an application that generates scene instructions based on output received from the particle system, the scene instructions including coordinate information for rendering an object at a series of positions. The system further includes a graphics engine that autonomously produces a series of draw commands responsive to receipt of the scene instructions to render multiple complete frames of an animation in a window of the application, the animation depicting the object at the series of positions.
1. A system comprising: memory; at least one processor; a dimensional surface content rendering tool stored in the memory and executable by the at least one processor to generate an animation object file defining inputs to a particle system; an application stored in the memory and executable by the at least one processor to generate scene instructions based on output received from the particle system, the scene instructions including coordinate information defining a time-dependent position function for rendering an object at a series of positions; and a graphics engine that receives the scene instructions from the application and utilizes the time-dependent position function to autonomously produce a series of draw commands responsive to receipt of the scene instructions to render multiple complete frames of an animation in a window of the application, the animation depicting the object at the series of positions. 2. The system of claim 1, wherein the scene instructions to the graphics engine are transmitted via a graphics layer application programming interface (API). 3. The system of claim 1, wherein the coordinate information includes information for rendering multiple objects that move with respect to one another throughout the animation. 4. The system of claim 1, wherein the animation object file defines at least one predefined behavior to be applied to a particle emitted by the particle system. 5. (canceled) 6. The system of claim 1, wherein the object corresponds to a first particle emitted by the particle system and the application is further configured to: receive additional coordinate information from the particle system while the animation is being rendered in the window of the application, the additional coordinate information describing the time-dependent position function for a second particle emitted by the particle system; and communicate updated scene instructions to the graphics engine responsive to receipt of the additional coordinate information at the application, the updated scene instructions effective to add the second particle to the animation without disrupting the animation. 7. The system of claim 1, wherein the application is a low-memory application. 8. The system of claim 1, wherein the animation is an interactive animation. 9. A method comprising: receiving output from a particle system including coordinate information defining a time-dependent position function for rendering at least one object at a series of positions; communicating scene instructions from an application to a graphics engine, the scene instructions including the coordinate information from the particle system and effective to autonomously generate a series of draw commands within the graphics engine to render multiple complete frames of an animation within a window of the application; and executing the communicated scene instructions within the graphics engine to render the animation within the window of the application, the animation including the at least one object moving through the series of positions defined by the time-dependent position function. 10. The method of claim 9, wherein the communicated scene instructions include coordinate information for rendering multiple objects that move with respect to one another throughout the animation. 11. The method of claim 9, further comprising: defining inputs to the particle system, the inputs specifying at least one predefined behavior controlling movement of an associated particle throughout a predefined lifetime.
12. (canceled) 13. The method of claim 9, further comprising: receiving an animation object file generated by a dimensional surface content rendering tool, the animation object file defining one or more particle objects of a particle system; and initializing the particle system based on the particle objects defined in the animation object file. 14. The method of claim 9, wherein the at least one object corresponds to a first particle spawned by the particle system. 15. The method of claim 14, wherein the object corresponds to a first particle emitted by the particle system and the method further comprises: receiving additional coordinate information from the particle system while the animation is being rendered in the window of the application, the additional coordinate information including a time-dependent position function for a second particle emitted by the particle system; and communicating updated scene instructions to the graphics engine responsive to receipt of the additional coordinate information at the application, the updated scene instructions effective to add the second particle to the animation without disrupting the animation. 16. The method of claim 9, wherein the application is a low-memory application. 17. One or more computer-readable storage media of a tangible article of manufacture encoding computer-executable instructions for executing on a computer system a computer process comprising: receiving output from a particle system including coordinate information that defines at least one time-dependent position function useable to determine a series of positions for at least one object; communicating scene instructions from an application to a graphics engine, the scene instructions including the coordinate information from the particle system and effective to autonomously generate a series of draw commands within the graphics engine to render multiple complete frames of an animation within a window of the application; and executing the communicated scene instructions within the graphics engine to render the animation within the window of the application, the animation including the at least one object moving through the series of positions defined by the time-dependent position function. 18. The computer-readable storage media of claim 17, wherein the computer process further comprises: receiving an animation object file generated by a dimensional surface content rendering tool, the animation object file defining one or more particle objects of a particle system; and initializing the particle system based on the particle objects defined in the animation object file. 19. The computer-readable storage media of claim 17, wherein the object corresponds to a first particle emitted by the particle system and the computer process further comprises: receiving additional coordinate information from the particle system while the animation is being rendered in the window of the application, the additional coordinate information including a time-dependent position function for a second particle emitted by the particle system; and communicating updated scene instructions to the graphics engine responsive to receipt of the additional coordinate information at the application, the updated scene instructions effective to add the second particle to the animation without disrupting the animation. 20. The computer-readable storage media of claim 17, wherein the application is a low-memory application.
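For illustration, the pipeline recited above (the application hands the graphics engine a time-dependent position function once, after which the engine emits draw commands for every frame on its own) can be sketched roughly as follows. This is a minimal, hypothetical Python model, not the patent's implementation; the names `SceneInstruction`, `GraphicsEngine`, and `linear_path` are invented for the example.

```python
# Hypothetical sketch: the application submits scene instructions once;
# the engine then autonomously produces a draw command per object per frame.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Position = Tuple[float, float]

@dataclass
class SceneInstruction:
    object_id: int
    position_fn: Callable[[float], Position]  # time -> (x, y)

class GraphicsEngine:
    def __init__(self) -> None:
        self.scene: List[SceneInstruction] = []

    def submit(self, instructions: List[SceneInstruction]) -> None:
        # One-time hand-off from the application; no per-frame callbacks.
        self.scene.extend(instructions)

    def render(self, frame_count: int, dt: float) -> None:
        # The engine emits draw commands for every frame without further
        # involvement from the application.
        for frame in range(frame_count):
            t = frame * dt
            for instr in self.scene:
                x, y = instr.position_fn(t)
                print(f"DRAW obj={instr.object_id} frame={frame} at ({x:.3f}, {y:.3f})")

def linear_path(x0: float, y0: float, vx: float, vy: float) -> Callable[[float], Position]:
    # A particle's time-dependent position function, as a particle system might output.
    return lambda t: (x0 + vx * t, y0 + vy * t)

engine = GraphicsEngine()
engine.submit([SceneInstruction(0, linear_path(0.0, 0.0, 1.0, 0.5))])
engine.render(frame_count=3, dt=1.0 / 60.0)
# A second particle could later be added with another submit() call
# without disrupting the frames already being rendered.
```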
2,600
10,432
10,432
15,637,658
2,656
Apparatuses and systems for conserving power for a portable electronic device that monitors local audio for a wakeword are described herein. In a non-limiting embodiment, a portable electronic device may have two phases. The first phase may be a first circuit that stores an audio input while determining whether human speech is present in the audio input. The second phase may be a second circuit that activates when the first circuit determines that human speech is present in the audio input. The second circuit may receive the audio input from the first circuit, store the audio input, and determine whether a wakeword is present within the audio input.
1. An electronic device comprising: a microphone operable to receive an analog audio input signal; a low-power circuit that utilizes less power to operate while active than power to operate the electronic device, the low-power circuit comprising: an analog-to-digital converter operable to: receive the analog audio input signal; and convert the analog audio input to a digital signal; a voice activity detector operable to: receive the digital signal from the analog-to-digital converter; analyze the digital signal to determine that the digital signal includes a digital representation of spoken words; and output a first switch signal when the digital representation of the spoken words is present in the digital signal; a first memory buffer circuit operable to: receive the digital signal from the analog-to-digital converter; and output the digital signal in response to the voice activity detector determining that the digital signal includes a digital representation of spoken words; and a medium-power circuit that utilizes more power to operate than the low-power circuit, but less than the power to operate the electronic device, and that operates in standby mode until it receives an interrupt signal, the medium-power circuit comprising: an activation circuit that activates the medium-power circuit in response to receiving the first switch signal from the low-power circuit; a wakeword detection circuit operable to: receive the digital signal from the low-power circuit; and analyze the digital signal to determine that a digital representation of a wakeword is present in the digital signal, the wakeword being any keyword or phrase that, when detected, signals that the electronic device should be activated and results in the medium-power circuit outputting the digital signal to a language processing system that analyzes the digital signal; and a second memory buffer circuit operable to: receive the digital signal from the low-power circuit; and output the digital signal in response to the wakeword detection circuit determining a digital representation of the wakeword is present in the digital signal. 2. The electronic device of claim 1, the low-power circuit further comprising: a pre-wakeword detection circuit that operates in standby mode until the first switch signal is received, the pre-wakeword detection circuit operable to: analyze the digital signal to determine that the digital signal includes a digital representation of the wakeword beyond a predetermined threshold, the predetermined threshold being set to a value such that the rate of false acceptances of digital representations of the wakeword per hour is limited while reducing the percentage of false rejections of digital representations of the wakeword; and provide a second signal that activates the medium-power circuit; and a pre-wakeword memory buffer circuit operable to: receive the digital signal from the first memory buffer circuit; and output the digital signal in response to the pre-wakeword detection circuit determining beyond a predetermined threshold that a digital representation of the wakeword is present in the digital signal. 3. The electronic device of claim 2, the pre-wakeword detection circuit being a digital signal processor and the predetermined threshold being set by varying the operational characteristics of the digital signal processor. 4. 
The electronic device of claim 1, further comprising: communications circuitry operable to: output the digital signal to a language processing system in response to the digital representation of the wakeword being present in the digital signal; receive a third signal that causes the electronic device to go from standby mode to active mode; receive content responsive to a request included within the digital signal; and a display screen operable to: remain in standby mode until a second interrupt signal is received; display the content; and return to standby mode after displaying the content. 5. An electronic device comprising: a microphone operable to receive an audio input; a first circuit that utilizes less power to operate while active than power to operate the electronic device, the first circuit comprising: a voice activity detector operable to: receive the audio input; analyze the audio input to determine that a digital representation of spoken words is present in the audio input; and output a first signal in response to determining that the digital representation of spoken words is present in the audio input; and a second circuit that utilizes more power than the first circuit but less than the power to operate the electronic device, and that operates in standby mode until it receives an interrupt signal, the second circuit comprising: an activation circuit that activates the second circuit in response to receiving the first signal from the first circuit; a wakeword detection circuit operable to: receive the audio input from the first circuit; and analyze the audio input to determine that a digital representation of a wakeword is present in the audio input. 6. (canceled) 7. The electronic device of claim 5, the first circuit further comprising: a sub-circuit comprising: a pre-wakeword detection circuit operable to: analyze the audio input to determine that the audio input comprises a digital representation of the wakeword beyond a predetermined threshold; and provide a second signal that activates the second circuit. 8. The electronic device of claim 7, the predetermined threshold being set to a value such that an utterance that comprises a digital representation of the wakeword is rejected in less than 15% of instances where utterances comprise a digital representation of the wakeword. 9. The electronic device of claim 7, wherein the sub-circuit is configured to operate in a standby mode until the sub-circuit receives the first signal. 10. The electronic device of claim 5, the first signal being an interrupt signal that causes the second circuit to stop any action the second circuit is performing at the time the second circuit receives the interrupt signal. 11. 
A system comprising: an electronic device comprising: a microphone operable to receive an audio input; a first circuit that utilizes less power to operate while active than power to operate the electronic device, the first circuit comprising: a voice activity detector operable to: receive the audio input; analyze the audio input to determine that a digital representation of spoken words is present in the audio input; and output a first signal in response to determining that the digital representation of spoken words is present in the audio input; and a second circuit that utilizes more power than the first circuit but less than the power to operate the electronic device, and that operates in standby mode until it receives an interrupt signal, the second circuit comprising: an activation circuit that activates the second circuit in response to receiving the first signal from the first circuit; a wakeword detection circuit operable to: receive the audio input from the first circuit; and analyze the audio input to determine that a digital representation of a wakeword is present in the audio input; and provide a third signal that results in the second circuit outputting the audio input to a language processing system that analyzes the audio input; communications circuitry operable to output the audio input; a display screen operable to display visual content; and a speaker operable to output audio data; and a language processing system comprising: memory; communications circuitry; and at least one processor operable to: receive, from the second circuit, the audio input; generate first text data representing the audio input; determine, using the first text data, that an intent of the audio input is to receive an answer; receive second text data representing the answer; generate audio data representing the second text data; and output the audio data. 12. (canceled) 13. The system of claim 11, the first circuit further comprising: a sub-circuit comprising a pre-wakeword detection circuit operable to: analyze the audio input to determine that the audio input comprises a digital representation of the wakeword beyond a predetermined threshold; and provide a second signal that activates the second circuit. 14. The system of claim 13, the predetermined threshold being set to a value such that an utterance that comprises the wakeword is rejected in less than 15% of instances where utterances comprise the wakeword. 15. The system of claim 14, the predetermined threshold being set at a value such that the rate at which an utterance that does not comprise the wakeword is accepted is larger than the corresponding acceptance rate of the second circuit. 16. The system of claim 13, wherein the sub-circuit is configured to operate in a standby mode until the sub-circuit receives the first signal. 17. The system of claim 11, the display screen further operable to operate in standby mode until the second circuit receives an interrupt signal. 18. The system of claim 11, the display screen further operable to operate in standby mode until the second circuit determines the wakeword is present. 19. The system of claim 11, the display screen further operable to return to standby mode after displaying the visual content for a predetermined amount of time. 20. The system of claim 19, the predetermined amount of time being based on the visual content.
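The two-stage power-saving flow claimed above can be caricatured in a few lines: a low-power stage buffers audio while running a cheap voice-activity test, and only a positive result wakes the higher-power stage, which scans the buffered audio for the wakeword. This is a hedged sketch; the non-empty-frame test and substring match stand in for real voice-activity and wakeword detectors, and all names are hypothetical.

```python
# Hypothetical two-stage sketch of the claimed flow; toy detectors only.
from collections import deque
from typing import Deque, List

class LowPowerStage:
    def __init__(self, maxlen: int = 16) -> None:
        self.buffer: Deque[str] = deque(maxlen=maxlen)  # first memory buffer

    def process(self, frame: str) -> bool:
        self.buffer.append(frame)
        return frame != ""  # toy voice-activity test: non-silent frame

class MediumPowerStage:
    def __init__(self, wakeword: str) -> None:
        self.wakeword = wakeword
        self.active = False  # stays in standby until activated

    def activate(self, buffered: List[str]) -> bool:
        # Activated only by the low-power stage's switch signal; scans the
        # buffered audio handed over by the first stage for the wakeword.
        self.active = True
        return any(self.wakeword in frame for frame in buffered)

low = LowPowerStage()
medium = MediumPowerStage(wakeword="alexa")
for frame in ["", "", "hey", "alexa play music"]:
    if low.process(frame):  # speech detected: first switch signal
        if medium.activate(list(low.buffer)):
            print("wakeword detected; forwarding audio to language processing")
            break
```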
2,600
10,433
10,433
15,587,496
2,685
An apparatus, according to an exemplary aspect of the present disclosure, includes, among other things, a vehicle body member and at least one badge that identifies at least one service provider. The badge comprises an applique that is mounted to the vehicle body member. A control activates and deactivates illumination of the badge based on at least one of a driver or user input. A method according to an exemplary aspect of the present disclosure includes, among other things, mounting at least one badge to a vehicle body member, the badge comprising an applique that includes one or more identification logos, and illuminating at least one identification logo based on at least one of a driver or user input.
1. A method, comprising: providing a vehicle body member that comprises a body panel or trim for the body panel; molding at least one badge to form part of the vehicle body member, the badge comprising an applique that includes one or more identification logos; and illuminating at least one identification logo based on at least one of a driver or user input. 2. The method according to claim 1, including communicating driver and user input via one or more of a vehicle interface, a vehicle GPS system, a driver phone application, and/or a service requester phone application. 3. The method according to claim 1, including forming the badge in one or more pieces of vehicle trim. 4. The method according to claim 1, wherein the at least one identification logo comprises one or more service provider logos, and including changing color and/or blinking illumination of one or more of the logos to indicate one of a “for hire” or “not for hire” condition. 5. The method according to claim 4, wherein the one or more service provider logos comprise ride service logos, and including increasing illumination intensity of an active ride service logo when approaching a passenger pick-up point. 6. The method according to claim 1, including automatically deactivating illumination of all identification logos when the vehicle is not in service, or selectively deactivating illumination of all identification logos upon a driver deactivation request. 7. The method according to claim 1, including forming the applique as part of a film, printing the at least one identification logo on the film with ink to provide a printed film, and molding the printed film within a clear plastic material to provide a molded part. 8. The method according to claim 7, including positioning an illumination source on one side of the molded part and a clear cover on an opposite side of the molded part, connecting a circuit board to the illumination source, and attaching the illumination source and circuit board to a backing that includes an attachment interface to be selectively connected to a vehicle. 9. The method according to claim 1, including providing the at least one badge with a wireless communication capability, and wirelessly communicating the driver and user input via mobile phone applications to the at least one badge to control illumination of the at least one badge. 10. An apparatus, comprising: a vehicle body member that comprises a body panel or trim for the body panel; at least one badge that identifies at least one service provider, the badge comprising an applique that is mounted to the vehicle body member, and wherein the badge is molded to form part of the body panel and/or trim; and a control to activate and deactivate illumination of the badge based on at least one of a driver or user input. 11. The apparatus according to claim 10, wherein the at least one badge comprises a plurality of badges that each include one or more service provider logos, and wherein the control only illuminates the service provider logo associated with a current active service provider. 12. The apparatus according to claim 11, wherein the at least one service provider comprises a plurality of ride service providers each having a unique service provider logo, and wherein the at least one badge includes a badge wireless communication device, and wherein the user input comprises a ride request that is generated via a phone wireless communication device and that is communicated to the badge wireless communication device. 13. 
The apparatus according to claim 12, wherein the control changes color, intensity, and/or blinks illumination of the badge to indicate one of a “for hire” or “not for hire” condition. 14. The apparatus according to claim 10, wherein the vehicle body member comprises a vehicle trim piece. 15. The apparatus according to claim 10, wherein the at least one badge comprises a plurality of badges that are mounted at a plurality of different locations on the vehicle body member which comprises one or more of a side passenger door, a vehicle front panel, and a vehicle rear panel. 16. The apparatus according to claim 10, wherein the user input comprises a user request of an associated service, and wherein the control illuminates the badge based on a defined distance between a service provider vehicle and the user. 17. The apparatus according to claim 10, wherein the applique comprises a film having one or more service provider logos printed on the film, and wherein the control is configured to only illuminate the service provider logo for a current active service provider. 18. The apparatus according to claim 17, wherein the film having the service provider logos is encased within a clear plastic body. 19. The apparatus according to claim 18, including an illumination source positioned on one side of the plastic body and a clear cover positioned on an opposite side of the plastic body, and including a circuit board connected to the illumination source, and wherein the circuit board is attached to a backing that includes an attachment interface to be selectively connected to a vehicle. 20. The apparatus according to claim 10, wherein the driver or user input comprises a request for a desired service that is communicated via one or more of a vehicle interface, a vehicle GPS system, a driver phone application, and/or a service requester phone application. 21. The apparatus according to claim 10, wherein the control is configured to automatically deactivate illumination of the badge based on a specified geo-fenced area. 22. The apparatus according to claim 10, wherein the badge includes an attachment interface that is selectively attachable and removable from the body panel and/or trim to change or repair the at least one badge. 23. The apparatus according to claim 10, wherein, subsequent to deactivation of illumination of the badge, the badge is configured to re-illuminate when a control signal from the control or a driver device recognizes a recipient signal in a device of a destination source. 24. The method according to claim 1, including automatically deactivating illumination of the badge based on a specified geo-fenced area. 25. The method according to claim 1, including providing the badge with an attachment interface that is selectively attachable and removable from the body panel and/or trim to change or repair the at least one badge. 26. The method according to claim 1, wherein, subsequent to deactivation of illumination of the badge, the method includes re-illuminating the badge when a control signal from the control or a driver device recognizes a recipient signal in a device of a destination source.
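As a rough model of the badge control logic in these claims, the sketch below lights only the active provider's logo, raises intensity near the pick-up point, and forces the badge off inside a geo-fenced area. The 50-meter cutoff, the brightness values, and the class name `BadgeControl` are assumptions for illustration only.

```python
# Hypothetical badge illumination control; thresholds are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BadgeControl:
    active_provider: Optional[str] = None  # current active service provider
    in_geofenced_area: bool = False        # restricted area forces badge off

    def illumination(self, provider: str, distance_to_pickup_m: float) -> float:
        """Return brightness 0.0-1.0 for a given provider's logo."""
        if self.in_geofenced_area or provider != self.active_provider:
            return 0.0  # not for hire, wrong logo, or geo-fenced deactivation
        # Increase intensity when approaching the passenger pick-up point.
        return 1.0 if distance_to_pickup_m < 50.0 else 0.5

control = BadgeControl(active_provider="RideCo")
print(control.illumination("RideCo", distance_to_pickup_m=30.0))   # 1.0: near pick-up
print(control.illumination("RideCo", distance_to_pickup_m=500.0))  # 0.5: en route
print(control.illumination("OtherCo", distance_to_pickup_m=30.0))  # 0.0: inactive logo
```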
2,600
10,434
10,434
15,800,314
2,646
A method is disclosed where a user equipment (“UE”) determines that the UE is in a connected mode. The UE then determines a connection type of the connected mode and prioritizes a measurement report based on the connection type. The method may also be performed by an integrated circuit of the UE.
1. A method, comprising: at a user equipment (“UE”), determining that the UE is in a connected mode; determining a connection type of the connected mode; and prioritizing a measurement report based on the connection type. 2. The method of claim 1, wherein the connection type is one of a data transfer, a voice call, an emergency call or a signaling. 3. The method of claim 1, wherein the connected mode is a Radio Resource Control (“RRC”) connected mode. 4. The method of claim 1, wherein the measurement report is related to at least one of an inter-frequency handover, an intra-frequency handover, an Inter-Radio Access Technology (Inter-RAT) handover or Gap measurements. 5. The method of claim 1, further comprising: deprioritizing a further measurement report based on the connection type. 6. The method of claim 1, wherein the prioritized measurement report is placed in a group assigned to a first cell. 7. The method of claim 1, wherein the prioritizing is further based on a list of cells maintained by the UE. 8. The method of claim 1, further comprising: determining a connection sub-type of the connection type of the connected mode, wherein the prioritizing the measurement report is further based on the connection sub-type. 9. A user equipment (“UE”), comprising: a transceiver configured to connect to a base station of a network; and a processor configured to: determine that the UE is in a connected mode with the base station; determine a connection type of the connected mode; and prioritize a measurement report based on the connection type. 10. The UE of claim 9, wherein the connection type is one of a data transfer, a voice call, an emergency call or a signaling. 11. The UE of claim 9, wherein the connected mode is a Radio Resource Control (“RRC”) connected mode. 12. The UE of claim 9, wherein the measurement report is related to at least one of an inter-frequency handover, an intra-frequency handover, an Inter-Radio Access Technology (Inter-RAT) handover or Gap measurements. 13. The UE of claim 9, wherein the processor is further configured to: determine when a criterion is satisfied; and deprioritize a further measurement report based on the connection type. 14. The UE of claim 9, wherein the prioritized measurement report is placed in a group assigned to a first cell. 15. The UE of claim 9, wherein the prioritizing is further based on a list of cells maintained by the UE, wherein the list comprises a capability of each of the cells. 16. The UE of claim 15, wherein the capability is one of VoLTE or IMS-emergency-setup. 17. The UE of claim 9, wherein the processor is further configured to: determine a connection sub-type of the connection type of the connected mode, wherein the prioritizing the measurement report is further based on the connection sub-type. 18. An integrated circuit, comprising: circuitry to determine that a user equipment (“UE”) is in a connected mode; circuitry to determine a connection type of the connected mode; and circuitry to prioritize a measurement report based on the connection type. 19. The integrated circuit of claim 18, wherein the connection type is one of a data transfer, a voice call, an emergency call or a signaling. 20. The integrated circuit of claim 18, wherein the measurement report is related to at least one of an inter-frequency handover, an intra-frequency handover, an Inter-Radio Access Technology (Inter-RAT) handover or Gap measurements.
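The prioritization step can be pictured as a lookup from connection type to a preferred ordering of measurement report kinds, for example favoring Inter-RAT reports during an emergency call so a cell with IMS-emergency-setup capability can be reached sooner. The table below is entirely assumed; the claims do not specify any particular ordering.

```python
# Hypothetical priority table keyed by connection type; orderings invented.
from enum import Enum, auto
from typing import List

class ConnectionType(Enum):
    DATA_TRANSFER = auto()
    VOICE_CALL = auto()
    EMERGENCY_CALL = auto()
    SIGNALING = auto()

PRIORITY = {
    ConnectionType.EMERGENCY_CALL: ["inter-RAT", "inter-frequency", "intra-frequency"],
    ConnectionType.VOICE_CALL: ["intra-frequency", "inter-frequency", "inter-RAT"],
    ConnectionType.DATA_TRANSFER: ["intra-frequency", "inter-frequency", "inter-RAT"],
    ConnectionType.SIGNALING: ["intra-frequency", "inter-RAT", "inter-frequency"],
}

def prioritize(reports: List[str], conn: ConnectionType) -> List[str]:
    # Reports named in the table come first, in table order; unknown
    # report kinds sort to the end (effectively deprioritized).
    order = PRIORITY[conn]
    return sorted(reports, key=lambda r: order.index(r) if r in order else len(order))

print(prioritize(["intra-frequency", "inter-frequency", "inter-RAT"],
                 ConnectionType.EMERGENCY_CALL))
# ['inter-RAT', 'inter-frequency', 'intra-frequency']
```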
2,600
10,435
10,435
15,542,918
2,656
Examples associated with reading difficulty level based resource recommendation are disclosed. One example may involve instructions stored on a computer readable medium. The instructions, when executed on a computer, may cause the computer to obtain a set of candidate resources related to a source document. The candidate resources may be obtained based on content extracted from the source document. The instructions may also cause the computer to identify reading difficulty levels of members of the set of candidate resources. The instructions may also cause the computer to recommend a selected candidate resource to a user. The selected candidate resource may be recommended based on subject matter similarity between the selected candidate resource and the source document. The selected candidate resource may also be recommended based on reading difficulty level similarity between the selected candidate resource and the source document.
1. A non-transitory computer-readable medium storing computer-executable instructions that when executed by a computer cause the computer to: obtain, based on content extracted from a source document, a set of candidate resources related to the source document; identify reading difficulty levels of members of the set of candidate resources; and recommend a selected candidate resource to a user based on subject matter similarity between the selected candidate resource and the source document, and based on reading difficulty level similarity between the selected candidate resource and the source document. 2. The non-transitory computer-readable medium of claim 1, where the instructions further cause the computer to: extract the content from the source document. 3. The non-transitory computer-readable medium of claim 1, where the instructions for identifying reading difficulty levels of members of the set of candidate resources cause the computer to: determine at least one of a subject associated with the source document and a subject associated with a member of the set of candidate resources; select a specialized reading difficulty model based on at least one of the subject associated with the source document and the subject associated with the member of the set of candidate resources; and apply the specialized reading difficulty model to content in the member of the set of candidate resources to evaluate reading difficulty of the member of the set of candidate resources. 4. The non-transitory computer-readable medium of claim 1, where the instructions for identifying reading difficulty levels of members of the set of candidate resources further cause the computer to: apply a generic reading difficulty model to content in the member of the set of candidate resources to evaluate reading difficulty of the member of the set of candidate resources. 5. The non-transitory computer-readable medium of claim 1, where the candidate resources are obtained from one or more of a search engine and a database. 6. The non-transitory computer-readable medium of claim 1, where subject matter similarity between the selected candidate resource and the source document is evaluated by: representing the selected candidate resource as a first feature vector; representing the source document as a second feature vector; and calculating similarity between the first feature vector and the second feature vector. 7. A system, comprising: a document acquisition module to obtain candidate resources based on a source document; a reading difficulty level module to generate reading difficulty scores for candidate resources; a subject matter similarity module to generate similarity scores between candidate resources and the source document; and a recommendation module to recommend a subset of the candidate resources based on the reading difficulty scores and the similarity scores. 8. The system of claim 7, where the reading difficulty level module is a member of a set of reading difficulty level modules, and where the set of reading difficulty level modules comprises a specialized reading difficulty level module to generate reading difficulty scores for candidate resources associated with specialized subject matter. 9. The system of claim 8, further comprising a subject identification module to control the specialized reading difficulty level module to generate a reading difficulty score for a candidate resource when the candidate resource is associated with the specialized subject matter. 10. 
The system of claim 7, further comprising a data store to store content difficulty data organized by grade level, and where the reading difficulty level module generates the reading difficulty scores based on the content difficulty data. 11. The system of claim 7, further comprising: a topic extraction module to extract topics from the source document, and where the document acquisition module obtains the candidate resources using the topics. 12. The system of claim 11, where the topic extraction module also extracts topics from the candidate resources, and where the subject matter similarity module generates the similarity scores for the candidate resources by comparing the topics extracted from the source document to the topics extracted from respective candidate resources. 13. The system of claim 7, further comprising: a preprocessing module to obtain a user query indicating a passage from the source document and to prepare the source document for processing by the document acquisition module. 14. A method, comprising: extracting content from a source document in response to a user interaction with the source document; obtaining, using the content, candidate resources related to the source document; evaluating the reading difficulty level of the candidate resources based on content extracted from the candidate resources; and presenting, to the user, a recommended candidate resource, where the recommended candidate resource is selected based on the reading difficulty level of the candidate resources, a reading difficulty level of the source document, the content of the source document, and content of the candidate resources. 15. The method of claim 14, where the reading difficulty level of a candidate resource is evaluated using one or more of a module designed to evaluate reading difficulty levels of documents relating to a specialized topic associated with the source document, a module designed to evaluate a reading difficulty level of documents having a specialized topic associated with the candidate resource, and a generic reading difficulty level evaluation module.
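Claim 6 evaluates subject matter similarity by representing the candidate resource and the source document as feature vectors and calculating similarity between them, and claim 1 combines that with reading difficulty level similarity. Below is a minimal sketch of one way this could work, assuming simple term-frequency vectors, cosine similarity, and an equal weighting of the two scores; the function names, the 0.5/0.5 weights, and the level-gap normalization are illustrative assumptions, not details from the application.

from collections import Counter
import math

def feature_vector(text):
    # Represent a document as a bag-of-words term-frequency vector (claim 6).
    return Counter(text.lower().split())

def cosine_similarity(vec_a, vec_b):
    # Similarity between the first and second feature vectors.
    dot = sum(vec_a[t] * vec_b[t] for t in vec_a if t in vec_b)
    norm_a = math.sqrt(sum(v * v for v in vec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(source_text, source_level, candidates, max_level_gap=12.0):
    # candidates: list of (text, reading_difficulty_level) tuples.
    src_vec = feature_vector(source_text)
    scored = []
    for text, level in candidates:
        subject_sim = cosine_similarity(src_vec, feature_vector(text))
        # Reading difficulty level similarity: closer levels score higher.
        level_sim = 1.0 - min(abs(level - source_level) / max_level_gap, 1.0)
        # Equal weighting of the two criteria is an assumption.
        scored.append((0.5 * subject_sim + 0.5 * level_sim, text))
    return max(scored)[1] if scored else None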
2,600
10,436
10,436
14,788,552
2,611
A system for building information modeling includes a plurality of 2D documentation sets associated with a building and a 3D model associated with the building, the 3D model including a plurality of user-selectable controls, wherein each user-selectable control comprises an icon and an associated balloon that is generated when the icon is selected. The balloon for each user-selectable control comprises a first selection control that causes one of the plurality of 2D documentation sets to be shown after an animation sequence, and a second selection control that causes a transition to a 3D model having an overlay of the 2D documentation sets.
1. A system for building information modeling comprising: a plurality of 2D documentation sets associated with a building; a 3D model associated with the building, the 3D model including a plurality of user-selectable controls operating on a processor, wherein each user-selectable control comprises an icon and an associated balloon that is generated by the processor when the icon is selected; the balloon for each user-selectable control comprising a first selection control that causes the processor to generate an animation sequence that ends with one of the plurality of 2D documentation sets; and the balloon for each user-selectable control comprising a second selection control that causes the processor to generate an animation sequence that ends with a view of the 3D model having an overlay of one of the 2D documentation sets. 2. The system of claim 1 further comprising a 2D documentation set system operating on the processor that is configured to generate one or more user controls to allow a user to edit one of the plurality of 2D documentation sets while maintaining an associated user-selectable control for the edited 2D documentation set in the 3D model. 3. The system of claim 1 further comprising a 3D model system with embedded controls operating on the processor that is configured to generate one or more user controls to allow a user to edit the 3D model while maintaining the plurality of associated user-selectable controls in the 3D model. 4. The system of claim 1 further comprising a walking control that receives a user-entered location within the 3D model and that causes the processor to generate an animation sequence that simulates walking through the 3D model from a starting location to the user-entered location. 5. The system of claim 1 further comprising a touchscreen movement control that receives a user-entered input and that causes the processor to generate an animation sequence that simulates walking through the 3D model from a starting location as a function of the user-entered input. 6. The system of claim 5 wherein the touchscreen movement control further comprises: a first circle icon associated with a rest state; and a second circle icon encircling the first circle icon associated with a plurality of first movement states. 7. The system of claim 6 wherein the touchscreen movement control further comprises a third area outside of the second circle icon associated with a plurality of second movement states. 8. The system of claim 1 further comprising a transition animation sequence system operating on the processor and configured to generate one or more user controls to allow a user to generate an animation transition from one of the 2D documentation data sets to the 3D model. 9. The system of claim 1 further comprising a model license system operating on the processor and configured to transmit the plurality of 2D documentation sets and the 3D model in a format compatible with a viewer system to a predetermined licensed processor. 10. 
A method for modeling building information comprising: generating a plurality of 2D documentation sets associated with a building using a processor; generating a 3D model associated with the building using the processor, the 3D model including a plurality of user-selectable controls operating on the processor, wherein each user-selectable control comprises an icon and an associated balloon that is generated by the processor when the icon is selected; generating a first selection control for the balloon for each user-selectable control that causes the processor to generate an animation sequence that ends with one of the plurality of 2D documentation sets; and generating a second selection control for the balloon for each user-selectable control that causes the processor to generate an animation sequence that ends with a view of the 3D model having an overlay of one of the 2D documentation sets. 11. The method of claim 10 further comprising generating one or more user controls to allow a user to edit one of the plurality of 2D documentation sets using the processor while maintaining an associated user-selectable control for the edited 2D documentation set in the 3D model. 12. The method of claim 10 further comprising generating one or more user controls to allow a user to edit the 3D model while maintaining the plurality of associated user-selectable controls in the 3D model. 13. The method of claim 10 further comprising receiving a user-entered location within the 3D model and generating an animation sequence that simulates walking through the 3D model from a starting location to the user-entered location. 14. The method of claim 10 further comprising receiving a user-entered input and generating an animation sequence that simulates walking through the 3D model from a starting location as a function of the user-entered input. 15. The method of claim 14 further comprising: generating a first circle icon associated with a rest state; and generating a second circle icon encircling the first circle icon associated with a plurality of first movement states. 16. The method of claim 15 further comprising designating a third area outside of the second circle icon associated with a plurality of second movement states. 17. The method of claim 10 further comprising generating one or more user controls to allow a user to generate an animation transition from one of the 2D documentation data sets to the 3D model. 18. The method of claim 10 further comprising transmitting the plurality of 2D documentation sets and the 3D model in a format compatible with a viewer system to a predetermined licensed processor. 19. 
In a system for building information modeling having a plurality of 2D documentation sets associated with a building, a 3D model associated with the building, the 3D model including a plurality of user-selectable controls operating on a processor, wherein each user-selectable control comprises an icon and an associated balloon that is generated by the processor when the icon is selected, the balloon for each user-selectable control comprising a first selection control that causes the processor to generate an animation sequence that ends with one of the plurality of 2D documentation sets, the balloon for each user-selectable control comprising a second selection control that causes the processor to generate an animation sequence that ends with a view of the 3D model having an overlay of one of the 2D documentation sets, a 2D documentation set system operating on the processor that is configured to generate one or more user controls to allow a user to edit one of the plurality of 2D documentation sets while maintaining an associated user-selectable control for the edited 2D documentation set in the 3D model, a 3D model system with embedded controls operating on the processor that is configured to generate one or more user controls to allow the user to edit the 3D model while maintaining the plurality of associated user-selectable controls in the 3D model, a walking control that receives a user-entered location within the 3D model and that causes the processor to generate an animation sequence that simulates walking through the 3D model from a starting location to the user-entered location, a touchscreen movement control that receives a user-entered input and that causes the processor to generate an animation sequence that simulates walking through the 3D model from the starting location as a function of the user-entered input, wherein the touchscreen movement control includes a first circle icon associated with a rest state, a second circle icon encircling the first circle icon associated with a plurality of first movement states and a third area outside of the second circle icon associated with a plurality of second movement states, a transition animation sequence system operating on the processor and configured to generate one or more user controls to allow the user to generate an animation transition from one of the 2D documentation data sets to the 3D model, and a model license system operating on the processor and configured to transmit the plurality of 2D documentation sets and the 3D model in a format compatible with a viewer system to a predetermined licensed processor, a method comprising: generating the plurality of 2D documentation sets associated with the building using the processor; generating the 3D model associated with the building using the processor, the 3D model including the plurality of user-selectable controls operating on the processor, wherein each user-selectable control comprises the icon and the associated balloon that is generated by the processor when the icon is selected; generating the first selection control for the balloon for each user-selectable control that causes the processor to generate the animation sequence that ends with one of the plurality of 2D documentation sets; generating the second selection control for the balloon for each user-selectable control that causes the processor to generate the animation sequence that ends with the view of the 3D model having the overlay of one of the 2D documentation sets; generating the one or more user controls 
to allow the user to edit one of the plurality of 2D documentation sets using the processor while maintaining the associated user-selectable control for the edited 2D documentation set in the 3D model; generating the one or more user controls to allow the user to edit the 3D model while maintaining the plurality of associated user-selectable controls in the 3D model; receiving the user-entered location within the 3D model and generating the animation sequence that simulates walking through the 3D model from the starting location to the user-entered location; receiving the user-entered input and generating the animation sequence that simulates walking through the 3D model from the starting location as a function of the user-entered input; generating the first circle icon associated with the rest state; generating the second circle icon encircling the first circle icon associated with the plurality of first movement states; designating the third area outside of the second circle icon associated with the plurality of second movement states; generating the one or more user controls to allow the user to generate the animation transition from one of the 2D documentation data sets to the 3D model; and transmitting the plurality of 2D documentation sets and the 3D model in the format compatible with the viewer system to the predetermined licensed processor.
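Claims 6-7 and 15-16 describe a touchscreen movement control built from a first circle icon (rest state), a second circle icon encircling it (first movement states), and the area outside the second circle (second movement states). A minimal sketch of that zone classification follows, assuming Euclidean distance from the control's center and hypothetical radii; the application gives no dimensions and does not say how the movement states map to motion, so the walk/run comments are only one plausible reading.

import math

# Hypothetical radii; the application does not specify dimensions.
INNER_RADIUS = 20.0   # first circle icon: rest state
OUTER_RADIUS = 60.0   # second circle icon: boundary of first movement states

def classify_touch(touch_x, touch_y, center_x, center_y):
    # Map a touch point to one of the three zones of claims 6-7.
    distance = math.hypot(touch_x - center_x, touch_y - center_y)
    if distance <= INNER_RADIUS:
        return "rest"              # inside the first circle icon
    if distance <= OUTER_RADIUS:
        return "first_movement"    # ring between the circles, e.g. walking
    return "second_movement"       # area outside the second circle, e.g. running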
2,600
10,437
10,437
15,720,735
2,624
A human interface technique is disclosed for industrial automation systems. The technique allows for visualizations to be shared between devices, such as human machine interfaces, thin client interfaces, and so forth. The initial access to the visualization content may be based upon policies stored in a visualization manager, such as user identification, user role, user or device location, event triggers, and so forth. Once the user has access to the visualization content, it may be shared with other users in accordance with the policies, or in at least partial circumvention of the policies.
1. A system comprising: a visualization manager that, in operation, communicates with a thin client HMI to permit the thin client HMI to access and display a visualization from an industrial automation visualization source of a controlled machine or process, wherein the visualization manager is configured to permit access to the visualization by a first user of a first thin client HMI, and thereafter to permit sharing of the visualization by the first user with at least one second user of a second HMI in real or near-real time during operation of the controlled machine or process. 2. The system of claim 1, wherein the visualization manager is configured to permit multicasting of the visualization by the first user of the first thin client HMI to multiple other HMIs by permitting access to the same visualization by the multiple other HMIs. 3. The system of claim 2, wherein the visualization manager is configured to permit the user of the first thin client HMI to select a group of users of the multiple other HMIs for multicasting. 4. The system of claim 3, wherein the group is selected based upon a common role and/or location of the users of the group. 5. The system of claim 1, wherein once permitted by the visualization manager, the visualization is provided by the industrial automation visualization source to the first thin client HMI and from the first thin client HMI to the second HMI. 6. The system of claim 1, wherein the industrial automation visualization source comprises an automation controller, and during sharing of the visualization only one of the first and second users may interact with the respective HMI for control of the controlled machine or process via the automation controller. 7. The system of claim 1, wherein access by the first user is based upon policies stored in the visualization manager, and wherein access by the second user avoids at least one of the policies for access by the second user. 8. The system of claim 1, wherein access by the second user is based upon interaction of the second user with the second HMI to accept the access. 9. The system of claim 1, wherein the first thin client HMI and the second HMI are configured to share visualizations with one another under permission from the visualization manager. 10. The system of claim 1, wherein the first thin client HMI is configured to share less than the entire visualization with the second HMI. 11. The system of claim 1, wherein the shared visualization comprises voice and/or camera data. 12. The system of claim 1, wherein the second HMI is a thin client HMI. 13. 
A system comprising: an industrial automation visualization source associated with a controlled machine or process; a first thin client HMI configured to display a visualization from the industrial automation visualization source in real or near-real time during operation of the controlled machine or process; a second thin client HMI configured to display at least a portion of the visualization from the industrial automation visualization source in real or near-real time during operation of the controlled machine or process; and a visualization manager that, in operation, communicates with the first and second thin client HMIs to permit the first thin client HMI to access and display the visualization from the industrial automation visualization source of a controlled machine or process, and to permit the first thin client HMI to share the visualization with the second thin client HMI in real or near-real time during operation of the controlled machine or process. 14. The system of claim 13, wherein the visualization manager is configured to permit multicasting of the visualization by the first thin client HMI to multiple other HMIs by permitting access to the same visualization by the multiple other HMIs. 15. The system of claim 14, wherein the visualization manager is configured to permit a user of the first thin client HMI to select a group of users of the multiple other HMIs for multicasting. 16. The system of claim 15, wherein the group is selected based upon a common role and/or location of the users of the group. 17. The system of claim 13, wherein the industrial automation visualization source comprises an automation controller, and during sharing of the visualization only one of the first and second thin client HMIs may accept interaction by a user for control of the controlled machine or process via the automation controller. 18. The system of claim 13, wherein the first thin client HMI is configured to share less than the entire visualization with the second thin client HMI. 19. The system of claim 13, wherein the first thin client HMI and the second thin client HMI are configured to share visualizations with one another under permission from the visualization manager. 20. The system of claim 13, wherein the shared visualization comprises voice and/or camera data.
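The abstract describes initial access governed by policies stored in the visualization manager (user identification, role, location, event triggers), after which a user may share the visualization with others, possibly in partial circumvention of those policies (claim 7). A minimal sketch of that gate follows, assuming a role-based policy table and an in-memory session registry; the class layout and field names are illustrative, not taken from the filing.

class VisualizationManager:
    def __init__(self, policies):
        # policies: dict mapping visualization id -> set of permitted roles.
        self.policies = policies
        self.active_sessions = {}  # visualization id -> set of user ids

    def request_access(self, user_id, role, viz_id):
        # Initial access is checked against stored policies (e.g. user role).
        if role in self.policies.get(viz_id, set()):
            self.active_sessions.setdefault(viz_id, set()).add(user_id)
            return True
        return False

    def share(self, owner_id, recipient_ids, viz_id):
        # Per claim 7, sharing can bypass the per-recipient policy check,
        # provided the sharer already holds access to the visualization.
        if owner_id not in self.active_sessions.get(viz_id, set()):
            return False
        # Adding several recipients at once models multicasting (claim 2).
        self.active_sessions[viz_id].update(recipient_ids)
        return True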
2,600
10,438
10,438
15,567,074
2,656
The gain of an amplifier in a receiver operating in a cellular communication system is controlled by determining one or more gain variability metrics, which are then used to produce first and second threshold values. A frequency difference between a current carrier frequency and a target carrier frequency is ascertained and then compared to the threshold values. Target gain setting production is based on comparison results: If the frequency difference is larger than the first threshold, a first automatic gain control algorithm is performed; if the frequency difference is smaller than the first threshold and larger than the second threshold, a second automatic gain control algorithm is performed, wherein the second automatic gain control algorithm uses a current gain setting as a starting point; and if the frequency difference is smaller than both the first and second thresholds, the current gain setting is used as the target gain setting.
1. A method of controlling gain of an amplifier in a receiver operating in a cellular communication system, the method comprising: ascertaining a frequency difference between a current carrier frequency and a target carrier frequency; comparing the frequency difference to a first threshold value; in response to satisfaction of first criteria that include the frequency difference being larger than the first threshold value, performing a first automatic gain control algorithm to produce a target gain setting; in response to satisfaction of second criteria that include the frequency difference being smaller than the first threshold, performing a second automatic gain control algorithm to produce the target gain setting; and using the target gain setting to control gain of the amplifier. 2. The method of claim 1, wherein the second criteria further include the frequency difference being larger than a second threshold value. 3. The method of claim 2, comprising: using one or more gain variability metrics to produce the second criteria. 4. The method of claim 2, comprising: in response to satisfaction of third criteria that include the frequency difference being smaller than both the first and second thresholds, using the current gain setting as the target gain setting. 5. The method of claim 1, wherein the second automatic gain control algorithm uses a current gain setting as a starting point. 6. The method of claim 1, comprising: determining one or more gain variability metrics. 7. The method of claim 6, wherein determining one or more gain variability metrics comprises one or more of: determining a current degree of coverage of the receiver; determining whether the current carrier frequency and the target carrier frequency are within a downlink system bandwidth of a same cell of the cellular communication system; determining whether a source cell and a target cell are associated with each other, wherein the source cell is transmitting on the current carrier frequency and the target cell is transmitting on the target carrier frequency; and determining propagation conditions of a signal reaching the receiver. 8. The method of claim 6, comprising: using one or more gain variability metrics to produce the first criteria. 9. The method of claim 8, wherein using the one or more gain variability metrics to produce the first criteria comprises: using a channel model and the one or more gain variability metrics to produce the first criteria, or using static information and the one or more gain variability metrics to produce the first criteria. 10. (canceled) 11. The method of claim 8, wherein using the one or more gain variability metrics to produce the first criteria comprises: ascertaining whether historical gain variability data is available; and in response to historical gain variability data being available, using the historical gain variability data and the one or more gain variability metrics to produce the first criteria. 12-13. (canceled) 14. 
An apparatus for controlling gain of an amplifier in a receiver operating in a cellular communication system, the apparatus comprising: circuitry configured to ascertain a frequency difference between a current carrier frequency and a target carrier frequency; circuitry configured to compare the frequency difference to a first threshold value; circuitry configured to perform a first automatic gain control algorithm to produce a target gain setting in response to satisfaction of first criteria that include the frequency difference being larger than the first threshold value; circuitry configured to perform a second automatic gain control algorithm to produce the target gain setting in response to satisfaction of second criteria that include the frequency difference being smaller than the first threshold; and circuitry configured to use the target gain setting to control gain of the amplifier. 15-27. (canceled) 28. A receiver comprising: an amplifier; and the apparatus of claim 14 arranged to control the amplifier. 29. A User Equipment (UE) device comprising the receiver of claim 28. 30. The User Equipment device of claim 29, wherein the User Equipment device is a Machine Type Communication (MTC) device. 31. The method of claim 1, wherein the first automatic gain control algorithm is a robust automatic gain control algorithm that does not rely on any assumptions regarding received power. 32. The method of claim 1, wherein the second automatic gain control algorithm takes less time to produce the target gain setting than does the first automatic gain control algorithm. 33. The method of claim 1, wherein the second automatic gain control algorithm uses an existing gain setting as a starting point from which further adjustments are made to produce the target gain setting. 34. The method of claim 33, wherein the first automatic gain control algorithm does not rely on any assumptions regarding received power when the first automatic gain control algorithm is started.
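The abstract spells out a three-way selection: a first (robust) AGC algorithm when the frequency difference exceeds the first threshold, a second (faster) algorithm seeded with the current gain when the difference falls between the two thresholds, and reuse of the current gain when it is below both. A minimal sketch of that selection logic follows, with the two algorithms passed in as callables; the parameter names and units are assumptions, not details from the application.

def select_target_gain(freq_diff_hz, current_gain,
                       first_threshold_hz, second_threshold_hz,
                       full_agc, incremental_agc):
    if freq_diff_hz > first_threshold_hz:
        # Large retune: run the robust algorithm that makes no
        # assumptions about received power (claim 31).
        return full_agc()
    if freq_diff_hz > second_threshold_hz:
        # Moderate retune: faster algorithm that starts from the
        # current gain setting (claims 5 and 32-33).
        return incremental_agc(starting_gain=current_gain)
    # Small retune: reuse the current gain setting (claim 4).
    return current_gain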
2,600
10,439
10,439
14,209,487
2,668
Provided are a system, method, and computer readable storage medium in which data is received from a dental imaging system. The received data is analyzed to adjust one or more imaging parameters of the dental imaging system.
1. A method for using a dental imaging system, the method comprising: receiving data from the dental imaging system; and analyzing, via a processor, the received data to adjust one or more imaging parameters of the dental imaging system. 2. The method of claim 1, wherein: the received data corresponds to measured output variables of a feedback control mechanism; and the one or more imaging parameters correspond to manipulated variables of the feedback control mechanism, wherein the one or more imaging parameters are adjusted iteratively in real-time via the feedback control mechanism by reducing tracking error between measured and targeted values of the received data. 3. The method of claim 1, wherein the dental imaging system has an illumination source that illuminates one or more teeth at an illumination level, the method further comprising: adjusting the illumination level, based on analyzing at least one of reflectivity and distribution of pixels corresponding to grayscales, in one or more images of the one or more teeth. 4. The method of claim 1, wherein an exposure parameter of the dental imaging system is a function of an illumination level and an integration time, the method further comprising: adjusting the integration time of the dental imaging system based on analyzing one or more images of one or more teeth acquired by the dental imaging system. 5. The method of claim 1, wherein the dental imaging system acquires images at a frame rate, the method further comprising: adjusting the frame rate of the dental imaging system, based on analyzing movements during acquisition of the images. 6. The method of claim 1, further comprising: adjusting a frequency of a signal used for imaging one or more teeth in the dental imaging system, based on analyzing one or more images of the one or more teeth. 7. The method of claim 1, wherein the dental imaging system has a mirror coupled to a heating element that controls temperature of the mirror, the method further comprising: adjusting the heating element of the mirror to raise or lower the temperature, based on measuring fogging in the dental imaging system. 8. The method of claim 1, the method further comprising: adjusting an image cropping window, based on determining an area of interest in one or more images of one or more teeth acquired by the dental imaging system. 9. The method of claim 1, the method further comprising: adjusting gain of an imaging sensor of the dental imaging system, based on determining whether one or more images of one or more teeth acquired by the dental imaging system lie within a dynamic range of the imaging sensor. 10. The method of claim 1, the method further comprising: adjusting spatial resolution of the dental imaging system, based on a quality measure of one or more images of one or more teeth acquired by the dental imaging system. 11. A control system for controlling a scanning wand, the control system comprising: a measurement component that measures an output variable; a manipulated variable adjustment component that adjusts one or more imaging parameters based on a tracking error between a measured value of the output variable and a targeted value of the output variable; and a tracking error adjustment component that determines the tracking error. 12. The control system of claim 11, wherein: the manipulated variable adjustment component adjusts the one or more imaging parameters iteratively in real-time by reducing the tracking error between measured and targeted values of the output variable. 13. 
The control system of claim 11, wherein: the manipulated variable adjustment component adjusts an illumination level, based on analyzing at least one of reflectivity and distribution of pixels corresponding to grayscales, in one or more images of one or more teeth. 14. The control system of claim 11, wherein: the manipulated variable adjustment component adjusts an integration time based on analyzing one or more images of one or more teeth. 15. The control system of claim 11, wherein: the manipulated variable adjustment component adjusts a heating element of a mirror to raise or lower a temperature of the mirror, based on measurements of fogging. 16. A computer readable storage medium for using a dental imaging system, wherein code stored in the computer readable storage medium when executed by a processor causes operations, the operations comprising: receiving data from the dental imaging system; and analyzing the received data to adjust one or more imaging parameters of the dental imaging system. 17. The computer readable storage medium of claim 16, wherein: the received data corresponds to measured output variables of a feedback control mechanism; and the one or more imaging parameters correspond to manipulated variables of the feedback control mechanism, wherein the one or more imaging parameters are adjusted iteratively in real-time via the feedback control mechanism by reducing tracking error between measured and targeted values of the received data. 18. The computer readable storage medium of claim 16, wherein the dental imaging system has an illumination source that illuminates one or more teeth at an illumination level, the operations further comprising: adjusting the illumination level, based on analyzing at least one of reflectivity and distribution of pixels corresponding to grayscales, in one or more images of the one or more teeth. 19. The computer readable storage medium of claim 16, wherein an exposure parameter of the dental imaging system is a function of an illumination level and an integration time, the operations further comprising: adjusting the integration time of the dental imaging system based on analyzing one or more images of one or more teeth acquired by the dental imaging system. 20. The computer readable storage medium of claim 16, wherein the dental imaging system has a mirror coupled to a heating element that controls temperature of the mirror, the operations further comprising: adjusting the heating element of the mirror to raise or lower the temperature, based on measuring fogging in the dental imaging system. 21. An imaging system, comprising: an illumination source that illuminates one or more teeth at an illumination level; an illumination source adjustment mechanism to adjust the illumination level, based on analyzing at least one of reflectivity and distribution of pixels corresponding to grayscales, in one or more images of one or more teeth. 22. A dental imaging system, comprising: a mirror; and a heating element, wherein the heating element is adjusted to raise or lower a temperature of the mirror, based on a measurement of an extent of fogging. 23. 
A dental imaging system, comprising: an illumination source that illuminates one or more teeth at an illumination level; an integration time adjustment mechanism to adjust integration time, wherein an exposure parameter of the dental imaging system is a function of the illumination level and the integration time, and wherein the integration time of the dental imaging system is adjusted based on analyzing one or more images of one or more teeth acquired by the dental imaging system. 24. A dental imaging system, comprising: an imaging sensor to acquire images at a frame rate; and a frame rate adjustment mechanism to adjust the frame rate, based on an analysis of movements during acquisition of the images by the imaging sensor. 25. A dental imaging system, comprising: a signal generator that generates a signal at a frequency; an imaging sensor to acquire one or more images of one or more teeth using the signal; and a signal frequency adjustment mechanism to adjust the frequency of the signal, based on analyzing the one or more images of the one or more teeth. 26. A dental imaging system, comprising: an imaging sensor to acquire one or more images of one or more teeth; and a gain adjustment mechanism to adjust a gain of the imaging sensor, based on determining whether the one or more images of one or more teeth acquired by the imaging sensor lie within a dynamic range of the imaging sensor.
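The feedback-control loop recited in claims 1 and 2 (adjust a manipulated variable iteratively so the tracking error between measured and targeted values of an output variable shrinks) can be sketched in a few lines. The following Python snippet is a minimal illustration under assumptions, not the claimed implementation: it assumes the measured output variable is the mean grayscale of a frame, and the target value and proportional gain are arbitrary.

import numpy as np

# Minimal proportional feedback loop (illustrative): the measured output
# variable is the mean grayscale of the acquired frame, and the
# manipulated variable is the illumination level.
TARGET_MEAN_GRAY = 128.0  # assumed target for 8-bit pixels
GAIN = 0.001              # assumed proportional gain

def adjust_illumination(frame: np.ndarray, illumination: float) -> float:
    """Return an updated illumination level that reduces the tracking error."""
    measured = frame.mean()                       # measured output variable
    tracking_error = TARGET_MEAN_GRAY - measured  # target minus measurement
    # Iterative real-time update: nudge the manipulated variable so the
    # tracking error shrinks on the next frame.
    illumination += GAIN * tracking_error
    return float(np.clip(illumination, 0.0, 1.0))

# A dark frame (mean 64) drives the illumination level upward.
frame = np.full((480, 640), 64, dtype=np.uint8)
print(adjust_illumination(frame, illumination=0.5))  # 0.564

The same loop structure would apply to any of the other claimed manipulated variables (integration time, frame rate, sensor gain, and so on), only the measurement and clamping ranges change.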
2,600
10,440
10,440
14,050,004
2,641
A method and an apparatus for identifying a UE in an SAE network, and an MME are provided herein. The method includes: receiving an SAE-TMSI which is allocated to a UE that accesses an SAE network and includes at least: a pool-ID, an MME-ID, and a UE temporary identifier; and using the SAE-TMSI to temporarily identify the UE in the SAE network. The apparatus includes: a receiving unit and a temporary identifying unit. The MME includes a temporary identifier allocating unit. Moreover, a method for transmitting and allocating a temporary identifier, and a method for receiving and transmitting information according to the temporary identifier are disclosed herein.
1. A method for selecting a mobility management entity (MME), the method comprising: receiving, by a radio access network (RAN) entity, a radio resource control (RRC) message from a user equipment (UE), wherein the RRC message comprises a first resource pool identifier (pool-ID) and a first mobility management entity identifier (MME-ID), wherein the first pool-ID is unique in a public land mobile network (PLMN) and the first MME-ID is unique in a resource pool; and selecting, by the RAN entity, an MME in a system architecture evolved (SAE) network according to the first pool-ID and the first MME-ID. 2. The method according to claim 1, wherein the selecting the MME in the SAE network by the RAN entity comprises: selecting, by the RAN entity, a first MME identified by the first pool-ID and the first MME-ID. 3. The method according to claim 2, wherein the step of the RAN entity selecting the first MME identified by the first pool-ID and the first MME-ID comprises: determining, by the RAN entity, that the first pool-ID and the first MME-ID in the RRC message match with the first MME in a resource pool corresponding to the RAN entity, and based on the determination, selecting, by the RAN entity, the first MME. 4. The method according to claim 1, wherein the selecting the MME in the SAE network by the RAN entity comprises: selecting, by the RAN entity, a second MME in a resource pool corresponding to the RAN entity according to a principle. 5. The method according to claim 4, wherein the step of the RAN entity selecting the second MME in the resource pool corresponding to the RAN entity according to the principle comprises: determining, by the RAN entity, that the first pool-ID and the first MME-ID in the RRC message do not match with any MME in the resource pool corresponding to the RAN entity, and based on the determination, selecting the second MME according to the principle. 6. The method according to claim 4, wherein the principle comprises a load balancing principle. 7. A radio access network (RAN) entity, comprising: a receiver, configured to receive a radio resource control (RRC) message from a user equipment (UE), wherein the RRC message comprises a first resource pool identifier (pool-ID) and a first mobility management entity identifier (MME-ID), wherein the first pool-ID is unique in a public land mobile network (PLMN) and the first MME-ID is unique in a resource pool; and a processor, configured to select an MME in a system architecture evolved (SAE) network according to the first pool-ID and the first MME-ID. 8. The RAN entity according to claim 7, wherein the processor is further configured to select a first MME identified by the first pool-ID and the first MME-ID. 9. The RAN entity according to claim 8, wherein the processor is further configured to determine that the first pool-ID and the first MME-ID in the RRC message match with the first MME in a resource pool corresponding to the RAN entity, and select the first MME based on the determination. 10. The RAN entity according to claim 7, wherein the processor is further configured to select a second MME in a resource pool corresponding to the RAN entity according to a principle. 11. 
The RAN entity according to claim 10, wherein the processor is further configured to determine that the first pool-ID and the first MME-ID in the RRC message do not match with any MME in the resource pool corresponding to the RAN entity, and based on the determination, select the second MME corresponding to the RAN entity according to the principle. 12. The RAN entity according to claim 10, wherein the principle comprises a load balancing principle. 13. A method for selecting a mobility management entity (MME), comprising: generating, by a user equipment (UE), a radio resource control (RRC) message, when the UE accesses a system architecture evolved (SAE) network; and sending, by the UE, the RRC message to a radio access network (RAN) entity, wherein the RRC message comprises a first resource pool identifier (pool-ID) and a first mobility management entity identifier (MME-ID), wherein the first pool-ID is unique in a public land mobile network (PLMN) and the first MME-ID is unique in a resource pool; wherein the first pool-ID and the first MME-ID are used to select an MME in the SAE network by the RAN entity. 14. The method according to claim 13, wherein the first pool-ID and the first MME-ID are used to select a first MME identified by the first pool-ID and the first MME-ID. 15. The method according to claim 13, wherein a principle is used to select a second MME corresponding to the RAN entity. 16. The method according to claim 15, wherein the principle comprises a load balancing principle. 17. A user equipment (UE), comprising: a processor, configured to generate a radio resource control (RRC) message when the UE accesses a system architecture evolved (SAE) network; and a transmitter, configured to send the RRC message to a radio access network (RAN) entity, wherein the RRC message comprises a first resource pool identifier (pool-ID) and a first mobility management entity identifier (MME-ID), wherein the first pool-ID is unique in a public land mobile network (PLMN) and the first MME-ID is unique in a resource pool; wherein the first pool-ID and the first MME-ID are used to select an MME in the SAE network by the RAN entity. 18. The UE according to claim 17, wherein the first pool-ID and the first MME-ID are used to select a first MME identified by the first pool-ID and the first MME-ID. 19. The UE according to claim 17, wherein a principle is used to select a second MME corresponding to the RAN entity. 20. The UE according to claim 19, wherein the principle comprises a load balancing principle.
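The RAN-side selection logic of claims 1 through 6 reduces to a match-then-fallback rule: select the MME identified by the (pool-ID, MME-ID) pair carried in the RRC message if it exists in the resource pool, otherwise pick one according to a principle such as load balancing. A minimal Python sketch follows; the MME record layout and the least-load tiebreak are illustrative assumptions, not taken from the application.

from dataclasses import dataclass

@dataclass
class MME:
    pool_id: int   # unique within the PLMN
    mme_id: int    # unique within the resource pool
    load: float    # assumed load metric for the balancing principle

def select_mme(pool: list[MME], rrc_pool_id: int, rrc_mme_id: int) -> MME:
    """Select an MME for the (pool-ID, MME-ID) carried in the RRC message."""
    # First, try the MME explicitly identified by the RRC message.
    for mme in pool:
        if mme.pool_id == rrc_pool_id and mme.mme_id == rrc_mme_id:
            return mme
    # No match in this resource pool: fall back to a load-balancing principle.
    return min(pool, key=lambda m: m.load)

pool = [MME(pool_id=1, mme_id=1, load=0.7), MME(pool_id=1, mme_id=2, load=0.2)]
print(select_mme(pool, 1, 1).mme_id)  # 1: direct (pool-ID, MME-ID) match
print(select_mme(pool, 9, 9).mme_id)  # 2: least-loaded fallback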
2,600
10,441
10,441
15,867,530
2,633
Certain aspects of the present disclosure provide methods and apparatus for signaling a receive antenna mode to use during a beamforming training procedure. For example, an apparatus for wireless communications may generally include a processing system configured to generate a first frame having one or more beamforming training fields to be output for transmission to a wireless device using a directional transmit antenna mode and an indication whether the wireless device is to be in an Omni-directional receive antenna mode to receive the one or more beamforming training fields, and a first interface configured to output the first frame for transmission.
1. An apparatus for wireless communications, comprising: a processing system configured to generate a first frame having one or more beamforming training fields to be output for transmission to at least one wireless device using a directional transmit antenna mode and an indication whether the at least one wireless device is to be in an Omni-directional receive antenna mode to receive the one or more beamforming training fields; and a first interface configured to output the first frame for transmission. 2. The apparatus of claim 1, wherein the indication is provided in a header of the first frame. 3. The apparatus of claim 1, wherein: the processing system is further configured to perform a beamforming training procedure to establish a link between the apparatus and the at least one wireless device; and the first frame is output for transmission after establishing the link between the apparatus and the at least one wireless device. 4. The apparatus of claim 3, wherein: the beamforming training procedure comprises a sector level sweep (SLS) procedure; and the one or more beamforming training fields are output for transmission in directions based on results of the SLS procedure. 5. The apparatus of claim 1, wherein: the processing system is further configured to at least begin performing a sector level sweep (SLS) procedure to establish a link between the apparatus and the at least one wireless device; and the first frame is output for transmission after performing at least a portion of the SLS procedure without establishing a link between the apparatus and the at least one wireless device. 6. The apparatus of claim 1, wherein: the at least one wireless device comprises a first type of device having enhanced capability relative to a second type of device; and the indication is provided in a header field of the first frame, wherein the indication is decodable by both the first type of device and the second type of device. 7. The apparatus of claim 1, wherein: the at least one wireless device comprises a first type of device having enhanced capability relative to a second type of device; and the indication is provided in a header field of the first frame, wherein the indication is decodable by the first type of device but not by the second type of device. 8. The apparatus of claim 1, wherein: the indication is provided via one or more bits in a scrambler initialization field of the first frame. 9. An apparatus for wireless communications, comprising: a first interface configured to obtain, from a wireless device, a first frame; and a processing system configured, based on an indication in a first portion of the first frame, to cause the apparatus to switch to or stay in an Omni-directional receive antenna mode to obtain one or more beamforming training fields in a second portion of the first frame. 10. The apparatus of claim 9, wherein the first portion of the first frame comprises a header. 11. The apparatus of claim 9, wherein: the processing system is further configured to perform a beamforming training procedure to establish a link between the apparatus and the wireless device; and the first frame is obtained after establishing the link between the apparatus and the wireless device. 12. 
The apparatus of claim 9, wherein: the processing system is further configured to at least begin performing a sector level sweep (SLS) procedure to establish a link between the apparatus and the wireless device; and the first frame is obtained after performing at least a portion of the SLS procedure without establishing a link between the apparatus and the wireless device. 13. The apparatus of claim 9, wherein: the apparatus comprises a first type of device having enhanced capability relative to a second type of device; and the indication is obtained in a header field of the first portion of the first frame, wherein the indication is decodable by both the first type of device and the second type of device. 14. The apparatus of claim 9, wherein: the apparatus comprises a first type of device having enhanced capability relative to a second type of device; and the indication is obtained in a header field of the first portion of the first frame, wherein the indication is decodable by the first type of device but not by the second type of device. 15. The apparatus of claim 9, wherein: the indication is obtained via one or more bits in a scrambler initialization field of the first portion of the first frame. 16.-45. (canceled) 46. A wireless station, comprising: a processing system configured to generate a first frame having one or more beamforming training fields to be output for transmission to at least one wireless device using a directional transmit antenna mode and an indication whether the at least one wireless device is to be in an Omni-directional receive antenna mode to receive the one or more beamforming training fields; and a transmitter configured to transmit the first frame. 47.-49. (canceled) 50. The apparatus of claim 9, further comprising at least one antenna via which the first frame is obtained, wherein the apparatus is configured as a wireless station.
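Claims 8 and 15 place the receive-antenna-mode indication in one or more bits of a scrambler initialization field. A minimal Python sketch of such bit packing follows; the 7-bit field width and the choice of the least significant bit for the indication are assumptions made for illustration, not the actual field layout of any standard.

OMNI_RX_BIT = 0x01  # assumed: least significant bit of the scrambler init field

def build_scrambler_init(seed: int, omni_rx: bool) -> int:
    """Pack a 6-bit scrambler seed plus the omni-RX indication into 7 bits."""
    value = (seed & 0x3F) << 1       # upper six bits carry the seed
    if omni_rx:
        value |= OMNI_RX_BIT         # LSB signals omni-directional receive
    return value

def parse_omni_rx(scrambler_init: int) -> bool:
    """Receiver side: switch to or stay in omni-RX mode when the bit is set."""
    return bool(scrambler_init & OMNI_RX_BIT)

header_field = build_scrambler_init(seed=0b101010, omni_rx=True)
print(bin(header_field), parse_omni_rx(header_field))  # 0b1010101 True

Because the bit rides in a header field, a receiver can decode the indication before the training fields arrive and set its antenna mode in time to receive them.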
2,600
10,442
10,442
15,479,485
2,658
A transcription provider is presented with an audio recording created using one or more recording devices. A transcriptionist using proprietary computer software records at discrete intervals both the position of the audio playing for the transcriptionist and the position of the cursor in the document being typed, thereby creating both a completely typed document and an audio map. The completed document may be further processed such that each word is matched to its corresponding audio position using the information acquired from the audio map, and each matched word may then be placed into a separate document as a hyperlink containing meta-data that points to the exact matching audio position. By simultaneously tracking the progress of audio playback and the transcriptionist's progress within the document, an interactive version of the completed document can then be displayed.
1. A method for transcribing audio, comprising: delivering an audio recording to a transcription provider; providing the audio recording to a transcriptionist; transcribing, by the transcriptionist, the audio recording to create a transcription and recording both a position of the audio recording playing for the transcriptionist and a corresponding position of a cursor in the document being transcribed by the transcriptionist at discrete intervals to create an audio map; providing the transcription and the audio map to the transcription provider; and using the transcription and the audio map to create a final document in which each word is mapped at its corresponding audio position. 2. The method of claim 1, wherein the transcriptionist is an employee of the transcription provider. 3. The method of claim 1, wherein the transcriptionist is not an employee of the transcription provider. 4. The method of claim 1, wherein the transcriptionist is a stenographer listening to spoken language from the audio recording and converting spoken language to text using a stenograph. 5. The method of claim 1, wherein the transcriptionist is a speech-to-text software application together with hardware required to operate the software. 6. The method of claim 1, wherein after the final document has been created, placing the location of unintelligible words in the final document into a separate document with hyperlinks to the corresponding location in the final document. 7. The method of claim 1, wherein after the final document has been created, placing the location of a plurality of words from the final document into a separate document with hyperlinks to the corresponding location in the final document. 8. The method of claim 1, wherein a viewing tool allows the audio recording to be played while viewing the final document and a highlighting bar indicates the section of text in the final document corresponding to the location in the audio recording. 9. A system for transcribing audio, comprising: an audio recording provided to a transcriptionist; software for transcribing audio recordings, wherein the transcriptionist uses the software to transcribe the audio recording to create a transcription and records both a position of the audio recording playing for the transcriptionist and a corresponding position of a cursor in the document being transcribed by the transcriptionist at discrete intervals to create an audio map; and wherein the transcription and the audio map are used to create a final document in which each word is mapped at its corresponding audio position. 10. The system of claim 9, wherein the transcriptionist is a stenographer listening to spoken language from the audio recording and converting spoken language to text using a stenograph. 11. The system of claim 9, wherein the transcriptionist is a speech-to-text software application together with hardware required to operate the software. 12. The system of claim 9, wherein after the final document has been created, placing the location of unintelligible words in the final document into a separate document with hyperlinks to the corresponding location in the final document. 13. The system of claim 9, wherein after the final document has been created, placing the location of a plurality of words from the final document into a separate document with hyperlinks to the corresponding location in the final document.
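The audio map of claim 1 pairs audio positions with cursor positions at discrete intervals, and the final document is produced by mapping each word to its corresponding audio position. A minimal Python sketch of that mapping follows; the five-second sampling interval, the linear interpolation between map entries, and the use of character offsets for word positions are assumptions made for illustration.

import bisect

# Audio map: (audio position in seconds, cursor position in characters),
# sampled at assumed five-second intervals while the transcriptionist types.
audio_map = [(0.0, 0), (5.0, 42), (10.0, 88), (15.0, 131)]

def audio_position(char_offset: int) -> float:
    """Linearly interpolate the audio time for a character offset."""
    cursors = [c for _, c in audio_map]
    i = bisect.bisect_right(cursors, char_offset) - 1
    i = max(0, min(i, len(audio_map) - 2))          # clamp to a valid segment
    (t0, c0), (t1, c1) = audio_map[i], audio_map[i + 1]
    frac = (char_offset - c0) / (c1 - c0) if c1 != c0 else 0.0
    return t0 + frac * (t1 - t0)

def map_words(transcript: str) -> list[tuple[str, float]]:
    """Pair each word with the audio position of its first character."""
    mapped, offset = [], 0
    for word in transcript.split(" "):
        mapped.append((word, round(audio_position(offset), 2)))
        offset += len(word) + 1                     # advance past word + space
    return mapped

print(map_words("the quick brown fox"))
# [('the', 0.0), ('quick', 0.48), ('brown', 1.19), ('fox', 1.9)]

Each (word, time) pair is what a viewing tool would need to render the word as a hyperlink into the recording, or to move a highlighting bar in step with playback.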
2,600
10,443
10,443
14,943,421
2,621
A device for dexterous interaction in a virtual world is disclosed. The device includes a housing including a plurality of buttons and a plurality of vibration elements each associated with at least one of the plurality of buttons. An orientation sensor detects orientation of the housing, and a bearing is configured to allow the housing to freely rotate in a plurality of directions. A processor is in communication with the plurality of buttons, the plurality of vibration elements, and the orientation sensor. A transmitter/receiver unit is configured to receive data from the processor and configured to send data to and receive data from a central processing unit.
1. A device for dexterous interaction in a virtual world comprising: a housing including a plurality of buttons and a plurality of vibration elements each associated with at least one of the plurality of buttons; an orientation sensor for detecting movement data based on orientation of the housing, and a bearing configured to allow the housing to freely rotate in a plurality of directions; a processor in communication with the plurality of buttons, the plurality of vibration elements, and the orientation sensor, wherein the processor receives the movement data from the orientation sensor; a first transmitter/receiver unit configured to receive the movement data from the processor and configured to send and receive the movement data. 2. The device of claim 1, wherein the housing is configured to move with three degrees of freedom via the bearing. 3. The device of claim 1, wherein the housing is coupled with a 3-D mouse and feedback controller via the bearing. 4. The device of claim 1, wherein the movement data is used to manipulate a virtual hand in visualization software. 5. The device of claim 1, wherein manipulation of the housing and the plurality of buttons causes a corresponding movement of a virtual element in visualization software displayed via a display device. 6. The device of claim 1, wherein the orientation sensor comprises an accelerometer and a gyroscope. 7. A system for dexterous interaction in a virtual world comprising: a device including: a housing including a plurality of buttons and a plurality of vibration elements each associated with at least one of the plurality of buttons; an orientation sensor for detecting movement data based on orientation of the housing, and a bearing configured to allow the housing to freely rotate in a plurality of directions; a first processor in communication with the plurality of buttons, the plurality of vibration elements, and the orientation sensor; and a first transmitter/receiver unit configured to send and receive the movement data; and a computer in communication with the device, the computer including: a second processor in communication with a display device and a second transmitter/receiver unit, wherein the first transmitter/receiver unit is in communication with the second transmitter/receiver unit and sends the movement data to the second transmitter/receiver unit, and the display device displays images based on movement of the device and the movement data. 8. The system of claim 7, wherein a storage device includes a memory unit, the memory unit includes a plurality of virtual reality scenarios, the plurality of virtual reality scenarios are displayed on the display device, and a user interaction occurs with each one of the plurality of virtual reality scenarios based on movement of the device. 9. The system of claim 7, wherein the first transmitter/receiver unit is configured to wirelessly transmit and receive the movement data, and the second transmitter/receiver unit is configured to wirelessly transmit and receive the movement data. 10. The system of claim 7, wherein the plurality of buttons are aligned with a user's fingers, and engagement of the plurality of buttons by the user results in a corresponding proportional manipulation of virtual reality fingers in a virtual reality scenario displayed on the display device.
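Claims 5 and 10 describe proportional manipulation of a virtual hand from the housing orientation and button engagement. A minimal Python sketch follows; the yaw/pitch/roll representation, the 0-to-1 button pressure scale, and the five-finger curl vector are illustrative assumptions rather than details from the application.

from dataclasses import dataclass

@dataclass
class VirtualHand:
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0
    finger_curl: tuple[float, ...] = (0.0,) * 5  # 0 = open, 1 = fully curled

def update_hand(orientation: tuple[float, float, float],
                button_pressures: tuple[float, ...]) -> VirtualHand:
    """Map orientation-sensor data and button engagement to the virtual hand."""
    yaw, pitch, roll = orientation               # from accelerometer/gyroscope
    # Proportional manipulation: each finger's curl tracks how far the
    # corresponding button is pressed, clamped to the assumed 0-1 range.
    curls = tuple(max(0.0, min(1.0, p)) for p in button_pressures)
    return VirtualHand(yaw=yaw, pitch=pitch, roll=roll, finger_curl=curls)

hand = update_hand((0.3, -0.1, 0.0), (1.0, 0.5, 0.0, 0.0, 0.2))
print(hand.finger_curl)  # (1.0, 0.5, 0.0, 0.0, 0.2)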
2,600
10,444
10,444
15,605,147
2,644
Methods and systems for cellular device detection are presented. A wideband receiver is operable to acquire a block of digitized samples in an uplink frequency band. The wideband receiver is also operable to apply one or more computational kernels to the block of digitized samples, thereby determining a possible uplink transmission from a cellular device. The cellular device is confirmed when the bandwidth of the possible uplink transmission is verified and a cellular basestation, associated with the possible uplink transmission, is located.
1. A method for detecting a cellular device, the method comprising: acquiring a block of digitized samples in an uplink frequency band with a wideband receiver; applying one or more computational kernels to the block of digitized samples to determine a possible uplink transmission from the cellular device; determining the bandwidth of the possible uplink transmission; and detecting a cellular basestation associated with the possible uplink transmission. 2. The method of claim 1, wherein applying one or more computational kernels to the block of digitized samples comprises measuring a physical layer parameter of the possible uplink transmission. 3. The method of claim 1, wherein detecting a cellular basestation comprises decoding a downlink broadcast message. 4. The method of claim 1, wherein determining the bandwidth of the possible uplink transmission comprises estimating a center frequency of the possible uplink transmission. 5. The method of claim 1, wherein the possible uplink transmission corresponds to one or more cellular basestations. 6. The method of claim 5, wherein a list of cellular basestations is maintained in the wideband receiver. 7. The method of claim 6, wherein each entry in the list of cellular basestations is associated with a location. 8. The method of claim 6, wherein the list of cellular basestations is shared with another wideband receiver. 9. The method of claim 1, wherein the wideband receiver and the cellular device are traveling in a vehicle. 10. The method of claim 1, wherein the method comprises locating the wideband receiver such that digitized samples of an uplink transmission from the cellular device are acquired by the wideband receiver when the cellular device is being used by a vehicle operator. 11. A system comprising: a wideband receiver operable to determine a presence of a cellular device based on an uplink transmission and an associated downlink transmission; and a memory operable to store a list of cellular basestations based on location. 12. The system of claim 11, wherein the wideband receiver is operable to apply one or more computational kernels to a block of digitized samples of an uplink frequency band. 13. The system of claim 11, wherein the wideband receiver is operable to decode a downlink broadcast message. 14. The system of claim 11, wherein the wideband receiver is operable to determine the bandwidth of a possible uplink transmission. 15. The system of claim 11, wherein the wideband receiver is operable to estimate a center frequency of the possible uplink transmission. 16. The system of claim 11, wherein a list of cellular basestations is maintained in the wideband receiver. 17. The system of claim 16, wherein each entry in the list of cellular basestations is associated with a location and one or more uplink center frequencies. 18. The system of claim 16, wherein the list of cellular basestations is shared with another wideband receiver. 19. The system of claim 11, wherein the wideband receiver and the cellular device are traveling in a vehicle. 20. The system of claim 11, wherein the wideband receiver is located such that digitized samples of the uplink transmission are acquired by the wideband receiver when the cellular device is being used by a vehicle operator.
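Claims 1 and 4 describe applying computational kernels to a block of digitized samples to determine a possible uplink transmission and to estimate its center frequency and bandwidth. A minimal Python sketch of one plausible kernel, an FFT-based energy detector, follows; the sample rate, threshold factor, and treatment of the occupied bins as one contiguous transmission are illustrative assumptions, not details taken from the application.

import numpy as np

SAMPLE_RATE = 20e6       # assumed complex sample rate of the wideband receiver
THRESHOLD_FACTOR = 10.0  # assumed margin above the median noise floor

def detect_uplink(samples: np.ndarray):
    """Return (center_frequency_hz, bandwidth_hz), or None if nothing found."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(samples))) ** 2
    freqs = np.fft.fftshift(np.fft.fftfreq(len(samples), d=1.0 / SAMPLE_RATE))
    occupied = np.nonzero(spectrum > THRESHOLD_FACTOR * np.median(spectrum))[0]
    if occupied.size == 0:
        return None                          # no possible uplink transmission
    lo, hi = freqs[occupied[0]], freqs[occupied[-1]]
    return (lo + hi) / 2.0, hi - lo          # center frequency, bandwidth

# A complex tone at +2 MHz buried in noise is flagged near 2e6 Hz.
t = np.arange(4096) / SAMPLE_RATE
rng = np.random.default_rng(0)
samples = np.exp(2j * np.pi * 2e6 * t) + 0.1 * rng.standard_normal(4096)
print(detect_uplink(samples))

Verifying that the estimated bandwidth matches a known cellular uplink profile is what, per the abstract, confirms the detection before the associated basestation is located.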
2,600
10,445
10,445
15,495,095
2,651
Techniques are disclosed for overcoming communication lag between interactive operations among devices in a streaming session. According to the techniques, a first device streams video content to a second device, and an annotation is entered on a first frame being displayed at the second device and communicated back to the first device. Responsive to a communication that identifies the annotation, the first device may identify an element of video content from the first frame to which the annotation applies and determine whether the identified element is present in a second frame of video content currently displayed at the first device. If so, the first device may display the annotation with the second frame in a location where the identified element is present. If not, the first device may display the annotation via an alternate technique.
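The fallback behavior in this abstract (pin the annotation to the element if it is still visible, otherwise present it another way) can be summarized in a few lines. The sketch below is a minimal illustration under an assumed data model in which each frame records which element sits at each location; none of these structures are specified by the application.

```python
# Minimal sketch of the abstract's placement logic, under a hypothetical
# frame model: frame["elements_at"] maps a location to an element id, and
# frame["element_positions"] maps an element id to its current location.
def place_annotation(annotation, annotated_frame, current_frame):
    element_id = annotated_frame["elements_at"].get(annotation["location"])
    if element_id is None:
        # No identifiable element; draw the annotation where it was entered.
        return {"mode": "overlay", "location": annotation["location"]}

    new_location = current_frame["element_positions"].get(element_id)
    if new_location is not None:
        # Element still on screen: display the annotation at its new location.
        return {"mode": "pinned", "element": element_id, "location": new_location}

    # Element no longer visible: fall back to an alternate technique,
    # e.g. showing the annotation in a side panel.
    return {"mode": "side_panel", "element": element_id}
```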
1-26. (canceled) 27. A method, comprising: displaying video streamed from a distant device, responsive to operator control, annotating the streamed video at a local device, transmitting to the distant device a communication identifying the annotation and an identifier of the annotation's location in a currently-displayed frame at the distant device. 28. The method of claim 27, wherein the communication includes a frame identifier (ID) of a first frame buffered at the distant device and a location of the annotation in the first frame. 29. The method of claim 27, wherein the communication includes an object identifier representing an object in a first frame at the distant device corresponding to a currently-displayed frame at the local device. 30. The method of claim 27, wherein the communication identifies an object in the streamed video generated by an application executing in common on the local device and the distant device. 31. The method of claim 27, wherein the communication includes audio content identifying an object in the annotated streamed video. 32. The method of claim 27, wherein the communication identifying the annotation includes a type of the annotation, a size of the annotation, and at least one additional property of the annotation as defined by the operator of the local device. 33. The method of claim 27, further comprising: coding the annotated streamed video at the local device using a reference frame received with the streamed video prior to the transmitting. 34. The method of claim 27, further comprising: coding an annotated video frame in the received streamed video independently of other frames transmitted to the distant device. 35. An annotating device, comprising: a conference manager to process streamed video received from another device at the annotating device, a codec, having an input for the streamed video and an output for coded annotated video data, a display to display streamed video received at the annotating device, and a transceiver, having a transmitter for the coded annotated video data and a receiver for receiving communications from the other device, wherein, responsive to operator control identifying an annotation entered at the annotating device to a first frame in the streamed video, the conference manager transmits, from the transmitter, the annotated streamed video including a communication identifying the annotation and an identifier of the annotation's location in a currently-displayed frame at the other device. 36. The annotating device of claim 35, wherein the communication includes a frame identifier (ID) of a first frame buffered at the other device and a location of the annotation in the first frame. 37. The annotating device of claim 35, wherein the communication includes an object identifier representing an object in a first frame at the other device corresponding to a currently-displayed frame at the annotating device. 38. The annotating device of claim 35, wherein the communication identifies an object in the streamed video generated by an application executing in common on the annotating device and the other device. 39. The annotating device of claim 35, wherein the communication includes audio content identifying an object in the annotated streamed video. 40. The annotating device of claim 35, wherein the communication identifying the annotation includes a type of the annotation, a size of the annotation, and at least one additional property of the annotation as defined by the operator of the annotating device. 41. 
The annotating device of claim 35, wherein the codec is configured to: code the annotated streamed video at the annotating device using a reference frame received with the streamed video prior to the transmitting. 42. The annotating device of claim 35, wherein the codec is configured to: code an annotated video frame in the received streamed video independently of other frames transmitted to the other device. 43. A non-transitory computer readable medium having stored thereon program instructions that, when executed by a processing device, cause the device to perform a method, comprising: receiving streaming video content from a first terminal at a second terminal; and responsive to a communication identifying an annotation entered at the second terminal to a first frame of video content, transmitting to the first terminal a communication identifying the annotation and an identifier of the annotation's location in a currently-displayed frame at the first terminal. 44. The medium of claim 43, wherein the communication includes a frame identifier (ID) of a first frame buffered at the first terminal and a location of the annotation in the first frame. 45. The medium of claim 43, wherein the communication includes an object identifier representing an object in a first frame at the first terminal corresponding to a currently-displayed frame at the second terminal. 46. The medium of claim 43, wherein the communication identifies an object in the streamed video generated by an application executing in common on the first terminal and the second terminal. 47. The medium of claim 43, wherein the communication includes audio content identifying an object in the annotated streamed video. 48. The medium of claim 43, wherein the communication identifying the annotation includes a type of the annotation, a size of the annotation, and at least one additional property of the annotation as defined by the operator of the second terminal. 49. The medium of claim 43, wherein the program instructions that, when executed by a processing device, further cause the device to perform: coding of the annotated streamed video at the second terminal using a reference frame received with the streamed video prior to the transmitting. 50. The medium of claim 43, wherein the program instructions that, when executed by a processing device, further cause the device to perform: coding of an annotated video frame in the received streamed video independently of other frames transmitted to the first terminal. 51. A method of annotating data between terminals, comprising: decoding and displaying coded video received from a distant terminal, responsive to an annotation entered by a user, coding according to predictive coding techniques a displayed frame being annotated, the coding using a prediction reference stored by the distant terminal, and transmitting the coded frame and data representing the annotation to the distant terminal. 52. The method of claim 51, wherein the transmitting includes transmitting a frame identifier (ID) of a first frame buffered at the distant terminal and a location of the annotation in the first frame. 53. The method of claim 51, wherein the transmitting includes transmitting an object identifier representing an object in a first frame at the distant terminal corresponding to the displayed frame. 54. The method of claim 51, wherein the transmitting includes identifying an object in the coded video generated by an application executing in common on a local device and the distant terminal. 55. 
The method of claim 52, wherein the transmitting includes transmitting audio content identifying an object in the annotated coded video. 56. The method of claim 52, wherein the transmitting includes identifying the annotation, including a type of the annotation, a size of the annotation, and at least one additional property of the annotation as defined by the operator of a local device.
2,600
10,446
10,446
15,140,729
2,693
This invention provides a touch sensitive panel, comprising: a driving electrode layer including multiple driving electrodes being parallel to a first axis; a sensing electrode layer including multiple sensing electrodes being parallel to a second axis; and an elastic body layer sandwiched between the driving electrode layer and the sensing electrode layer, wherein an area occupied by the multiple driving electrodes is larger than 80% of an area of the touch sensitive panel.
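The defining limitation here is quantitative: the driving electrodes must occupy more than 80% of the panel area. A short check with made-up dimensions illustrates the arithmetic; the electrode count and sizes below are hypothetical, not values from the application.

```python
# Coverage check for the >80% limitation, with illustrative dimensions.
def coverage_ok(electrode_areas_mm2, panel_area_mm2, minimum=0.80):
    return sum(electrode_areas_mm2) / panel_area_mm2 > minimum

# e.g. 40 driving electrodes of 180 mm^2 each on an 8640 mm^2 panel:
# 7200 / 8640 = 0.833, which satisfies the limitation.
print(coverage_ok([180.0] * 40, 8640.0))  # True
```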
1. A touch sensitive panel, comprising: a driving electrode layer, including a plurality of driving electrodes being parallel to a first axis; a sensing electrode layer, including a plurality of sensing electrodes being parallel to a second axis; and an elastic body layer sandwiched between the driving electrode layer and the sensing electrode layer, wherein an area occupied by the plurality of driving electrodes is larger than 80% of an area of the touch sensitive panel. 2. The touch sensitive panel of claim 1, wherein the driving electrode layer further includes a plurality of dummy electrodes, wherein an area occupied by the plurality of driving electrodes and the plurality of dummy electrodes is larger than 80% of an area of the touch sensitive panel. 3. The touch sensitive panel of claim 2, wherein the plurality of dummy electrodes are connected to a DC potential. 4. The touch sensitive panel of claim 2, wherein the plurality of driving electrodes, the plurality of sensing electrodes, and the plurality of dummy electrodes are all connected to a touch sensitive processing device. 5. The touch sensitive panel of claim 1, wherein the elastic body layer includes at least one of the following structures: a homogeneous elastic body layer; a cylinder; an elliptical cylinder; a lump; a trapezoid lump; a round ramp; an oval ramp; and a wavy curve elastic body layer. 6. The touch sensitive panel of claim 1, wherein the elastic body layer includes a plurality of intervals and/or holes. 7. The touch sensitive panel of claim 1, further comprising: a transparent protection layer being adjacent to the driving electrode layer. 8. A touch sensitive screen, comprising: a transparent protection layer; a screen; and a touch sensitive panel sandwiched between the transparent protection layer and the screen, the touch sensitive panel comprising: a driving electrode layer, including a plurality of driving electrodes being parallel to a first axis; a sensing electrode layer, including a plurality of sensing electrodes being parallel to a second axis; and an elastic body layer sandwiched between the driving electrode layer and the sensing electrode layer, wherein an area occupied by the plurality of driving electrodes is larger than 80% of an area of the touch sensitive panel. 9. A touch sensitive electronic device, comprising: a touch sensitive screen, comprising: a transparent protection layer; a screen; and a touch sensitive panel sandwiched between the transparent protection layer and the screen, the touch sensitive panel comprising: a driving electrode layer, including a plurality of driving electrodes being parallel to a first axis; a sensing electrode layer, including a plurality of sensing electrodes being parallel to a second axis; and an elastic body layer sandwiched between the driving electrode layer and the sensing electrode layer, wherein an area occupied by the plurality of driving electrodes is larger than 80% of an area of the touch sensitive panel; and a touch sensitive processing device, configured to connect the plurality of driving electrodes and the plurality of sensing electrodes.
2,600
10,447
10,447
15,406,369
2,621
In accordance with embodiments of the present disclosure, an information handling system may include a processor and an interface configured to electronically interface between the processor and a display assembly, wherein the interface is configured to provide a legacy supply voltage to the display assembly and an alternate supply voltage other than the legacy supply voltage to the display assembly in lieu of or in addition to the legacy supply voltage, such that the interface is compatible with each of a first type of display assembly having a first type of voltage regulator tree that generates regulated output voltages from the legacy supply voltage and a second type of display assembly having a second type of voltage regulator tree that generates regulated output voltages from the alternate supply voltage.
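The compatibility argument is architectural: because the interface drives both a legacy rail and an alternate rail, either type of regulator tree finds the input it needs. The sketch below restates that selection logic; the rail voltages and the three display types are assumptions for illustration only, not details from the application.

```python
# Hedged sketch of the dual-rail compatibility idea. Voltages are made up.
LEGACY_RAIL_V = 3.3      # hypothetical legacy supply voltage
ALTERNATE_RAIL_V = 7.6   # hypothetical alternate (e.g. battery) voltage

def rails_for(display_type):
    """Which supply rails a given display assembly's regulator tree uses."""
    if display_type == "legacy_tree":     # first type: legacy voltage only
        return {"legacy": LEGACY_RAIL_V}
    if display_type == "alternate_tree":  # second type: alternate voltage only
        return {"alternate": ALTERNATE_RAIL_V}
    # A tree that regulates from both rails (claim 4) draws on both.
    return {"legacy": LEGACY_RAIL_V, "alternate": ALTERNATE_RAIL_V}
```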
1. An information handling system comprising: a processor; and an interface configured to electronically interface between the processor and a display assembly, wherein the interface is configured to provide a legacy supply voltage to the display assembly and an alternate supply voltage other than the legacy supply voltage to the display assembly in lieu of or in addition to the legacy supply voltage, such that the interface is compatible with each of a first type of display assembly having a first type of voltage regulator tree that generates regulated output voltages from the legacy supply voltage and a second type of display assembly having a second type of voltage regulator tree that generates regulated output voltages from the alternate supply voltage. 2. The information handling system of claim 1, further comprising a battery and wherein the alternate supply voltage comprises a voltage generated by the battery. 3. The information handling system of claim 1, wherein the second type of voltage regulator tree generates regulated output voltages from the alternate supply voltage but not the legacy supply voltage. 4. The information handling system of claim 1, wherein the second type of voltage regulator tree generates regulated output voltages from the alternate supply voltage and the legacy supply voltage. 5. An interface configured to electronically interface between a processor of an information handling system and a display assembly of the information handling system, wherein the interface comprises: a first output configured to generate a legacy supply voltage; and a second output configured to generate an alternate supply voltage other than the legacy supply voltage to the display assembly in lieu of or in addition to the legacy supply voltage, such that the interface is compatible with each of a first type of display assembly having a first type of voltage regulator tree that generates regulated output voltages from the legacy supply voltage and a second type of display assembly having a second type of voltage regulator tree that generates regulated output voltages from the alternate supply voltage. 6. The interface of claim 5, further comprising an input configured to receive a battery voltage from a battery coupled to the interface, and wherein the alternate supply voltage comprises the battery voltage. 7. The interface of claim 5, wherein the second type of voltage regulator tree generates regulated output voltages from the alternate supply voltage but not the legacy supply voltage. 8. The interface of claim 5, wherein the second type of voltage regulator tree generates regulated output voltages from the alternate supply voltage and the legacy supply voltage. 9. A method comprising: generating, by an interface configured to electronically interface between a processor of an information handling system and a display assembly of the information handling system, a legacy supply voltage; and generating, by the interface, an alternate supply voltage other than the legacy supply voltage to the display assembly in lieu of or in addition to the legacy supply voltage, such that the interface is compatible with each of a first type of display assembly having a first type of voltage regulator tree that generates regulated output voltages from the legacy supply voltage and a second type of display assembly having a second type of voltage regulator tree that generates regulated output voltages from the alternate supply voltage. 10. 
The method of claim 9, further comprising receiving a battery voltage from a battery coupled to the interface, and wherein the alternate supply voltage comprises the battery voltage. 11. The method of claim 9, wherein the second type of voltage regulator tree generates regulated output voltages from the alternate supply voltage but not the legacy supply voltage. 12. The method of claim 9, wherein the second type of voltage regulator tree generates regulated output voltages from the alternate supply voltage and the legacy supply voltage.
2,600
10,448
10,448
14,992,924
2,645
Aspects of the present disclosure provide techniques for reporting location information in a wireless communications system. In some cases, an apparatus may provide an indication of a degree of accuracy in the reported values.
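The claims tie the accuracy indication to the number of different orientation values consistent with a measured phase difference (claims 2 and 4 below): with widely spaced antennas, one phase reading maps to several possible arrival angles. The sketch below enumerates those candidates for a two-antenna case; the antenna spacing and wavelength are assumed values, and the application does not prescribe this particular computation.

```python
# Candidate angles of arrival from a phase difference between two antennas.
# The count of candidates is the kind of "number of different values" the
# claims use as an accuracy indication. Geometry values are illustrative.
import math

def aoa_candidates(phase_diff_rad, spacing_m, wavelength_m):
    candidates = []
    max_k = int(spacing_m / wavelength_m) + 1   # |sin(theta)| <= 1 bounds k
    for k in range(-max_k, max_k + 1):
        s = (phase_diff_rad + 2.0 * math.pi * k) * wavelength_m \
            / (2.0 * math.pi * spacing_m)
        if -1.0 <= s <= 1.0:
            candidates.append(math.asin(s))
    return candidates

# 30 cm spacing at a 12.5 cm wavelength leaves five plausible angles, so a
# reporting frame would indicate five different values for this measurement.
print(len(aoa_candidates(1.0, spacing_m=0.30, wavelength_m=0.125)))  # 5
```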
1. An apparatus for wireless communications, comprising: a receive interface configured to obtain, via at least one receive antenna, a signal transmitted from another apparatus; a processing system configured to: determine one or more values indicative of an orientation of the apparatus relative to the other apparatus, based on at least one parameter of the signal as received at the at least one receive antenna; and generate at least one frame comprising an indication of a degree of accuracy in the determined one or more values indicative of the relative orientation; and a transmit interface configured to output the at least one frame for transmission to the other apparatus. 2. The apparatus of claim 1, wherein the at least one receive antenna comprises a plurality of receive antennas, and the at least one parameter comprises a difference in phase of the signal as received at the plurality of receive antennas. 3. The apparatus of claim 1, wherein the one or more values indicative of the orientation comprises at least one of an azimuth, elevation, roll, or distance of the apparatus relative to the other apparatus. 4. The apparatus of claim 1, wherein the indication of the degree of accuracy of the relative orientation is provided via an indication of a number of different values determined based on a value of the at least one parameter. 5. The apparatus of claim 4, wherein the processing system is configured to provide at least one of the different values in the at least one frame. 6. The apparatus of claim 4, wherein the processing system is configured to generate a plurality of additional frames, each of the plurality of additional frames including one of the different values. 7. The apparatus of claim 4, wherein the processing system is configured to include at least two of the different values in the at least one frame. 8. An apparatus for wireless communications, comprising: a transmit interface configured to output a signal for transmission to another apparatus; a receive interface configured to obtain at least one frame including one or more values indicative of an orientation of the other apparatus relative to the apparatus; and a processing system configured to: determine, based on an indication received from the other apparatus, a degree of accuracy in the one or more values indicative of the relative orientation; and determine a location of the apparatus relative to the other apparatus based on at least one of the one or more values. 9. The apparatus of claim 8, wherein the processing system is configured to determine the degree of accuracy in the one or more values by determining a possible ambiguity in the one or more values, and wherein the one or more values are indicative of at least one of an azimuth, elevation, roll, or distance at which the signal was received by the other apparatus. 10. The apparatus of claim 9, wherein the processing system is further configured to take one or more actions to resolve the ambiguity, wherein the one or more actions taken to resolve the ambiguity comprise eliminating one or more of the one or more values from consideration and determining the location of the apparatus relative to the other apparatus based on a remaining one or more of the values. 11. The apparatus of claim 8, wherein the indication provided by the other apparatus comprises an indication of a number of different values determined for the orientation based on at least one parameter for the signal as received at the other apparatus. 12. 
The apparatus of claim 11, wherein the at least one parameter comprises a difference in phase of the signal as received at a plurality of receive antennas of the other apparatus. 13. The apparatus of claim 11, wherein the at least one frame comprises the indication. 14. The apparatus of claim 11, wherein the at least one frame comprises a plurality of different frames, each including one of the different values. 15. The apparatus of claim 11, wherein the at least one frame comprises a single frame including at least two of the different values. 16. A method for wireless communications by an apparatus, comprising: obtaining, via at least one receive antenna, a signal transmitted from another apparatus; determining one or more values indicative of an orientation of the apparatus relative to the other apparatus, based on at least one parameter of the signal as received at the at least one receive antenna; generating at least one frame comprising an indication of a degree of accuracy in the determined one or more values indicative of the relative orientation; and outputting the at least one frame for transmission to the other apparatus. 17. The method of claim 16, wherein the at least one receive antenna comprises a plurality of receive antennas, and the at least one parameter comprises a difference in phase of the signal as received at the plurality of receive antennas. 18. The method of claim 16, wherein the one or more values indicative of the orientation comprises at least one of an azimuth, elevation, roll, or distance of the apparatus relative to the other apparatus. 19. The method of claim 16, wherein generating the at least one frame comprises providing the indication of the degree of accuracy of the relative orientation via an indication of a number of different values determined based on a value of the at least one parameter. 20. The method of claim 19, wherein generating the at least one frame comprises providing at least one of the different values in the at least one frame. 21. The method of claim 19, further comprising generating a plurality of additional frames, each of the plurality of additional frames including one of the different values. 22. The method of claim 19, wherein generating the at least one frame comprises including at least two of the different values in the at least one frame. 23. A method for wireless communications by an apparatus, comprising: outputting a signal for transmission to another apparatus; obtaining at least one frame including one or more values indicative of an orientation of the other apparatus relative to the apparatus; determining, based on an indication received from the other apparatus, a degree of accuracy in the one or more values indicative of the relative orientation; and determining a location of the apparatus relative to the other apparatus based on at least one of the one or more values. 24. The method of claim 23, wherein determining the degree of accuracy in the one or more values comprises determining a possible ambiguity in the one or more values, and wherein the one or more values are indicative of at least one of an azimuth, elevation, roll, or distance at which the signal was received by the other apparatus. 25. 
The method of claim 24, further comprising: taking one or more actions to resolve the ambiguity, wherein taking one or more actions to resolve the ambiguity comprises: eliminating one or more of the one or more values from consideration; and determining the location of the apparatus relative to the other apparatus based on a remaining one or more of the values. 26. The method of claim 23, wherein the indication provided by the other apparatus comprises an indication of a number of different values determined for the orientation based on at least one parameter for the signal as received at the other apparatus. 27. The method of claim 26, wherein the at least one parameter comprises a difference in phase of the signal as received at a plurality of receive antennas of the other apparatus. 28. The method of claim 26, wherein the at least one frame comprises the indication. 29. The method of claim 26, wherein the at least one frame comprises a plurality of different frames, each including one of the different values. 30. The method of claim 26, wherein the at least one frame comprises a single frame including at least two of the different values. 31-49. (canceled)
2,600
10,449
10,449
15,149,052
2,647
Systems and methods for the management of infrastructure equipment utilizing intermediate performance thresholds and trend performance thresholds to characterize functionality of the infrastructure equipment according to collected performance information are provided. Infrastructure equipment performance information is collected and transmitted to an infrastructure management service. The infrastructure management service utilizes multiple performance thresholds to characterize functionality of the infrastructure equipment. By utilizing the characterization of the functionality of the infrastructure equipment, an infrastructure maintenance scheduler system can mitigate a potential interruption of wireless network service by servicing infrastructure equipment prior to a hardware failure.
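The two characterizations the claims describe, an intermediate threshold on the latest reading and a trend threshold on the change against a historical reading, reduce to a short comparison routine. The sketch below assumes simple maximum-style thresholds and made-up temperature numbers; the application itself leaves the threshold semantics open.

```python
# Minimal sketch of intermediate vs. trend characterization, with assumed
# maximum-style thresholds. A reading within limits can still trip the trend
# check if it has moved too far from its historical value.
def characterize(current, historical, intermediate_max, trend_max):
    flags = []
    if current > intermediate_max:        # intermediate threshold: level check
        flags.append("intermediate")
    if current - historical > trend_max:  # trend threshold: change check
        flags.append("trend")
    return flags                          # non-empty -> likely to fail

# e.g. a temperature of 68 C that was 50 C in the prior window:
print(characterize(current=68.0, historical=50.0,
                   intermediate_max=75.0, trend_max=10.0))  # ['trend']
```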
1. A method for managing infrastructure equipment comprising: obtaining performance information for a set of infrastructure equipment, wherein the performance information corresponds to data collected during operation of the infrastructure equipment in a wireless network; storing the performance information for the set of infrastructure equipment; and for individual infrastructure equipment in the set of infrastructure equipment: identifying one or more intermediate performance thresholds corresponding to performance information collected from the respective infrastructure equipment; characterizing a functionality of the respective infrastructure equipment based on applying the identified one or more intermediate performance thresholds; generating a notification based on a characterization of the functionality as likely to fail; identifying one or more trend performance thresholds corresponding to a set of performance information collected from the respective infrastructure equipment, wherein the set of performance information includes at least one historical performance information; characterizing a second functionality of the respective infrastructure equipment based on applying the identified one or more trend performance thresholds; and generating a notification based on a characterization of the second functionality as likely to fail. 2. The method as recited in claim 1, wherein the performance information corresponds to radio signal information associated with the respective infrastructure equipment. 3. The method as recited in claim 1, wherein the performance information corresponds to temperature information associated with the respective infrastructure equipment. 4. The method as recited in claim 1, wherein the performance information corresponds to information associated with identifiable components of the respective infrastructure equipment. 5. The method as recited in claim 1, wherein characterizing the second functionality of the respective infrastructure equipment based on applying the identified one or more trend performance thresholds includes: determining a difference between current performance information and historical performance information; and characterizing the second functionality based on comparing the determined difference with the trend performance threshold. 6. A system for managing infrastructure equipment, comprising: a data store, executed on one or more computing devices having a processor and a memory, the data store maintaining a library of performance thresholds, wherein the library of performance thresholds includes intermediate performance thresholds and trend performance thresholds; an infrastructure maintenance scheduler, executed on one or more computing devices having a processor and a memory, the infrastructure maintenance scheduler operable to: obtain performance information collected from a set of infrastructure equipment; for individual infrastructure equipment, characterize a first functionality of the respective infrastructure equipment based on applying identified one or more intermediate performance thresholds; characterize a second functionality of the respective infrastructure equipment based on applying identified one or more trend performance thresholds, wherein the set of performance information includes at least one historical performance information; and generate one or more notifications based on at least one of the first or second characterizations of the functionality. 7. 
The system as recited in claim 6, wherein the performance information corresponds to radio signal information associated with the respective infrastructure equipment. 8. The system as recited in claim 7, wherein the radio signal information corresponds to a measure of forward and return signal strength. 9. The system as recited in claim 6, wherein the performance information corresponds to antenna performance information associated with the respective infrastructure equipment. 10. The system as recited in claim 9, wherein the antenna performance information corresponds to a measured gain. 11. The system as recited in claim 6, wherein the performance information corresponds to temperature information associated with the respective infrastructure equipment. 12. The system as recited in claim 6, wherein the performance information corresponds to network connectivity information associated with the respective infrastructure equipment. 13. The system as recited in claim 6, wherein the performance information corresponds to information associated with identifiable components of the respective infrastructure equipment. 14. The system as recited in claim 6, wherein at least one of the intermediate performance thresholds or the trend performance thresholds corresponds to a minimum threshold value. 15. The system as recited in claim 6, wherein at least one of the intermediate performance thresholds or the trend performance thresholds corresponds to a maximum threshold value. 16. The system as recited in claim 6, wherein the infrastructure maintenance scheduler characterizes the second functionality of the respective infrastructure equipment based on applying the identified one or more trend performance thresholds by: determining a difference between a current performance information and a historical performance information; and characterizing the second functionality based on comparing the determined difference with the trend performance threshold. 17. A method for managing infrastructure equipment comprising: obtaining performance information for a set of infrastructure equipment, wherein the performance information corresponds to data collected during operation of the infrastructure equipment in a wireless network; and for individual infrastructure equipment in the set of infrastructure equipment: characterizing a first functionality of the respective infrastructure equipment based on applying identified one or more first performance thresholds; characterizing a second functionality of the respective infrastructure equipment based on applying identified one or more second performance thresholds utilizing at least one historical performance information; and causing maintenance of the respective infrastructure equipment based on determining that the first or second functionality is indicative of a potential failure. 18. The method as recited in claim 17, wherein the performance information corresponds to at least one of radio signal information, antenna performance or temperature associated with the respective infrastructure equipment. 19. The method as recited in claim 17, wherein the performance information corresponds to network connectivity information associated with the respective infrastructure equipment. 20. The method as recited in claim 17, wherein the performance information corresponds to information associated with identifiable components of the respective infrastructure equipment. 21. 
The method as recited in claim 17, wherein at least one of the first or second performance thresholds corresponds to a minimum threshold value or a maximum threshold value. 22. The method as recited in claim 17, wherein characterizing the second functionality of the respective infrastructure equipment based on applying the identified one or more second performance thresholds includes: determining a difference between a current performance information and a historical performance information; and characterizing the second functionality based on comparing the determined difference with the second performance threshold. 23. The method as recited in claim 22, wherein the historical performance information corresponds to performance information obtained within a time window. 24. The method as recited in claim 17, wherein causing maintenance of the respective infrastructure equipment based on determining that the first or second functionality is indicative of a potential failure includes causing maintenance of a component identified as generating the collected performance information. 25. The method as recited in claim 17, wherein causing maintenance of the respective infrastructure equipment based on determining that the first or second functionality is indicative of a potential failure includes causing maintenance of a component identified as correlated to a component generating the collected performance information.
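As a concrete illustration of the two-stage characterization recited in claims 1, 5, and 22, the sketch below applies a hypothetical intermediate (absolute) threshold and a trend threshold to temperature readings. The metric, threshold values, and all names are assumptions for illustration; the application supplies no numeric values or code.

```python
from statistics import mean

# Hypothetical thresholds; the application gives no numeric values.
INTERMEDIATE_MAX_TEMP_C = 85.0   # absolute (intermediate) performance threshold
TREND_MAX_DELTA_C = 10.0         # allowed drift vs. historical average (trend threshold)

def characterize(current_temp_c, historical_temps_c):
    """Return (first_ok, second_ok): intermediate- and trend-threshold checks."""
    # First characterization: compare the current reading to the intermediate threshold.
    first_ok = current_temp_c <= INTERMEDIATE_MAX_TEMP_C
    # Second characterization: difference between current and historical
    # performance information, compared against the trend threshold.
    delta = current_temp_c - mean(historical_temps_c)
    second_ok = delta <= TREND_MAX_DELTA_C
    return first_ok, second_ok

first_ok, second_ok = characterize(82.0, [68.0, 69.5, 70.0])
if not (first_ok and second_ok):
    print("notify infrastructure maintenance scheduler: likely to fail")
```

In this example the current reading passes the absolute check but drifts more than the allowed amount above its historical average, so only the trend characterization triggers a notification.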
2,600
10,450
10,450
14,288,228
2,661
Animating digital characters based on motion captured performances, including: receiving sensory data collected using a variety of collection techniques including optical video, electro-oculography, and at least one of optical, infrared, and inertial motion capture; and managing and combining the collected sensory data to aid cleaning, tracking, labeling, and re-targeting processes. Keywords include Optical Video Data and Inertial Motion Capture.
1. A method of animating digital characters based on motion captured performances, the method comprising: receiving sensory data collected using a variety of collection techniques including optical video, electro-oculography, and at least one of optical, infrared, and inertial motion capture; and managing and combining the collected sensory data to aid cleaning, tracking, labeling, and re-targeting processes.
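The claim turns on managing and combining multi-modal sensory data; below is a minimal sketch of one plausible way to time-align the named streams before the cleaning, tracking, labeling, and re-targeting steps. The data values, rates, and helper function are invented for illustration and are not taken from the application.

```python
from bisect import bisect_left

def nearest(samples, t):
    """samples: sorted list of (timestamp, value); return the value nearest t."""
    i = bisect_left([s[0] for s in samples], t)
    candidates = samples[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))[1]

optical = [(0.00, "frame0"), (0.04, "frame1")]          # optical video frames
eog = [(0.00, -0.10), (0.01, -0.05), (0.04, 0.20)]       # electro-oculography samples
inertial = [(0.000, (0.0, 0.0, 9.8)), (0.035, (0.1, 0.0, 9.7))]  # IMU samples

# Combine the streams into one record per video frame, keyed by timestamp.
combined = [
    {"t": t, "video": nearest(optical, t), "eog": nearest(eog, t),
     "imu": nearest(inertial, t)}
    for t, _ in optical
]
print(combined[0])
```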
2,600
10,451
10,451
14,449,707
2,653
A method for receiving authorization from a consumer includes steps of connecting a phone call associated with a consumer to an interactive voice response system after an agent has conveyed a confirmation number to the consumer, receiving the confirmation number from the consumer at the interactive voice response system to provide authorization from the consumer, and storing a record of entry of the confirmation number by the consumer through the interactive voice response system in a non-transitory computer readable storage medium.
1. A method for receiving authorization from a consumer, the method comprising steps of: communicating a confirmation number to an agent; connecting a phone call associated with a consumer to an interactive voice response system after the agent has conveyed the confirmation number to the consumer; receiving the confirmation number from the consumer at the interactive voice response system to provide authorization from the consumer; and storing a record of entry of the confirmation number by the consumer through the interactive voice response system in a non-transitory computer readable storage medium. 2. The method of claim 1 further comprising prompting the consumer through the interactive voice response system to press a number or symbol after entry of the confirmation number and, if the number or symbol is not pressed, connecting the phone call associated with the consumer back to the agent. 3. The method of claim 1 wherein the step of communicating the confirmation number to the agent comprises displaying the confirmation number on a display associated with the agent. 4. The method of claim 1 wherein the step of connecting the phone call associated with the consumer to the interactive voice response system comprises conferencing the consumer, the agent, and the interactive voice response system. 5. The method of claim 1 further comprising confirming identity of the consumer through the interactive voice response system before receiving the confirmation number from the consumer. 6. A method for receiving authorization from a consumer, the method comprising steps of: providing a collections relationship management system; providing an interactive voice response system; communicating a confirmation number to an agent through the collections relationship management system; connecting a phone call associated with a consumer to the interactive voice response system after the agent has conveyed the confirmation number to the consumer; receiving the confirmation number from the consumer at the interactive voice response system to provide authorization from the consumer; and storing a record of entry of the confirmation number by the consumer through the interactive voice response system in a database stored in a non-transitory computer readable storage medium. 7. The method of claim 6 further comprising prompting the consumer through the interactive voice response system to press a number or symbol after entry of the confirmation number and, if the number or symbol is not pressed, connecting the phone call associated with the consumer back to the agent. 8. The method of claim 6 wherein the step of communicating the confirmation number to the agent comprises displaying the confirmation number on a display associated with the agent. 9. The method of claim 6 wherein the step of connecting the phone call associated with the consumer to the interactive voice response system comprises conferencing the consumer, the agent, and the interactive voice response system. 10. The method of claim 6 further comprising confirming identity of the consumer through the interactive voice response system before receiving the confirmation number from the consumer.
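A sketch of the call flow in claims 1 through 4 follows; every object, method, and identifier here is a hypothetical stand-in (the application names no software interfaces), with the telephony, IVR, and storage systems assumed to be supplied by the caller.

```python
import secrets

# Hypothetical interfaces: agent, ivr, call, and store stand in for the
# collections relationship management system, IVR, telephony, and database.
def authorize_via_ivr(call, agent, ivr, store):
    code = f"{secrets.randbelow(10**6):06d}"  # confirmation number
    agent.display(code)                       # claim 3: show on agent's display
    ivr.conference(call, agent)               # claim 4: conference all parties
    entered = ivr.collect_digits(call)        # consumer keys in the number
    if entered != code or not ivr.confirm_keypress(call):
        ivr.transfer(call, agent)             # claim 2: return call to the agent
        return False
    store.save_entry_record(call.consumer_id, code)  # claim 1: persist record
    return True
```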
2,600
10,452
10,452
15,400,763
2,662
The present discussion relates to the use of deep learning techniques to accelerate iterative reconstruction of images, such as CT, PET, and MR images. The present approach utilizes deep learning techniques to provide a better initialization to one or more steps of the numerical iterative reconstruction algorithm by learning a trajectory of convergence from estimates at different convergence statuses, so that the algorithm can reach the maximum or minimum of a cost function faster.
1. A neural network training method, comprising: acquiring a plurality of sets of scan data; performing an iterative reconstruction of each set of scan data to generate one or more input images and one or more target images for each set of scan data, wherein the one or more input images correspond to lower iteration steps or earlier convergence status of the iterative reconstruction than the one or more target images; and training a neural network to generate a trained neural network by providing the one or more input images and corresponding one or more target images for each set of scan data to the neural network. 2. The neural network training method of claim 1, further comprising generating a loss function that characterizes the difference between the one or more target images and predictions made by the neural network. 3. The neural network training method of claim 1, wherein the one or more input images comprise at least a subset of difference images generated by subtracting images generated at the lower iteration steps or earlier convergence status. 4. The neural network training method of claim 1, wherein the one or more input images comprise image feature descriptors or image patches and the target images comprise corresponding image feature descriptors or image patches. 5. The neural network training method of claim 1, wherein the one or more input images and corresponding target images are smaller than the full-size images whose reconstruction the trained neural network will be used to facilitate. 6. An iterative reconstruction method, comprising: acquiring a set of scan data; performing an initial reconstruction of the set of scan data to generate one or more initial images; providing the one or more initial images to a trained neural network as inputs; receiving a predicted image or a predicted update as an output of the trained neural network; initializing an iterative reconstruction algorithm using the predicted image or an image generated using the predicted update; and running the iterative reconstruction algorithm for a plurality of steps to generate an output image. 7. The iterative reconstruction method of claim 6, wherein the initial reconstruction is an iterative reconstruction. 8. The iterative reconstruction method of claim 7, wherein the iterative reconstruction is one of an ordered subset expectation maximization (OSEM), penalized likelihood reconstruction, compressed-sensing reconstruction, algebraic reconstruction technique (ART), projection onto convex sets (POCS) reconstruction, or filtered versions of these iterative reconstructions. 9. The iterative reconstruction method of claim 6, wherein the initial reconstruction is an analytic reconstruction. 10. The iterative reconstruction method of claim 9, wherein the analytic reconstruction is one of a Feldkamp-Davis-Kress (FDK) reconstruction, a filtered back projection (FBP), or a filtered version of these reconstructions. 11. The iterative reconstruction method of claim 6, wherein the set of scan data is one of a set of computed tomography scan data, a set of positron emission tomography scan data, a set of single-photon emission computed tomography scan data, or a set of magnetic resonance imaging scan data. 12. The iterative reconstruction method of claim 6, wherein the predicted image has a cost function value corresponding to an iteratively reconstructed image obtained from performing a number of iteration steps on the one or more initial images. 13. 
The iterative reconstruction method of claim 6, further comprising: providing the output image to the trained neural network or to a second trained neural network as a subsequent input; receiving a second predicted image or a second predicted update from the trained neural network or the second trained neural network; initializing a second instance of the iterative reconstruction algorithm using the second predicted image or a derived image generated using the second predicted update; and running the second instance of the iterative reconstruction algorithm for a plurality of steps to generate a second output image. 14. The iterative reconstruction method of claim 6, wherein the iterative reconstruction algorithm reaches a cost function value in fewer iterations than if the iterative reconstruction algorithm were run on the set of scan data without generating the predicted image or predicted update using the trained neural network. 15. The iterative reconstruction method of claim 6, further comprising providing image feature descriptors or image patches in addition to the one or more initial images to the trained neural network. 16. The iterative reconstruction method of claim 6, further comprising providing hyper-parameters or a transformation of the hyper-parameters and scan data in addition to the one or more initial images to the trained neural network. 17. An imaging system comprising: a data acquisition system configured to acquire a set of scan data from one or more scan components; a processing component configured to execute one or more stored processor-executable routines; and a memory storing the one or more executable routines, wherein the one or more executable routines, when executed by the processing component, cause acts to be performed comprising: performing an initial reconstruction of the set of scan data to generate one or more initial images; providing the one or more initial images to a trained neural network as inputs; receiving a predicted image or a predicted update as an output of the trained neural network; initializing an iterative reconstruction algorithm using the predicted image or an image generated using the predicted update; and running the iterative reconstruction algorithm for a plurality of steps to generate an output image. 18. The imaging system of claim 17, wherein the initial reconstruction is an iterative reconstruction. 19. The imaging system of claim 17, wherein the initial reconstruction is an analytic reconstruction. 20. The imaging system of claim 17, wherein the imaging system is one of a computed tomography imaging system, a positron emission tomography imaging system, a single-photon emission computed tomography system, or a magnetic resonance imaging system. 21. The imaging system of claim 17, wherein the predicted image has a cost function value corresponding to an iteratively reconstructed image obtained from performing a number of iteration steps on the one or more initial images.
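A toy sketch of the network-initialized iterative loop of claims 6 and 17 follows, using gradient descent on a least-squares cost as a stand-in for the iterative reconstruction algorithm and a surrogate callable in place of the trained network; nothing in it comes from the application.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 32))              # stand-in system matrix
x_true = rng.normal(size=32)
y = A @ x_true                             # stand-in scan data

def initial_recon(y):
    return 1e-3 * (A.T @ y)                # crude analytic-style initial image

def recon_step(x, step=1e-3):
    return x - step * (A.T @ (A @ x - y))  # one iteration toward the cost minimum

def surrogate_net(x):
    # Stand-in for the trained network: jumps part-way along the convergence
    # trajectory (it peeks at x_true purely to mimic a good prediction).
    return x + 0.5 * (x_true - x)

x = surrogate_net(initial_recon(y))        # network-provided initialization
for _ in range(100):                       # then run the iterative algorithm
    x = recon_step(x)
print(float(np.linalg.norm(A @ x - y)))    # cost is lower than without the net
```

Dropping the `surrogate_net` call and re-running shows the same 100 iterations end at a higher residual, which is the acceleration effect claim 14 recites.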
2,600
10,453
10,453
13,932,177
2,648
A fully functional spectrum analyzer is integrated into an outdoor communications unit of a point-to-point communication system. The spectrum analyzer of the outdoor unit provides for remote spectral diagnostics for network planning and wideband operation and is operable to capture signals outside of the signal bandwidth. With the spectrum analyzer integrated into the outdoor unit, spectral diagnostic information can be accessed without disrupting the normal operation of the communications network.
1. A communications unit, comprising: a transmitter generating outgoing communication signals; a receiver processing received incoming communication signals; and an integrated spectrum analyzer with a digital signal processor, the digital signal processor configured to process at least the incoming communication signals to determine spectral information associated with the incoming communication signals. 2. The communications unit of claim 1, wherein the receiver is a direct-conversion receiver having one or more receiver paths that limit incoming communication signal bandwidth by at least one wide band low-pass filter. 3. The communications unit of claim 2, further comprising one or more wide band high speed converters operatively connected to the at least one wide band low-pass filter, wherein the digital signal processor is further configured to capture the spectral information from the wide band low-pass filtered incoming communication signals. 4. The communications unit of claim 3, wherein the digital signal processor is further configured to process the captured spectral information. 5. The communications unit of claim 4, wherein the digital signal processor is further configured to process the captured spectral information by digital Fast Fourier Transformation (Digital FFT). 6. The communications unit of claim 3, wherein the digital signal processor further comprises a memory and is further configured to store the processed captured spectral information as data. 7. The communications unit of claim 6, wherein the digital signal processor is further configured to provide the stored data to a remote network operator. 8. The communications unit of claim 1, wherein the receiver is a direct-conversion receiver having one or more first receiver signal paths that limit incoming communication signal bandwidth by at least one operational band low-pass filter and one or more second receiver signal paths that remain unfiltered. 9. The communications unit of claim 8, wherein the digital signal processor is further configured to process the one or more first and second receiver signal paths to capture the spectral information. 10. A method of detecting out-of-band interferers to a communications unit, the method performed within the communications unit and comprising: receiving incoming communications signals; limiting bandwidth of the incoming communication signals to wide band incoming communications signals; converting the wide band incoming communications signals with high speed wide band converters to capture spectral information; and processing and storing the captured spectral information as data. 11. The method of claim 10, wherein the communications unit is a point-to-point outdoor communications unit. 12. The method of claim 10, wherein the communications unit is a microwave point-to-point outdoor communications unit. 13. The method of claim 10, wherein the communications unit is an RF point-to-point outdoor communications unit. 14. The method of claim 10, wherein the processing is performed by a digital signal processor configured to provide the captured spectral information to a remote network operator. 15. 
An outdoor point-to-point communications unit comprising: a spectrum analyzer including a digital signal processor; a transmitter operatively connected to the digital signal processor and generating outgoing communication signals; a receiver operatively connected to the digital signal processor and processing received incoming communication signals; and the digital signal processor configured to process at least the received incoming communication signals to determine spectral information associated with the incoming communication signals. 16. The outdoor point-to-point communications unit of claim 15, wherein the receiver is a direct-conversion receiver having one or more receiver paths that limit incoming communication signal bandwidth by at least one wide band low-pass filter and one or more wide band high speed converters operatively connected to the at least one wide band low-pass filter configured to capture the spectral information from the wide band low-pass filtered incoming communication signals. 17. The outdoor point-to-point communications unit of claim 15, wherein the receiver is a direct-conversion receiver having one or more first receiver signal paths that limit incoming communication signal bandwidth by at least one operational band low-pass filter and one or more second receiver paths that remain unfiltered and the digital signal processor is configured to process the one or more first and second receiver signal paths to capture the spectral information. 18. The outdoor point-to-point communications unit of claim 15, further comprising an adaptive digital pre-distortion feedback loop for the transmitter. 19. The outdoor point-to-point communications unit of claim 15, wherein the digital signal processor further comprises a memory and the digital signal processor is further configured to process and store in the memory the captured spectral information as data. 20. The outdoor point-to-point communications unit of claim 19, wherein the digital signal processor is further configured to provide the stored data to a remote network operator.
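Claims 5 through 7 describe FFT processing and storage of the captured spectrum; the short numeric sketch below illustrates the idea with an invented sample rate, tone frequencies, and storage format, none of which come from the application.

```python
import numpy as np

fs = 1_000_000.0                                   # stand-in converter rate, Hz
t = np.arange(4096) / fs
iq = np.exp(2j * np.pi * 150_000.0 * t)            # in-band carrier
iq += 0.3 * np.exp(2j * np.pi * 420_000.0 * t)     # out-of-band interferer
# Digital FFT of the wideband-digitized receive path (claim 5).
spectrum_db = 20 * np.log10(np.abs(np.fft.fftshift(np.fft.fft(iq))) + 1e-12)
freqs = np.fft.fftshift(np.fft.fftfreq(iq.size, d=1 / fs))
stored = {"freq_hz": freqs, "power_db": spectrum_db}  # spectral info kept as data (claim 6)
print(f"strongest component near {freqs[int(np.argmax(spectrum_db))]:.0f} Hz")
```

Because the wideband path is not limited to the operational channel, the 420 kHz interferer remains visible in `stored`, which is what makes remote out-of-band diagnostics possible.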
2,600
10,454
10,454
15,285,995
2,625
A display unit is equipped with an LED circuit body for irradiating light onto the windshield of a vehicle and a surface panel positioned on the front side in the light irradiating direction of the LED circuit body and constituting part of the surface of the instrument panel of the vehicle. The surface panel is provided with a plurality of holes formed in the direction connecting the LED circuit body to the windshield. The light irradiating direction of the LED circuit body is set to the direction in which the light irradiated from the LED circuit body and reflected by the windshield is visually recognized by the driver.
1. A display device, comprising a display section for irradiating light onto the windshield of a vehicle, and a surface panel positioned on a front side in a light irradiating direction of the display section and constituting part of a surface of an instrument panel of the vehicle, wherein the surface panel is provided with a plurality of holes formed in a direction connecting the display section to the windshield, and a light irradiating direction of the display section is set to a direction in which the light irradiated from the display section and reflected by the windshield is visually recognized by the driver. 2. The display device according to claim 1, wherein the display section is implemented as a pair of display sections installed on the left and right sides. 3. The display device according to claim 2, wherein an instrument unit for displaying states of the vehicle is installed in the instrument panel, and the display section is implemented as a pair of display sections installed on the left and right sides of the instrument unit. 4. The display device according to claim 3, further comprising a control section for controlling display produced by the display section and the instrument unit, wherein the control section relates a content of the display produced by the display section to a content of the display produced by the instrument unit.
2,600
10,455
10,455
15,234,552
2,632
A method and apparatus provide equal energy codebooks for coupled antennas with transmission lines. A plurality of precoders can be received from a codebook in a transmitter having an antenna array. Each precoder of the plurality of precoders can be transformed to a transformed precoder such that the transmit power for each transformed precoder is equal to the transmit power for each of other transformed precoders of the plurality of precoders. The transmit power can be expressed as a quadratic form with respect to the corresponding precoder. The quadratic form can be based on a transmission line impedance of a transmission line between a signal source and the antenna array. A signal can be received from the signal source. A transformed precoder of the plurality of transformed precoders can be applied to the signal to generate a precoded signal for transmission over a physical channel. The precoded signal can be transmitted.
1. A method comprising: receiving a plurality of precoders from a codebook in a transmitter having an antenna array; transforming each precoder of the plurality of precoders to a transformed precoder such that the transmit power for each transformed precoder is equal to the transmit power for each of other transformed precoders of the plurality of precoders, where the transmit power is expressed as a quadratic form with respect to the corresponding precoder, where the quadratic form is based on a transmission line impedance of a transmission line between a signal source and the antenna array; receiving a signal from the signal source; applying a transformed precoder of the plurality of transformed precoders to the signal to generate a precoded signal for transmission over a physical channel; and transmitting the precoded signal. 2. The method according to claim 1, wherein the quadratic form is also based on an impedance matrix of the antenna array. 3. The method according to claim 1, wherein the quadratic form is also based on a matching network between the signal source and the antenna array. 4. The method according to claim 1, wherein the quadratic form is also based on an impedance of the antenna array, a signal source impedance, and length of the transmission line. 5. The method according to claim 1, wherein each precoder is transformed by scaling each precoder by an inverse square root of a transmit power that results from the corresponding precoder before scaling is applied. 6. The method according to claim 5, wherein scaling comprises normalizing a precoder into a normalized precoder based on the quadratic form. 7. The method according to claim 5, further comprising transmitting a scaling factor used for the scaling. 8. The method according to claim 1, wherein each precoder is transformed by multiplying each precoder by a transformation matrix such that the resulting set of precoders each map to antenna patterns having the same power. 9. The method according to claim 8, wherein the transformation matrix is based on an impedance matrix of the antenna array seen at the signal source as a function of the transmission line length, the transmission line impedance, and an impedance of the antenna array. 10. The method according to claim 1, wherein the transformation equalizes the transmit power for all of the plurality of precoders. 11. The method according to claim 1, wherein transforming transforms the precoders to equal energy precoders when transmission lines are used between a source of the signal and the antenna array. 12. The method according to claim 1, wherein both data symbol precoders and reference symbol precoders are transformed by the same transformation. 13. The method according to claim 12, further comprising: receiving a precoded signal including reference symbols; estimating channels for the reference symbols; estimating a channel for the data symbols by taking an inner product of a conjugate of a data symbol precoder and the reference symbol channel estimates; and demodulating received data symbols based on the estimated channel. 14. 
An apparatus comprising: an antenna array; a transceiver coupled to the antenna array; a signal source coupled to the transceiver; a transmission line coupled between the signal source and the antenna array, the transmission line having a transmission line impedance; a memory to store a codebook including a plurality of precoders; and a controller coupled to the transceiver and the memory, the controller configured to receive a plurality of precoders from the codebook, transform each precoder of the plurality of precoders to a transformed precoder such that the transmit power for each transformed precoder is equal to the transmit power for each of the other transformed precoders of the plurality of precoders, where the transmit power is expressed as a quadratic form with respect to the corresponding precoder, where the quadratic form is based on the transmission line impedance, receive a signal from the signal source, and apply a transformed precoder of the plurality of transformed precoders to the signal to generate a precoded signal for transmission over a physical channel, wherein the transceiver transmits the precoded signal. 15. The apparatus according to claim 14, wherein the quadratic form is also based on at least one selected from an impedance matrix of the antenna array, a matching network between the signal source and the antenna array, an impedance of the antenna array, a signal source impedance, and a length of the transmission line. 16. The apparatus according to claim 14, wherein each precoder is transformed by scaling each precoder by an inverse square root of a transmit power that results from the corresponding precoder before scaling is applied. 17. The apparatus according to claim 16, wherein scaling comprises normalizing a precoder into a normalized precoder based on the quadratic form. 18. The apparatus according to claim 14, wherein each precoder is transformed by multiplying each precoder by a transformation matrix such that the resulting set of precoders each map to antenna patterns having the same power. 19. The apparatus according to claim 18, wherein the transformation matrix is based on an impedance matrix of the antenna array seen at the signal source as a function of the transmission line length, the transmission line impedance, and an impedance of the antenna array. 20. The apparatus according to claim 14, wherein the controller transforms the precoders to equal energy precoders when transmission lines are used between a source of the signal and the antenna array.
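As a rough illustration of the equalization recited in claims 1 and 5, the minimal sketch below (not taken from the application) models transmit power as the quadratic form w^H A w and scales each codebook precoder by the inverse square root of its own power. The impedance-derived matrix A is a randomly generated positive-definite stand-in, since the actual transmission-line and antenna impedances are not given.

```python
# Sketch: equal-energy precoder transformation via quadratic-form power.
import numpy as np

rng = np.random.default_rng(0)

n_ant = 4                                   # antennas in the array
B = rng.normal(size=(n_ant, n_ant)) + 1j * rng.normal(size=(n_ant, n_ant))
A = B.conj().T @ B + n_ant * np.eye(n_ant)  # Hermitian positive-definite stand-in

def transmit_power(w: np.ndarray) -> float:
    """Quadratic-form power w^H A w (claim 1)."""
    return float(np.real(w.conj().T @ A @ w))

def equalize(codebook: list[np.ndarray]) -> list[np.ndarray]:
    """Scale each precoder by 1/sqrt(its own power) (claim 5)."""
    return [w / np.sqrt(transmit_power(w)) for w in codebook]

codebook = [rng.normal(size=n_ant) + 1j * rng.normal(size=n_ant)
            for _ in range(8)]
for w in equalize(codebook):
    assert abs(transmit_power(w) - 1.0) < 1e-9  # all powers equal after transform
```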
2,600
10,456
10,456
16,056,878
2,685
A codeset having function-code combinations is provisioned on a controlling device to control functions of an intended target device. Input is provided to the controlling device which designates a function to be controlled on the intended target device. From a plurality of codes that are each associated with the designated function in a database stored in a memory of the controlling device, a first code is selected that is determined to be valid for use in controlling the designated function on the intended target device. When the codeset is then provisioned on the controlling device, the provisioned codeset includes, as a function-code combination thereof, the designated function and the first code.
1. A controlling device adapted to command a functional operation of a controlled device, comprising: a processing device; an input element coupled to the processing device; a transmitter coupled to the processing device; and a memory coupled to the processing device; wherein the memory has stored therein a first preset codeset, a second preset codeset, and instructions executable by the processing device which, when executed by the processing device, cause the controlling device to respond to the input element of the controlling device being activated by determining if a keycode corresponding to the activated input element is present in the first preset codeset and, when it is determined that the keycode corresponding to the activated input element is not present in the first preset codeset, automatically using a keycode corresponding to the activated input element in the second preset codeset to generate a command for transmission to the controlled device via use of the transmitter. 2. The controlling device as recited in claim 1, wherein the first preset codeset and the second preset codeset are each intended for use in commanding functional operations of a controlled device of a same type. 3. The controlling device as recited in claim 1, wherein the first preset codeset and the second preset codeset are each intended for use in commanding functional operations of a controlled device of a different type. 4. The controlling device as recited in claim 1, wherein the instructions, when executed by the processing device, check a status of a flag bit associated with the first preset codeset when determining if the keycode corresponding to the activated input element is present in the first preset codeset. 5. The controlling device as recited in claim 1, wherein the instructions, when executed by the processing device, use timing and modulation scheme information associated with the second preset codeset when transmitting the command to the controlled device via use of the transmitter. 6. The controlling device as recited in claim 1, wherein the memory stores information that functions to link the second preset codeset to the first preset codeset. 7. The controlling device as recited in claim 6, wherein the instructions, when executed by the processing device, further cause the controlling device to implement an auto-scan procedure for use in determining the information. 8. The controlling device as recited in claim 1, wherein the transmitter comprises an infrared transmitter. 9. The controlling device as recited in claim 1, wherein the transmitter comprises a radio frequency transmitter of the controlling device.
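The fallback behavior of claim 1 reduces to a two-level lookup: serve the keycode from the first preset codeset when present, otherwise fall back to the second. The sketch below is a minimal illustration under that reading; the codesets, key names, and hex codes are hypothetical placeholders, not data from the application.

```python
# Sketch: claim-1 keycode fallback between two preset codesets.
FIRST_CODESET = {"power": 0x10, "vol_up": 0x11, "vol_down": 0x12}
SECOND_CODESET = {"power": 0x20, "input": 0x21, "menu": 0x22}

def keycode_for(key: str) -> int | None:
    """Return the code to transmit for an activated input element."""
    if key in FIRST_CODESET:           # keycode present in first preset codeset
        return FIRST_CODESET[key]
    return SECOND_CODESET.get(key)     # automatic fallback to second codeset

assert keycode_for("power") == 0x10    # served by the first codeset
assert keycode_for("menu") == 0x22     # falls back to the second codeset
```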
2,600
10,457
10,457
13,323,840
2,647
A method, device, and system for deriving keys are provided in the field of mobile communications technologies. The method for deriving keys may be used, for example, in a handover process of a User Equipment (UE) from an Evolved Universal Terrestrial Radio Access Network (EUTRAN) to a Universal Terrestrial Radio Access Network (UTRAN). If a failure occurs in a first handover, the method ensures that the key derived by a source Mobility Management Entity (MME) for a second handover process of the UE is different from the key derived for the first handover process of the UE. This is done by changing the input parameters used in the key derivation, preventing the prior-art situation in which, once the key used on one Radio Network Controller (RNC) is obtained, the keys used on other RNCs can be derived from it, thereby enhancing network security.
1. A method for deriving a key for use by a user equipment (UE) in a target radio access network when the UE is handed over from a source radio access network to the target radio access network, comprising: receiving, by a mobility management entity (MME) of the source radio access network, a “Handover Required” message from a base station (BS) of the source radio access network; obtaining, by the MME, a Non-Access Stratum (NAS) downlink COUNT value; deriving, by the MME, a key for use by the UE in the target radio access network according to a Key Derivation Function (KDF), a root key, and the NAS downlink COUNT value; and sending, by the MME, at least a part of the NAS downlink COUNT value to the UE. 2. The method according to claim 1, wherein the NAS downlink COUNT value is a new NAS downlink COUNT value obtained based on a current NAS downlink COUNT value, and obtaining, by the MME, the new NAS downlink COUNT value comprises: adding, by the MME, a certain value to the current NAS downlink COUNT value. 3. The method according to claim 2, wherein the certain value is 1. 4. The method according to claim 1, wherein the obtaining, by the MME, the NAS downlink COUNT value comprises: sending, by the MME, an NAS message to the UE, and using an NAS downlink COUNT value after the NAS message is sent as the NAS downlink COUNT value. 5. The method according to claim 1, wherein the sending, by the MME, at least a part of the NAS downlink COUNT value to the UE comprises: sending, by the MME, four least significant bits of the NAS downlink COUNT value to the UE. 6. The method according to claim 1, wherein the source radio access network is an Evolved Universal Terrestrial Radio Access Network (EUTRAN). 7. The method according to claim 1, wherein the target radio access network is a Universal Terrestrial Radio Access Network (UTRAN). 8. A method for deriving a key for use by a user equipment (UE) in a target radio access network when the UE is handed over from a source radio access network to a target radio access network, comprising: receiving, by the UE, a Handover Command message from a base station of the source radio access network; wherein the Handover Command message comprises at least a part of a Non-Access Stratum (NAS) downlink COUNT value obtained from a Mobility Management Entity (MME) of the source radio access network; and deriving, by the UE, the key for use by the UE in the target radio access network according to a Key Derivation Function (KDF), a root key, and the at least a part of the NAS downlink COUNT value. 9. The method according to claim 8, wherein the NAS downlink COUNT value is a new NAS downlink COUNT value obtained based on a current NAS downlink COUNT value, and the new NAS downlink COUNT value is obtained by adding a certain value to the current NAS downlink COUNT value. 10. The method according to claim 9, wherein the certain value is 1. 11. The method according to claim 8, wherein the at least a part of the NAS downlink COUNT value comprises: four least significant bits of the NAS downlink COUNT value. 12. The method according to claim 8, wherein the source radio access network is an Evolved Universal Terrestrial Radio Access Network (EUTRAN). 13. The method according to claim 8, wherein the target radio access network is a Universal Terrestrial Radio Access Network (UTRAN). 14. 
An apparatus for deriving a key for use by a user equipment (UE) in a target radio access network when the UE is handed over from a source radio access network to the target radio access network, comprising: a receiving unit, configured to receive a “Handover Required” message from a base station of the source radio access network; an obtaining unit, configured to obtain a Non-Access Stratum (NAS) downlink COUNT value; a key deriving unit, configured to derive a new key for the UE to use in the target radio access network according to a Key Derivation Function (KDF), a root key, and the NAS downlink COUNT value; and a sending unit, configured to send at least a part of the NAS downlink COUNT value to the UE. 15. The apparatus according to claim 14, wherein the NAS downlink COUNT value is a new NAS downlink COUNT value obtained based on a current NAS downlink COUNT value, and the obtaining unit is configured to obtain the new NAS downlink COUNT value by adding a certain value to the current NAS downlink COUNT value. 16. The apparatus according to claim 15, wherein the certain value is 1. 17. The apparatus according to claim 14, wherein the sending unit is configured to send four least significant bits of the NAS downlink COUNT value to the UE. 18. The apparatus according to claim 14, wherein the source radio access network is an Evolved Universal Terrestrial Radio Access Network (EUTRAN). 19. The apparatus according to claim 14, wherein the target radio access network is a Universal Terrestrial Radio Access Network (UTRAN). 20. The apparatus according to claim 14, wherein the apparatus is a Mobility Management Entity of the source radio access network. 21. A user equipment (UE), comprising: a receiving unit, configured to receive a Handover Command message from a base station of a source radio access network where the UE is located; wherein the Handover Command message comprises at least a part of a Non-Access Stratum (NAS) downlink COUNT value obtained from a Mobility Management Entity (MME) of the source radio access network; and a key deriving unit, configured to derive a key for use by the UE in a target radio access network which the UE is handed over to according to a Key Derivation Function (KDF), a root key, and the at least a part of the NAS downlink COUNT value. 22. The UE according to claim 21, wherein the NAS downlink COUNT value is a new NAS downlink COUNT value obtained based on a current NAS downlink COUNT value, and the new NAS downlink COUNT value is obtained by adding a certain value to the current NAS downlink COUNT value. 23. The UE according to claim 22, wherein the certain value is 1. 24. The UE according to claim 21, wherein the at least a part of the NAS downlink COUNT value comprises four least significant bits of the NAS downlink COUNT value. 25. The UE according to claim 21, wherein the source radio access network is an Evolved Universal Terrestrial Radio Access Network (EUTRAN). 26. The UE according to claim 21, wherein the target radio access network is a Universal Terrestrial Radio Access Network (UTRAN).
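A minimal sketch of the derivation in claims 1-3 and 5: the key depends on a root key and the NAS downlink COUNT, and bumping the COUNT before a retried handover guarantees a fresh key. HMAC-SHA256 stands in for the unspecified KDF, and the byte layout of the COUNT input is invented for illustration.

```python
# Sketch: fresh key per handover attempt via root key + NAS downlink COUNT.
import hmac
import hashlib

def kdf(root_key: bytes, nas_dl_count: int) -> bytes:
    """Stand-in for KDF(root key, NAS downlink COUNT)."""
    return hmac.new(root_key, nas_dl_count.to_bytes(4, "big"),
                    hashlib.sha256).digest()

root_key = bytes(32)    # placeholder 256-bit root key
count = 7               # current NAS downlink COUNT

k1 = kdf(root_key, count)
count += 1              # claims 2-3: add a certain value (here 1) on retry
k2 = kdf(root_key, count)
assert k1 != k2         # the second handover attempt gets a different key

# Claim 5: only the four least significant bits of the COUNT are sent to
# the UE, which reconstructs the full value from its local estimate.
lsb4 = count & 0xF
```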
2,600
10,458
10,458
14,634,244
2,624
One embodiment provides an information handling device, including: a processor; a memory device that stores instructions executable by the processor to: receive, at an electronic device, a drawing input; receive, at the electronic device, secondary user input; and modify the drawing input based on the secondary user input; wherein the secondary user input is voice input. Other aspects are described and claimed.
1. A method, comprising: receiving, at an electronic device, a drawing input; receiving, at the electronic device, secondary user input; and modifying the drawing input based on the secondary user input; wherein the secondary user input is voice input. 2. The method of claim 1, wherein the modification is applied while the drawing input is received. 3. The method of claim 1, wherein the receiving a drawing input further comprises detecting one or more of: stylus pressure, stylus speed, and stylus direction that exceeds a predetermined threshold; and wherein the modifying the drawing input occurs in response to the detecting. 4. The method of claim 1, wherein the receiving a drawing input comprises accepting a drawing input entered using a device selected from the group consisting of: a mouse, a finger, a pen, a capacitive stylus, a resistive stylus, a surface acoustic wave stylus, and an active digitizer pen. 5. The method of claim 1, wherein the modifying comprises changing a character of the drawing input. 6. The method of claim 5, wherein the character is one or more of: color, thickness, shape, texture, diameter, scattering, opacity, flow, hardness, and angle. 7. The method of claim 1, wherein the modifying comprises changing a character of an object, wherein the character is one or more of: orientation, rotation, color, size, shape, filter, and opacity. 8. The method of claim 1, further comprising interpreting the drawing input as an animation trajectory. 9. The method of claim 8, wherein the modifying comprises modifying at least one preset animation along the animation trajectory. 10. The method of claim 1, wherein the modification is applied not while the drawing input is received. 11. An information handling device, comprising: a processor; a memory device that stores instructions executable by the processor to: receive, at an electronic device, a drawing input; receive, at the electronic device, secondary user input; and modify the drawing input based on the secondary user input; wherein the secondary user input is voice input. 12. The information handling device of claim 11, wherein the modification is applied while the drawing input is received. 13. The information handling device of claim 11, wherein the receiving a drawing input further comprises detecting one or more of: stylus pressure, stylus speed, and stylus direction that exceeds a predetermined threshold; and wherein the modifying the drawing input occurs in response to the detecting. 14. The information handling device of claim 11, wherein the receiving a drawing input comprises accepting a drawing input entered using a device selected from the group consisting of: a mouse, a finger, a pen, a capacitive stylus, a resistive stylus, a surface acoustic wave stylus, and an active digitizer pen. 15. The information handling device of claim 11, wherein the modifying comprises changing a character of the drawing input, wherein the character is one or more of: color, thickness, shape, texture, diameter, scattering, opacity, flow, hardness, and angle. 16. The information handling device of claim 11, wherein the modifying comprises changing a character of an object, wherein the character is one or more of: orientation, rotation, color, size, shape, filter, and opacity. 17. The information handling device of claim 11, further comprising interpreting the drawing input as an animation trajectory. 18. 
The information handling device of claim 17, wherein the modifying comprises modifying at least one preset animation along the animation trajectory. 19. The information handling device of claim 11, wherein the modification is applied not while the drawing input is received. 20. A product, comprising: a storage device having code stored therewith, the code being executable by a processor and comprising: code that receives, at an electronic device, a drawing input; code that receives, at the electronic device, secondary user input; and code that modifies the drawing input based on the secondary user input; wherein the secondary user input is voice input.
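The core of claims 1, 5, and 6 is that a voice command changes a character of a stroke while or after it is drawn. The sketch below illustrates that idea under assumed data structures; the Stroke class and the parsed command strings are invented, and a real system would sit behind a speech recognizer.

```python
# Sketch: modify a drawing input (stroke) based on secondary voice input.
from dataclasses import dataclass, field

@dataclass
class Stroke:
    points: list[tuple[float, float]] = field(default_factory=list)
    color: str = "black"
    thickness: float = 1.0

def apply_voice_command(stroke: Stroke, command: str) -> None:
    """Change a character of the drawing input per the voice command."""
    verb, _, arg = command.partition(" ")
    if verb == "color":
        stroke.color = arg                # claim 6: change color
    elif verb == "thickness":
        stroke.thickness = float(arg)     # claim 6: change thickness

stroke = Stroke(points=[(0.0, 0.0), (1.0, 1.0)])
apply_voice_command(stroke, "color red")  # could run while drawing (claim 2)
assert stroke.color == "red"
```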
2,600
10,459
10,459
15,437,609
2,694
The present invention provides a method for reducing interference to a liquid crystal touch screen from a touch driving signal, wherein the liquid crystal touch screen comprises a display composed of multiple pixel horizontal axes, multiple parallel first electrodes, and multiple parallel second electrodes, and multiple intersections are formed by the first electrodes and the second electrodes, the method comprising: concurrently providing a sine wave driving signal to at least one of the first electrodes; and sensing the sine wave driving signal via the multiple second electrodes, wherein the multiple pixel horizontal axes are refreshed sequentially during the time interval of providing the sine wave driving signal.
1. A method for reducing interference to liquid crystal touch screen from touch driving signal, wherein the liquid crystal touch screen comprises a display composed of multiple pixel horizontal axes, multiple parallel first electrodes and multiple parallel second electrodes, multiple intersections are formed by the first electrodes and the second electrodes, the method comprising: concurrently providing sine wave driving signal to at least one of the first electrodes; and sensing the sine wave driving signal via the multiple second electrodes, wherein the multiple pixel horizontal axes are refreshed sequentially during the time interval of providing sine wave driving signal. 2. The method of claim 1, wherein the step of concurrently providing sine wave driving signal to at least one of the first electrodes further comprises concurrently providing the sine wave driving signal to all of the first electrodes. 3. The method of claim 1, wherein the multiple parallel first electrodes are parallel to the pixel horizontal axes. 4. The method of claim 3, wherein at least one of the pixel horizontal axes refreshed sequentially is covered by the first electrode. 5. The method of claim 1, wherein the liquid crystal touch screen is structured as “in-cell” form. 6. A touch sensitive processor for reducing interference to liquid crystal touch screen from touch driving signal, wherein the liquid crystal touch screen comprises a display composed of multiple pixel horizontal axes, multiple parallel first electrodes and multiple parallel second electrodes, multiple intersections are formed by the first electrodes and the second electrodes, the touch sensitive processor comprising: a driving circuit for concurrently providing sine wave driving signal to at least one of the first electrodes; and a sensing circuit for sensing the sine wave driving signal by the multiple second electrodes, wherein the multiple pixel horizontal axes are refreshed sequentially by a display controller during the time interval of providing sine wave driving signal. 7. The touch sensitive processor of claim 6, wherein the step of concurrently providing by the driving circuit further comprises concurrently providing sine wave driving signal to all of the first electrodes. 8. The touch sensitive processor of claim 6, wherein the multiple parallel first electrodes are parallel to the pixel horizontal axes. 9. The touch sensitive processor of claim 8, wherein at least one of the pixel horizontal axes refreshed sequentially is covered by the first electrode. 10. The touch sensitive processor of claim 6, wherein the liquid crystal touch screen is structured as “in-cell” form. 11. An electronic system for reducing interference to liquid crystal touch screen from touch driving signal, comprising: a liquid crystal touch screen comprising: a display composed of multiple pixel horizontal axes; multiple parallel first electrodes; and multiple parallel second electrodes, wherein multiple intersections are formed by the first electrodes and the second electrodes; a display controller for sequentially refreshing the pixel horizontal axes; and a touch sensitive processor comprising: a driving circuit for concurrently providing sine wave driving signal to at least one of the first electrodes; and a sensing circuit for sensing the sine wave driving signal by the multiple second electrodes, wherein the multiple pixel horizontal axes are refreshed sequentially by the display controller during the time interval of providing sine wave driving signal. 12. 
The electronic system of claim 11, wherein the step of concurrently providing by the driving circuit further comprises concurrently providing sine wave driving signal to all of the first electrodes. 13. The electronic system of claim 11, wherein the multiple parallel first electrodes are parallel to the pixel horizontal axes. 14. The electronic system of claim 13, wherein at least one of the pixel horizontal axes refreshed sequentially is covered by the first electrode. 15. The electronic system of claim 11, wherein the liquid crystal touch screen is structured as “in-cell” form.
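One plausible reading of the claim-1 scheme is lock-in style sensing: because the drive is a pure sine, each second electrode can correlate its received samples against the known drive frequency, which rejects the asynchronous display-refresh interference. The sketch below demonstrates that principle; the sample rate, drive frequency, coupling values, and noise model are all invented for illustration.

```python
# Sketch: sine-wave drive with correlation (lock-in) sensing on one electrode.
import numpy as np

fs = 1_000_000.0                   # ADC sample rate, Hz (assumed)
f_drive = 100_000.0                # sine drive frequency, Hz (assumed)
n = 250                            # 25 full drive periods per sensing window
t = np.arange(n) / fs
drive = np.sin(2 * np.pi * f_drive * t)   # sine wave driving signal

def sense(coupling: float, noise_rms: float = 0.01) -> float:
    """Estimate the drive amplitude coupled onto one second electrode."""
    rng = np.random.default_rng(1)
    rx = coupling * drive + noise_rms * rng.normal(size=n)
    # Correlate against quadrature references at the drive frequency;
    # off-frequency interference averages out over full periods.
    i = 2.0 * np.mean(rx * np.cos(2 * np.pi * f_drive * t))
    q = 2.0 * np.mean(rx * np.sin(2 * np.pi * f_drive * t))
    return float(np.hypot(i, q))

# A touch reduces the mutual coupling at an electrode intersection.
assert sense(coupling=0.85) < sense(coupling=1.00)
```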
2,600
10,460
10,460
15,297,271
2,631
According to an embodiment of the present disclosure, there is provided a wearable electronic device, comprising: a first sensor configured to sense a movement of the electronic device; a second sensor configured to sense a biological signal of a user wearing the electronic device; and a processor configured to compute a movement value of the electronic device using the first sensor, to detect a resting state when the movement value stays within a predetermined first threshold range during a first time period, and to configure biological information of the user based on a biological signal measured after detection of the resting state.
1. A wearable electronic device, comprising: a first sensor configured to sense a movement of the electronic device; a second sensor configured to sense a biological signal of a user wearing the electronic device; and a processor configured to receive, via the first sensor, information relating to the movement of the electronic device, and to determine a pulse rate of the user based on the biological signal at a first time if a movement of the electronic device is less than a predetermined first threshold range during a first time period before the first time based on the information received from the first sensor. 2. The wearable electronic device of claim 1, wherein the electronic device comprises a wrist watch, and the processor is configured to determine whether the wrist watch is currently being worn by a user. 3. The wearable electronic device of claim 1, wherein, if a movement of the electronic device is greater than the first threshold range during the first time period, the processor is configured to determine a pulse rate of the user at a later time after the first time. 4. The wearable electronic device of claim 1, wherein the processor is configured to activate the second sensor to sense the biological signal if a movement of the electronic device is less than a predetermined first threshold range during a first time period. 5. A method for measuring biological information using a wearable electronic device, the method comprising: receiving information relating to a movement of the electronic device; determining whether a movement of the electronic device is less than a predetermined first threshold range during a first time period based on the information relating to a movement of the electronic device; receiving a biological signal of a user wearing the electronic device; and determining a pulse rate of the user based on the biological signal at a first time if a movement of the electronic device is less than a predetermined first threshold range during a first time period before the first time based on the information relating to a movement of the electronic device. 6. The method of claim 5, wherein the electronic device comprises a wrist watch, and the method further comprises determining whether the wrist watch is currently being worn by a user. 7. The method of claim 5, further comprising, if a movement of the electronic device is greater than the first threshold range during the first time period, determining a pulse rate of the user at a later time after the first time. 8. The method of claim 5, wherein receiving the biological signal of the user is initiated if a movement of the electronic device is less than a predetermined first threshold range during a first time period.
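The logic of claims 1, 3, and 5 amounts to a gating condition: report a pulse rate only if movement stayed below a threshold for the whole preceding window, otherwise defer. The sketch below illustrates that gate; the threshold, window length, sample source, and the pulse value itself are all assumed placeholders.

```python
# Sketch: gate pulse-rate measurement on a preceding low-movement window.
from collections import deque

MOVE_THRESHOLD = 0.05   # accelerometer magnitude threshold (assumed units)
WINDOW = 50             # samples in the "first time period" (assumed)

movement: deque[float] = deque(maxlen=WINDOW)

def on_motion_sample(value: float) -> None:
    movement.append(value)

def at_rest() -> bool:
    """True when movement stayed under threshold for the full window."""
    return len(movement) == WINDOW and max(movement) < MOVE_THRESHOLD

def maybe_measure_pulse(biosignal_bpm: float) -> float | None:
    # Claim 3: if the wearer moved, defer the measurement to a later time.
    return biosignal_bpm if at_rest() else None

for _ in range(WINDOW):
    on_motion_sample(0.01)                  # wearer is still
assert maybe_measure_pulse(62.0) == 62.0    # measurement allowed
on_motion_sample(0.50)                      # a jolt invalidates the window
assert maybe_measure_pulse(62.0) is None    # measurement deferred
```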
2,600
10,461
10,461
14,619,815
2,621
Methods, devices, and systems are provided that enable a user to discreetly provide input to a computer system via a handheld input control device. The input control device is physically discrete, or separate, from the computer and is configured to provide input based on one or more of an orientation of the device and a disposition of a user's digits on the device. The device can continually and dynamically reconfigure itself based on a recognizable pattern, or locational arrangement, associated with a user's hand. For example, the device can determine where the features of a user's hand are, on or about the device, at any point in time. The device can then map, or remap, various input sensors to match the locational arrangement of features in an ad hoc manner when the device is grasped.
1. An input control device, comprising: a contact surface separated into a plurality of contact areas; one or more input sensors disposed adjacent to each contact area and configured to receive user input therefrom; and a controller operatively connected to the one or more input sensors and configured to determine a baseline locational arrangement of one or more input entities relative to one another and dynamically assign input sensors adjacent to the one or more input entities to receive input based on the baseline locational arrangement. 2. The input control device of claim 1, wherein the controller is further configured to determine a baseline orientation of the input control device, and wherein the baseline locational arrangement and the baseline orientation define a baseline operational condition of the input control device. 3. The input control device of claim 2, wherein the controller is further configured to provide a control instruction based on a difference between the baseline operational condition of the input control device and at least one of contact information corresponding to a disposition of the one or more input entities adjacent to the contact surface and an orientation of the input control device. 4. The input control device of claim 3, wherein the one or more input entities correspond to digits of a hand of a user, and wherein the baseline locational arrangement corresponds to measured distances between portions of the digits of the hand contacting the input control device. 5. The input control device of claim 4, wherein the baseline locational arrangement defines which individual digits of the hand are allowed to provide input to the input control device. 6. The input control device of claim 4, wherein upon moving the input control device relative to the hand of the user such that the digits contact other input sensors of the one or more input sensors, the controller is further configured to dynamically assign the other contacted input sensors adjacent to the digits to receive input based on the baseline locational arrangement. 7. The input control device of claim 4, wherein the control instruction is at least partially based on the difference between the baseline orientation and a changed orientation of the input control device, wherein the changed orientation of the input control device corresponds to at least one of a pitch, a roll, and a yaw of the input control device. 8. The input control device of claim 4, further comprising: one or more orientation sensors configured to provide at least one of the baseline orientation of the input control device and a changed orientation of the input control device based on a measurement of a device reference relative to a gravity vector reference. 9. The input control device of claim 4, wherein the one or more input sensors include at least one of a pressure sensor, piezoelectric sensor or transducer, capacitive sensor, potentiometric transducer, inductive pressure transducer, strain gauge, displacement transducer, resistive touch surface, capacitive touch surface, image sensor, camera, temperature sensor, and IR sensor. 10. The input control device of claim 4, wherein the input control device is configured as a substantially ellipsoidal or ovoid shape. 11. The input control device of claim 4, further comprising: a communications module configured to provide the control instruction to a computer system communicatively connected to the input control device. 12. 
The input control device of claim 4, wherein the control instruction is based at least partially on a pressure associated with one or more digits contacting a particular contact area of the input control device. 13. A method of configuring an input control device, comprising: determining a baseline locational arrangement of one or more input entities relative to one another based on information provided via one or more input sensors of the input control device; and assigning, dynamically and in response to determining the baseline locational arrangement, input sensors adjacent to the one or more input entities to receive input based on the baseline locational arrangement. 14. The method of claim 13, further comprising: determining a baseline orientation of the input control device, and wherein the baseline locational arrangement and the baseline orientation define a baseline operational condition of the input control device. 15. The method of claim 14, further comprising: providing a control instruction based on a difference between the baseline operational condition of the input control device and at least one of contact information corresponding to a disposition of the one or more input entities adjacent to the contact surface and an orientation of the input control device. 16. The method of claim 15, wherein prior to providing the control instruction, the method further comprises: determining which individual entities of the one or more entities are allowed to provide input to the input control device. 17. The method of claim 13, further comprising: initiating an operational timer upon receiving a contact from the one or more input entities, wherein the operational timer includes an expiration value; determining whether the operational timer has expired; and reducing a power consumption of the input control device when the operational timer has expired. 18. The method of claim 13, wherein the one or more input entities correspond to digits of a hand of the user, and wherein the baseline locational arrangement corresponds to measured distances between contacting portions of the digits of the hand. 19. The method of claim 18, wherein upon moving the input control device relative to the hand of the user such that the digits contact other input sensors of the one or more input sensors, the method further comprises: dynamically assigning the other contacted input sensors adjacent to the digits to receive input based on the baseline locational arrangement. 20. A computer control system, comprising: an input control device, comprising: a nonplanar contact surface having a plurality of contact areas; one or more input sensors disposed adjacent to each contact area, the one or more input sensors having an unassigned input functionality; and a controller operatively connected to the one or more input sensors and configured to determine a baseline locational arrangement of one or more input entities relative to one another and dynamically assign an input functionality to input sensors adjacent to the one or more input entities such that the one or more input sensors are configured to receive input based on the baseline locational arrangement; and a computer system having at least one of an audio, a video, and a haptic output, wherein the computer system is configured to receive input provided via the input sensors adjacent to the one or more input entities and translate the input provided to the at least one of the audio, the video, and the haptic output.
2,600
10,462
10,462
13,968,232
2,619
The projection of interactive images in which different images are pre-edited so that, when projected, each image is better suited for viewing from a particular perspective. Thus, a variety of images might be projected such that some are suitable for one perspective, some are suitable for another perspective, and so forth. For instance, one image might be edited so that, when projected, the projected first image is presented for better viewing from a first perspective. Another image might be edited so that, when projected, the projected second image is presented for better viewing from a second perspective.
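One standard way to realize this kind of per-perspective pre-editing is to apply the inverse of the keystone distortion as a projective warp before projection. The sketch below uses OpenCV; the corner coordinates are made-up placeholders that, in practice, would come from calibrating the projector against each viewer's position.

```python
import cv2
import numpy as np

# Sketch: pre-warp an image so that, after off-axis projection, it appears
# rectangular from one viewer's perspective. The observed corners below are
# illustrative; real values come from projector/viewpoint calibration.

def prewarp_for_perspective(image, observed_corners):
    """observed_corners: where the projected image's corners appear to the
    viewer (top-left, top-right, bottom-right, bottom-left). Applying the
    inverse mapping ahead of time cancels the keystone distortion."""
    h, w = image.shape[:2]
    rect = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    observed = np.float32(observed_corners)
    # Homography that models the keystone distortion...
    distortion = cv2.getPerspectiveTransform(rect, observed)
    # ...and its inverse, applied before projection to cancel it.
    correction = np.linalg.inv(distortion)
    return cv2.warpPerspective(image, correction, (w, h))

frame = np.zeros((480, 640, 3), np.uint8)
corrected = prewarp_for_perspective(
    frame, [(40, 0), (600, 30), (640, 480), (0, 450)])  # made-up corners
```

A second image intended for a second perspective would simply be pre-warped with a different set of observed corners.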
1. A computer program product comprising one or more computer-readable storage media having thereon computer-executable instructions that are structured such that, when executed by one or more processors of a computing system, cause the computing system to perform a method comprising: an act of editing a first image so that when projected, the projected first image is presented for better viewing from a first perspective; an act of editing a second image so that when projected, the projected second image is presented for better viewing from a second perspective; an act of detecting a first image input event using first captured data representing user interaction with the projected first image; and an act of detecting a second image input event using second captured data representing user interaction with the projected second image. 2. The computer program product in accordance with claim 1, the act of editing the first image occurring in a manner in which keystoning is reduced when viewed from a first angle; and the act of editing the second image occurring in a manner in which keystoning is reduced when viewed from a second angle different than the first angle. 3. The computer program product in accordance with claim 1, the act of editing the first image occurring in a manner in which keystoning is reduced when viewed from a first angle and when the projected first image is projected onto a surface that is not perpendicular to the direction of projection; and the act of editing the second image occurring in a manner in which keystoning is reduced when viewed from a second angle different than the first angle and when the projected first image is projected onto the surface that is not perpendicular to the direction of projection. 4. The computer program product in accordance with claim 1, wherein the first and second image are the same image prior to the acts of editing. 5. The computer program product in accordance with claim 4, the act of editing the first image comprising removing image data from a first portion of the first image; and the act of editing the second image comprising removing image data from a second portion of the second image. 6. The computer program product in accordance with claim 1, the first image and the second image are each dynamic images having a plurality of frames, wherein the frames of the first image are interleaved with the frames of the second image, such that the first perspective is through a first shuttering system that permits the frames of the first image to be viewed but not the frames of the second image, and such that the second perspective is through a second shuttering system that permits the frames of the second image to be viewed but not the frames of the first image. 7. The computer program product in accordance with claim 6, wherein the first image is a three-dimensional image such that a portion of the frames of the first image are to be viewed by a left eye of a user through the first shuttering system, and such that a portion of the frames of the first image are to be viewed by a right eye of the user through the first shuttering system. 8. The computer program product in accordance with claim 7, wherein the second image is also a three-dimensional image such that a portion of the frames of the second image are to be viewed by a left eye of a second user through the second shuttering system, and such that a portion of the frames of the second image are to be viewed by a right eye of the second user through the second shuttering system. 9. 
The computer program product in accordance with claim 1, the method further comprising: an act of detecting an object in a field of projection of the projected first image; in response to the act of detecting the object in the field of projection, the act of editing the first image includes an act of editing the first image such that a portion of the first image corresponding to a location of the detected object is modified. 10. The computer program product in accordance with claim 9, wherein the portion of the first image is modified such that the detected object has a certain color. 11. The computer program product in accordance with claim 9, wherein the portion of the first image is modified such that the detected object has at least one control displayed thereon, the user interaction with the projected first image comprising the user interacting with the control projected on the detected object. 12. The computer program product in accordance with claim 11, wherein the detected object is a hand, wherein the portion of the first image is modified such that the hand has a plurality of controls displayed thereon, each control corresponding to a predetermined portion of the hand. 13. The computer program product in accordance with claim 9, wherein the portion of the first image is modified such that the detected object has displayed thereon image data that is obscured by the detected object. 14. The computer program product in accordance with claim 9, wherein the editing of the first image is performed so as to incorporate one or more user preferences of a user that is to view the projected image from the first perspective. 15. A method comprising: an act of a computing system editing a first image so that when projected, the projected first image is presented for better viewing from a first perspective as compared to a second perspective; an act of a projection system projecting the first image onto a surface; an act of a camera system capturing first captured data representing user interaction with the projected first image; an act of the computing system detecting a first image input event using the first captured data representing user interaction; an act of the computing system editing a second image so that when projected, the projected second image is presented for better viewing from the second perspective as compared to the first perspective; and an act of the projection system projecting the second image onto the surface. 16. The method in accordance with claim 15, further comprising: an act of the camera system capturing second captured data representing user interaction with the projected second image; and an act of the computing system detecting a second image input event using the second captured data representing user interaction. 17. 
The method in accordance with claim 15, the act of the projection system projecting the first image onto the surface comprising: an act of using a first projector to project the first image onto the surface from a first angle and having a first field of projection; and an act of using a second projector to project the first image onto the surface from a second angle and having a second field of projection, such that the first and second fields of projection converge on the surface; and an act of detecting an object in either or both of the first and second fields of projection of the projected first image; in response to the act of detecting the object, the act of editing the first image includes an act of editing the first image for at least one of the fields of projection so as not to provide non-convergent versions of the first image onto the detected object. 18. A system comprising: a projection system; a camera system; and a control system, wherein the control system is configured to perform the following method: an act of editing a first image so that when projected by the projection system on a surface, the projected first image is presented for better viewing from a first perspective; an act of editing a second image so that when projected by the projection system on the surface, the projected second image is presented for better viewing from a second perspective; an act of detecting a first image input event using first captured data representing user interaction with the projected first image, the first captured data captured by the camera system; and an act of detecting a second image input event using second captured data representing user interaction with the projected second image, the second captured data captured by the camera system. 19. The system in accordance with claim 18, wherein the projection system, the camera system, and the control system are integrated and are designed to sit on a same flat surface onto which the projection system projects. 20. The system in accordance with claim 19, wherein the projection system is configured to be attached to a ceiling.
2,600
10,463
10,463
15,129,218
2,653
In cases of rendering a multichannel signal, such as a 22.2-channel signal, as a 5.1-channel signal, a three-dimensional (3D) audio signal may be reproduced using a two-dimensional (2D) output channel layout. The rendered audio signals, however, are sensitive to the layout of the speakers, and the sound image may be distorted when the layout of the arranged speakers differs from the standard layout. The present invention may solve the aforementioned problem of the prior art. The audio signal rendering method for reducing distortion of a sound image even when the layout of the arranged speakers differs from the standard layout, according to one embodiment of the present invention, includes: receiving a multi-channel signal including a plurality of input channels that are to be converted to a plurality of output channels; obtaining deviation information about at least one output channel from a location of a speaker and a standard location corresponding to each of the plurality of output channels; and modifying a panning gain from a height channel included in the plurality of input channels to the output channel having the deviation information, based on the obtained deviation information.
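A minimal sketch of the panning-gain idea, assuming constant-power stereo panning and an invented elevation-compensation rule (the patent's actual compensation is not reproduced here); it keeps the sum of squared gains equal to 1, as claim 28 below requires.

```python
import math

# Illustrative constant-power panning of a height input channel into a
# left/right horizontal pair, with an assumed elevation-compensation rule.

def pan_gains(azimuth_deg, left_deg=30.0, right_deg=-30.0):
    """Constant-power pan between two speakers at left_deg and right_deg."""
    t = (left_deg - azimuth_deg) / (left_deg - right_deg)  # 0 at left, 1 at right
    t = min(max(t, 0.0), 1.0)
    return math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)

def compensate_elevation(g_left, g_right, elevation_dev_deg, strength=0.5):
    """Pull the gains toward equal level in proportion to the elevation
    deviation (an assumed rule), then renormalize to constant power."""
    w = strength * min(abs(elevation_dev_deg) / 90.0, 1.0)
    mid = math.sqrt(0.5)
    g_left = (1 - w) * g_left + w * mid
    g_right = (1 - w) * g_right + w * mid
    norm = math.hypot(g_left, g_right)
    return g_left / norm, g_right / norm

gl, gr = pan_gains(azimuth_deg=10.0)          # height channel at 10 degrees
gl, gr = compensate_elevation(gl, gr, 20.0)   # speaker 20 degrees too high
assert abs(gl * gl + gr * gr - 1.0) < 1e-9    # constant-power constraint holds
```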
1-19. (canceled) 20. A method of rendering an audio signal, the method comprising: receiving multi-channel signals including one or more height input channels, to be converted from input channel configurations to output channel configurations; obtaining a panning gain for a height input channel to be converted into an output channel based on a standard loudspeaker position; obtaining deviation information about the output channel, from an output loudspeaker position and the standard loudspeaker position; and modifying the obtained panning gain based on the obtained deviation information and the standard loudspeaker position. 21. The method of claim 20, wherein the deviation information includes at least one of an elevation deviation and an azimuth deviation, and wherein the panning gain is modified to keep a central image corresponding to an azimuth of the height input channel. 22. The method of claim 20, wherein a plurality of output channels included in the output channel configurations are horizontal channels. 23. The method of claim 20, wherein the output channel comprises at least one of a left horizontal channel and a right horizontal channel. 24. The method of claim 21, wherein the modifying of the panning gain compensates an effect caused by an elevation deviation, when the obtained deviation information comprises the elevation deviation. 25. The method of claim 21, wherein the modifying of the panning gain compensates the panning gain by a two-dimensional (2D) panning method, when the obtained deviation information does not comprise the elevation deviation. 26. The method of claim 24, wherein the compensating of the effect caused by the elevation deviation comprises compensating an inter-aural level difference (ILD) resulting from the elevation deviation. 27. The method of claim 23, wherein the modified panning gain is proportional to the obtained elevation deviation. 28. The method of claim 20, wherein a sum of square values of modified panning gains with respect to a plurality of output channels included in the output channel configurations for each of a plurality of input channels included in the input channel configurations is 1. 29. An apparatus for rendering an audio signal, the apparatus comprising: a receiver configured to receive multi-channel signals including one or more height input channels, to be converted from input channel configurations to output channel configurations; a deviation obtaining unit configured to obtain deviation information about an output channel, from an output loudspeaker position and a standard loudspeaker position; and a panning gain obtaining unit configured to obtain a panning gain for a height input channel to be converted into the output channel based on the standard loudspeaker position and to modify the obtained panning gain based on the deviation information and the standard loudspeaker position. 30. The apparatus of claim 29, wherein the deviation information includes at least one of an elevation deviation and an azimuth deviation, and wherein the panning gain is modified to keep a central image corresponding to an azimuth of the height input channel. 31. The apparatus of claim 29, wherein the plurality of output channels are horizontal channels. 32. The apparatus of claim 29, wherein the output channel comprises at least one of a left horizontal channel and a right horizontal channel. 33. 
The apparatus of claim 30, wherein the panning gain obtaining unit compensates an effect caused by an elevation deviation, when the obtained deviation information comprises the elevation deviation. 34. The apparatus of claim 30, wherein the panning gain obtaining unit compensates the panning gain by a two-dimensional (2D) panning method, when the obtained deviation information does not comprise the elevation deviation. 35. The apparatus of claim 33, wherein the panning gain obtaining unit compensates an inter-aural level difference caused by the elevation deviation to compensate an effect caused by the elevation deviation. 36. The apparatus of claim 30, wherein the panning gain is proportional to the obtained elevation deviation. 37. The apparatus of claim 29, wherein a sum of square values of modified panning gains with respect to a plurality of output channels included in the output channel configurations for each of a plurality of input channels included in the input channel configurations is 1. 38. A computer-readable recording medium having recorded thereon a computer program for executing the method of claim 20.
2,600
10,464
10,464
11,094,823
2,628
During handwriting recognition input, when the user needs to enter, e.g., a digit or a commonly used punctuation symbol (e.g., a comma or a period) into a text, he or she can simply single-click or double-click the corresponding “shortcut” sub-area on the touchpad without leaving the existing input mode, and the digit or punctuation symbol is input. The “shortcuts” of the grid are advantageously arranged like the dial keys of a telephone, a layout that is easy to remember.
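A minimal Python sketch of the telephone-keypad shortcut grid: a tap inside the handwriting area is mapped to a digit by its row and column, and a double-click selects an alternate symbol. The punctuation assignments are hypothetical; the abstract only names the comma and period as examples.

```python
# Sketch: map a tap inside the handwriting-recognition area to a "shortcut"
# symbol laid out like a telephone keypad (1-2-3 / 4-5-6 / 7-8-9 / *-0-#).
# A single-click yields the digit; a double-click selects an alternate
# symbol such as a punctuation mark (the alternate mapping is made up).

KEYPAD = [["1", "2", "3"],
          ["4", "5", "6"],
          ["7", "8", "9"],
          ["*", "0", "#"]]

def tap_to_symbol(x, y, area_w, area_h, double_click=False):
    """x, y: tap position within the handwriting area of size area_w x area_h."""
    col = min(int(x / area_w * 3), 2)   # three columns of sub-areas
    row = min(int(y / area_h * 4), 3)   # four rows of sub-areas
    symbol = KEYPAD[row][col]
    if double_click:                    # hypothetical alternate assignments
        return {"1": ",", "2": ".", "3": "?"}.get(symbol, symbol)
    return symbol

assert tap_to_symbol(10, 10, 300, 400) == "1"
assert tap_to_symbol(290, 390, 300, 400) == "#"
assert tap_to_symbol(10, 10, 300, 400, double_click=True) == ","
```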
1. A method of controlling a mobile communication terminal, said terminal comprising a control unit, a display and touch sensitive means that are configured to recognize handwriting via trace signals generated as a user writes within a handwriting recognition area on the touch sensitive means, said method comprising, while sensing and analyzing a trace signal comprising a sequence of position detections: recognizing a tapping action at a detected action position, selecting a specific symbol from a set of predetermined symbols, said selection being performed in dependence on the detected action position, and providing the selected symbol for use in said control of the mobile communication terminal. 2. The method of claim 1 wherein the symbols in said set of predetermined symbols are associated with a respective sub-area within the handwriting recognition area and wherein the selection is performed in dependence on within which sub-area the action position is. 3. The method of claim 2 wherein said sub-areas are arranged in rows and columns within the handwriting recognition area. 4. The method of claim 1 wherein the tapping action is a single-click action. 5. The method of claim 1 wherein the tapping action is a double-click action. 6. The method of claim 1 wherein the predetermined set of symbols comprises the sequence of digits 0 to 9. 7. The method of claim 1 wherein the predetermined set of symbols comprises at least one punctuation symbol. 8. The method of claim 1 wherein the control of the terminal comprises a text editing operation. 9. A mobile communication terminal comprising a control unit, a display and touch sensitive means that are configured to recognize handwriting via trace signals generated as a user writes within a handwriting recognition area on the touch sensitive means, said terminal comprising: control means for sensing and analyzing a trace signal comprising a sequence of position detections, recognizing means for recognizing a tapping action at a detected action position, selection means for selecting a specific symbol from a set of predetermined symbols, said selection being performed in dependence on the detected action position, and provision means for providing the selected symbol for use in said control of the mobile communication terminal. 10. The mobile communication terminal of claim 9 wherein the touch sensitive means are arranged in combination with the display thereby constituting a touch sensitive display. 11. The mobile communication terminal of claim 9 wherein the touch sensitive means are arranged separately with respect to the display thereby constituting a touch sensitive pad. 12. A computer program comprising software instructions that, when executed in a mobile communication terminal comprising a control unit, a display and touch sensitive means that are configured to recognize handwriting via trace signals generated as a user writes within a handwriting recognition area on the touch sensitive means, controls the terminal while sensing and analyzing a trace signal comprising a sequence of position detections by recognizing a tapping action at a detected action position, selecting a specific symbol from a set of predetermined symbols, said selection being performed in dependence on the detected action position, and providing the selected symbol for use in said control of the mobile communication terminal.
2,600
10,465
10,465
14,965,544
2,699
The present invention relates to interfaces and methods for producing input for software applications based on the absolute pose of an item manipulated or worn by a user in a three-dimensional environment. Absolute pose in the sense of the present invention means both the position and the orientation of the item as described in a stable frame defined in that three-dimensional environment. The invention describes how to recover the absolute pose with optical hardware and methods, and how to map at least one of the recovered absolute pose parameters to the three translational and three rotational degrees of freedom available to the item to generate useful input. The applications that can most benefit from the interfaces and methods of the invention involve 3D virtual spaces including augmented reality and mixed reality environments.
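As an illustration of the optical pose-recovery step, the sketch below uses OpenCV's solvePnP to obtain all six degrees of freedom of a camera from four known non-collinear features on a stationary object. The feature coordinates and intrinsics are made-up values, and this is a generic technique; the patent's own pipeline (homography recovery plus transformation) is not reproduced here.

```python
import cv2
import numpy as np

# Illustrative pose recovery: four known, non-collinear (here coplanar)
# features on a stationary object are imaged by the on-board camera, and
# solvePnP returns the full six-degree-of-freedom absolute pose expressed
# in the object's stable frame. All numbers are made-up placeholders.

object_pts = np.float32([[0.0, 0.0, 0.0], [0.4, 0.0, 0.0],
                         [0.4, 0.25, 0.0], [0.0, 0.25, 0.0]])  # meters
image_pts = np.float32([[321, 210], [512, 223],
                        [505, 340], [318, 331]])               # pixels

K = np.array([[800.0, 0.0, 320.0],     # assumed focal lengths and principal
              [0.0, 800.0, 240.0],     # point; a real device is calibrated
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                # 3x3 rotation: three rotational DoF
print("rotation:\n", R)
print("translation (m):", tvec.ravel())   # three translational DoF
```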
1. An interface for producing an input from an absolute pose of a first item associated with a user in a three-dimensional environment, said interface comprising: (a) a unit on-board said first item, said unit configured to receive non-collinear optical inputs presented by at least one stationary object in said three-dimensional environment, said at least one stationary object having at least one feature detectable via an electromagnetic radiation, said at least one feature presenting said non-collinear optical inputs for establishing a stable frame in said three-dimensional environment; (b) processing electronics for recovering a homography, said homography mapping said non-collinear optical inputs to a reference frame, said processing electronics further generating a signal related to a first element selected from the group consisting of said homography and a transformation of said homography; (c) an application employing said signal in said input; wherein said absolute pose comprises at least three translational degrees of freedom and at least three rotational degrees of freedom. 2. The interface of claim 1, wherein said at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom comprises two translational degrees of freedom defining a plane in said three-dimensional environment. 3. The interface of claim 1, wherein said at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom comprises three translational degrees of freedom defining a volume in said three-dimensional environment. 4. The interface of claim 3, further comprising a three dimensional display and wherein said volume corresponds to a virtual display volume of said three-dimensional display. 5. The interface of claim 1, wherein said at least three translational degrees of freedom and said at least three rotational degrees of freedom comprise three mutually independent translational degrees of freedom and three mutually independent rotational degrees of freedom. 6. The interface of claim 1, further comprising a feedback unit for providing a feedback to said user in response to at least one portion of said homography. 7. The interface of claim 1, further comprising a relative motion sensor onboard said item for producing data indicative of a change in a second element selected from the group consisting of said homography and a transformation of said homography. 8. The interface of claim 1, wherein said at least one stationary object is selected from the group consisting of a game console, a television, a stereo, an electronic picture frame, a computer, a tablet, an RF transmitter unit, a set-top box, a base station, a portable user device having a display, a non-portable user device having a display, an appliance, a road sign, a billboard, a landmark, a geographical sign and a navigational sign. 9. The interface of claim 1, wherein said non-collinear optical inputs are selected from the group consisting of point-like inputs, line-like inputs, area-like inputs and volume-like inputs. 10. The interface of claim 1, wherein said three-dimensional environment is selected from the group of environments consisting of real space, a cyberspace, a virtual space, an augmented reality space and a mixed space. 11. The interface of claim 1, wherein said first item is selected from the group consisting of a manipulated item and a wearable item. 12. 
The interface of claim 11, wherein said first item is a manipulated item selected from the group consisting of wands, remote controls, portable phones, portable electronic devices, medical implements, digitizers, handheld tools, hand held clubs, gaming controls, gaming items, digital inking devices, pointers, remote touch devices, TV remotes and magic wands. 13. The interface of claim 11, wherein said first item is a wearable item selected from the group consisting of glasses, goggles, gloves, a head-mounted display (HMD), items affixed on glasses, items affixed on gloves, items affixed on headgear, items affixed on rings, items affixed on watches, items affixed on articles of clothing, items affixed on accessories, items affixed on jewelry and items affixed on accoutrements. 14. The interface of claim 11, wherein said input is used to control a second item selected from the group consisting of a game console, a television, a stereo, an electronic picture frame, a computer, a tablet, an RF transmitter unit, a set-top box, a base station, a portable user device having a display, a non-portable user device having a display, an appliance, a road sign, a billboard, a landmark, a geographical sign and a navigational sign. 15. The interface of claim 1, wherein said application is selected from the group consisting of a virtual reality application, an augmented reality application and a mixed reality application, and said homography is used to render visual information onto a second item selected from the group consisting of a real surface, a real display, a virtual surface, a virtual display, a superposed display, a superimposed display and an overlay graphics display. 16. The system of claim 15, wherein said second item is affixed to a second element selected from the group consisting of a part of a vehicle, a sign and the ground. 17. The system of claim 16, wherein said second element is a part of a vehicle selected from the group consisting of a dashboard, a steering implement, a windshield and said vehicle is selected from the group consisting of a car, a truck, a Sports Utility Vehicle (SUV), a van, a motorcycle, a scooter, a bicycle, a tricycle, a train engine, an aircraft and a boat. 18. The system of claim 16, wherein said second element is a sign selected from the group consisting of a road sign, a billboard, a construction sign, a manufacturing sign, an airport sign, a railroad sign, a facility sign and a navigational sign. 19. 
A method for producing an input from an absolute pose of an item associated with a user in a three-dimensional environment, said method comprising: (a) placing in said three-dimensional environment at least one stationary object presenting at least one feature in said three-dimensional environment, said at least one feature presenting non-collinear optical inputs detectable via an electromagnetic radiation to establish a stable frame in said three-dimensional environment; (b) receiving by a unit on-board said item, said non-collinear optical inputs; (c) recovering with processing electronics a homography, said homography mapping said non-collinear optical inputs to a reference frame; (d) generating a signal related to a first element selected from the group consisting of said homography and a transformation of said homography; (e) communicating said signal via a link to an application for use in said input; wherein said absolute pose comprises at least three translational degrees of freedom and at least three rotational degrees of freedom. 20. The method of claim 19, wherein said transformation is selected from the group consisting of a linear transformation and a matrix operation. 21. The method of claim 19, wherein said input comprises a gesture performed by said user. 22. The method of claim 19, further comprising the steps of: (f) constructing a subspace of said at least three translational degrees of freedom and said at least three rotational degrees of freedom; (g) projecting said first element onto said subspace to obtain a projected portion of said first element; and (h) communicating said projected portion to said application for use in said input. 23. The method of claim 19, further comprising processing said signal to compute an aspect of said item in said application, and optionally providing a feedback to said user depending on said aspect. 24. An interface for producing an input from extrinsic parameters of a camera in a three-dimensional environment, said interface comprising: a) at least one stationary object having at least one feature detectable via an electromagnetic radiation, said at least one feature presenting non-collinear optical inputs for establishing a stable frame in said three-dimensional environment; b) said camera receiving said non-collinear optical inputs; c) processing electronics for recovering a set of intrinsic parameters and a set of extrinsic parameters of said camera, and for generating a signal related to said set of extrinsic parameters; d) an application employing said signal in said input; whereby said extrinsic parameters comprise at least three translational degrees of freedom and at least three rotational degrees of freedom of said camera.
The present invention relates to interfaces and methods for producing input for software applications based on the absolute pose of an item manipulated or worn by a user in a three-dimensional environment. Absolute pose in the sense of the present invention means both the position and the orientation of the item as described in a stable frame defined in that three-dimensional environment. The invention describes how to recover the absolute pose with optical hardware and methods, and how to map at least one of the recovered absolute pose parameters to the three translational and three rotational degrees of freedom available to the item to generate useful input. The applications that can most benefit from the interfaces and methods of the invention involve 3D virtual spaces including augmented reality and mixed reality environments.1. An interface for producing an input from an absolute pose of a first item associated with a user in a three-dimensional environment, said interface comprising: (a) a unit on-board said first item, said unit configured to receive non-collinear optical inputs presented by at least one stationary object in said three-dimensional environment, said at least one stationary object having at least one feature detectable via an electromagnetic radiation, said at least one feature presenting said non-collinear optical inputs for establishing a stable frame in said three-dimensional environment; (b) processing electronics for recovering a homography, said homography mapping said non-collinear optical inputs to a reference frame, said processing electronics further generating a signal related to a first element selected from the group consisting of said homography and a transformation of said homography; (c) an application employing said signal in said input; wherein said absolute pose comprises at least three translational degrees of freedom and at least three rotational degrees of freedom. 2. The interface of claim 1, wherein said at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom comprises two translational degrees of freedom defining a plane in said three-dimensional environment. 3. The interface of claim 1, wherein said at least one among said at least three translational degrees of freedom and said at least three rotational degrees of freedom comprises three translational degrees of freedom defining a volume in said three-dimensional environment. 4. The interface of claim 3, further comprising a three-dimensional display and wherein said volume corresponds to a virtual display volume of said three-dimensional display. 5. The interface of claim 1, wherein said at least three translational degrees of freedom and said at least three rotational degrees of freedom comprise three mutually independent translational degrees of freedom and three mutually independent rotational degrees of freedom. 6. The interface of claim 1, further comprising a feedback unit for providing a feedback to said user in response to at least one portion of said homography. 7. The interface of claim 1, further comprising a relative motion sensor on-board said first item for producing data indicative of a change in a second element selected from the group consisting of said homography and a transformation of said homography. 8. 
The interface of claim 1, wherein said at least one stationary object is selected from the group consisting of a game console, a television, a stereo, an electronic picture frame, a computer, a tablet, an RF transmitter unit, a set-top box, a base station, a portable user device having a display, a non-portable user device having a display, an appliance, a road sign, a billboard, a landmark, a geographical sign and a navigational sign. 9. The interface of claim 1, wherein said non-collinear optical inputs are selected from the group consisting of point-like inputs, line-like inputs, area-like inputs and volume-like inputs. 10. The interface of claim 1, wherein said three-dimensional environment is selected from the group of environments consisting of real space, a cyberspace, a virtual space, an augmented reality space and a mixed space. 11. The interface of claim 1, wherein said first item is selected from the group consisting of a manipulated item and a wearable item. 12. The interface of claim 11, wherein said first item is a manipulated item selected from the group consisting of wands, remote controls, portable phones, portable electronic devices, medical implements, digitizers, handheld tools, handheld clubs, gaming controls, gaming items, digital inking devices, pointers, remote touch devices, TV remotes and magic wands. 13. The interface of claim 11, wherein said first item is a wearable item selected from the group consisting of glasses, goggles, gloves, a head-mounted display (HMD), items affixed on glasses, items affixed on gloves, items affixed on headgear, items affixed on rings, items affixed on watches, items affixed on articles of clothing, items affixed on accessories, items affixed on jewelry and items affixed on accoutrements. 14. The interface of claim 11, wherein said input is used to control a second item selected from the group consisting of a game console, a television, a stereo, an electronic picture frame, a computer, a tablet, an RF transmitter unit, a set-top box, a base station, a portable user device having a display, a non-portable user device having a display, an appliance, a road sign, a billboard, a landmark, a geographical sign and a navigational sign. 15. The interface of claim 1, wherein said application is selected from the group consisting of a virtual reality application, an augmented reality application and a mixed reality application, and said homography is used to render visual information onto a second item selected from the group consisting of a real surface, a real display, a virtual surface, a virtual display, a superposed display, a superimposed display and an overlay graphics display. 16. The interface of claim 15, wherein said second item is affixed to a second element selected from the group consisting of a part of a vehicle, a sign and the ground. 17. The interface of claim 16, wherein said second element is a part of a vehicle selected from the group consisting of a dashboard, a steering implement and a windshield, and said vehicle is selected from the group consisting of a car, a truck, a Sports Utility Vehicle (SUV), a van, a motorcycle, a scooter, a bicycle, a tricycle, a train engine, an aircraft and a boat. 18. The interface of claim 16, wherein said second element is a sign selected from the group consisting of a road sign, a billboard, a construction sign, a manufacturing sign, an airport sign, a railroad sign, a facility sign and a navigational sign. 19. 
A method for producing an input from an absolute pose of an item associated with a user in a three-dimensional environment, said method comprising: (a) placing in said three-dimensional environment at least one stationary object presenting at least one feature, said at least one feature presenting non-collinear optical inputs detectable via an electromagnetic radiation to establish a stable frame in said three-dimensional environment; (b) receiving, by a unit on-board said item, said non-collinear optical inputs; (c) recovering with processing electronics a homography, said homography mapping said non-collinear optical inputs to a reference frame; (d) generating a signal related to a first element selected from the group consisting of said homography and a transformation of said homography; (e) communicating said signal via a link to an application for use in said input; wherein said absolute pose comprises at least three translational degrees of freedom and at least three rotational degrees of freedom. 20. The method of claim 19, wherein said transformation is selected from the group consisting of a linear transformation and a matrix operation. 21. The method of claim 19, wherein said input comprises a gesture performed by said user. 22. The method of claim 19, further comprising the steps of: (f) constructing a subspace of said at least three translational degrees of freedom and said at least three rotational degrees of freedom; (g) projecting said first element onto said subspace to obtain a projected portion of said first element; and (h) communicating said projected portion to said application for use in said input. 23. The method of claim 19, further comprising processing said signal to compute an aspect of said item in said application, and optionally providing a feedback to said user depending on said aspect. 24. An interface for producing an input from extrinsic parameters of a camera in a three-dimensional environment, said interface comprising: a) at least one stationary object having at least one feature detectable via an electromagnetic radiation, said at least one feature presenting non-collinear optical inputs for establishing a stable frame in said three-dimensional environment; b) said camera receiving said non-collinear optical inputs; c) processing electronics for recovering a set of intrinsic parameters and a set of extrinsic parameters of said camera, and for generating a signal related to said set of extrinsic parameters; d) an application employing said signal in said input; whereby said extrinsic parameters comprise at least three translational degrees of freedom and at least three rotational degrees of freedom of said camera.
2,600
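The record above turns on recovering a homography from at least four non-collinear optical inputs and reading the six degrees of freedom of absolute pose out of it. The Python sketch below is illustrative only, not the applicant's implementation: it assumes a pinhole camera with a known intrinsic matrix K and coplanar reference features, solves the homography with a direct linear transform, and decomposes it into rotation (three rotational degrees of freedom) and translation (three translational degrees of freedom).

import numpy as np

def find_homography(world_pts, image_pts):
    # Plain DLT for clarity: builds the standard 2N x 9 system and takes the
    # null vector from the SVD. Needs N >= 4 non-collinear point pairs.
    rows = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale

def pose_from_homography(H, K):
    # Decomposes a plane-to-image homography into the camera extrinsics (R, t)
    # given intrinsics K: the three rotational and three translational DOF.
    A = np.linalg.inv(K) @ H
    scale = 1.0 / np.linalg.norm(A[:, 0])
    r1, r2 = scale * A[:, 0], scale * A[:, 1]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    U, _, Vt = np.linalg.svd(R)  # snap to the nearest true rotation
    R = U @ Vt
    t = scale * A[:, 2]
    return R, t

The SVD re-orthonormalization step guards against noise in the detected optical inputs; a production pipeline would also normalize point coordinates before the DLT and resolve the sign ambiguity of t.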
10,466
10,466
15,524,199
2,600
Disclosed is a process for producing an animation of the multipage type, in which each still image of the animation has scalability properties. Preferably, each still image is to be saved in at least two levels, the first level containing few details and each following level providing additional details to the preceding level.
1-21. (canceled) 22. Process for producing an animation (A) of the multipage type, said animation medium (AM, AG) comprising still images (Pn) with scalability properties, each level providing information of a higher resolution than the preceding level, the process comprising arranging, in the reading order of said medium, all the levels (Nm) of a first rank (m) according to the order in which the images (Pn) of the animation are intended to be played, then all the levels of the immediately higher rank (Nm+1). 23. Process according to claim 22, wherein each of the scalable images is stored in at least two levels (Nm), the first level (N1) comprising the fewest details and each following level (Nm) providing details additional to the preceding level (Nm−1). 24. Process according to claim 23, wherein to play an image (Pn), a level (Nm) is first played before the level of a higher rank (Nm+1). 25. Process according to claim 24, wherein for each image (Pn) the data contained in all the levels available for said image (Pn) are decompressed and displayed. 26. Process according to claim 24, wherein the data corresponding only to the levels available for all the images are decompressed and displayed. 27. Process according to claim 24, wherein the data corresponding to the smallest set of levels enabling at least a given image display resolution to be achieved are decompressed and displayed. 28. Process according to claim 22, wherein an image (Pn) is played according to the data stemming from a level of said image (Pn:Nm) before the data in the level of the higher rank (Pn:Nm+1) have been read and processed. 29. Process according to claim 22, wherein the processed image is resized, before being played, to achieve the display resolution sought. 30. Process according to claim 22, wherein the still images were previously compressed by a method including at least one wavelet transform. 31. Process according to claim 22, wherein the still images were previously compressed by a method including at least one difference between two adjacent pixels. 32. Process according to claim 22, wherein at least one initial still image was compressed using a first compression method, and at least a second still image was compressed using a second compression method. 33. Process according to claim 32, further comprising an analysis stage for each still image so as to determine the most suitable compression method. 34. Process according to claim 33, wherein the first compression method comprises at least one wavelet transform, and the second compression method comprises at least one difference between two adjacent pixels. 35. Process according to claim 34, wherein the analysis stage ranks the images by contrast level, the least contrasted images being compressed according to the first method, and the most contrasted images being compressed according to the second method. 36. Process according to claim 22, wherein the image medium is a file. 37. Medium (AG) for implementing a process according to claim 22, comprising, in the reading order of said medium, all the levels (Nm) of a first rank (m) according to the order in which the animation's images (Pn) are intended to be played, then all the immediately higher levels (Nm+1). 38. Process to select a still image in an animation medium (AM, AG) according to claim 36, wherein a level of a rank (m) is used to display a preview of the medium's (AG) images without reading or processing the images at higher levels (>m). 39. 
Process according to claim 33, wherein a still-image medium is created from the levels of the selected still image. 40. A non-transitory computer readable medium on which is stored a program, which when executed by a computer, causes the computer to produce a multipage type of animation (A), said animation medium (AM, AG) comprising still images (Pn) with scalability properties, each level providing information of a higher resolution than the preceding level, the program causing the computer to arrange, in the reading order of said medium, all the levels (Nm) of a first rank (m) according to the order in which the images (Pn) of the animation are intended to be played, then all the levels of the immediately higher rank (Nm+1). 41. The process of claim 22, wherein the process starts with the first levels (Pn:N1).
Disclosed is a process for producing an animation of the multipage type, in which each still image of the animation has scalability properties. Preferably, each still image is to be saved in at least two levels, the first level containing few details and each following level providing additional details to the preceding level.1-21. (canceled) 22. Process for producing an animation (A) of the multipage type, said animation medium (AM, AG) comprising still images (Pn) with scalability properties, each level providing information of a higher resolution than the preceding level, the process comprising arranging, in the reading order of said medium, all the levels (Nm) of a first rank (m) according to the order in which the images (Pn) of the animation are intended to be played, then all the levels of the immediately higher rank (Nm+1). 23. Process according to claim 22, wherein each of the scalable images is stored in at least two levels (Nm), the first level (N1) comprising the fewest details and each following level (Nm) providing details additional to the preceding level (Nm−1). 24. Process according to claim 23, wherein to play an image (Pn), a level (Nm) is first played before the level of a higher rank (Nm+1). 25. Process according to claim 24, wherein for each image (Pn) the data contained in all the levels available for said image (Pn) are decompressed and displayed. 26. Process according to claim 24, wherein the data corresponding only to the levels available for all the images are decompressed and displayed. 27. Process according to claim 24, wherein the data corresponding to the smallest set of levels enabling at least a given image display resolution to be achieved are decompressed and displayed. 28. Process according to claim 22, wherein an image (Pn) is played according to the data stemming from a level of said image (Pn:Nm) before the data in the level of the higher rank (Pn:Nm+1) have been read and processed. 29. Process according to claim 22, wherein the processed image is resized, before being played, to achieve the display resolution sought. 30. Process according to claim 22, wherein the still images were previously compressed by a method including at least one wavelet transform. 31. Process according to claim 22, wherein the still images were previously compressed by a method including at least one difference between two adjacent pixels. 32. Process according to claim 22, wherein at least one initial still image was compressed using a first compression method, and at least a second still image was compressed using a second compression method. 33. Process according to claim 32, further comprising an analysis stage for each still image so as to determine the most suitable compression method. 34. Process according to claim 33, wherein the first compression method comprises at least one wavelet transform, and the second compression method comprises at least one difference between two adjacent pixels. 35. Process according to claim 34, wherein the analysis stage ranks the images by contrast level, the least contrasted images being compressed according to the first method, and the most contrasted images being compressed according to the second method. 36. Process according to claim 22, wherein the image medium is a file. 37. 
Medium (AG) for implementing a process according to claim 22, comprising, in the reading order of said medium, all the levels (Nm) of a first rank (m) according to the order in which the animation's images (Pn) are intended to be played, then all the immediately higher levels (Nm+1). 38. Process to select a still image in an animation medium (AM, AG) according to claim 36, wherein a level of a rank (m) is used to display a preview of the medium's (AG) images without reading or processing the images at higher levels (>m). 39. Process according to claim 33, wherein a still-image medium is created from the levels of the selected still image. 40. A non-transitory computer readable medium on which is stored a program, which when executed by a computer, causes the computer to produce a multipage type of animation (A), said animation medium (AM, AG) comprising still images (Pn) with scalability properties, each level providing information of a higher resolution than the preceding level, the program causing the computer to arrange, in the reading order of said medium, all the levels (Nm) of a first rank (m) according to the order in which the images (Pn) of the animation are intended to be played, then all the levels of the immediately higher rank (Nm+1). 41. The process of claim 22, wherein the process starts with the first levels (Pn:N1).
2,600
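The ordering rule in claim 22 above (every image's rank-m level written before any rank-m+1 level, images in play order within a rank) is what lets playback start after one pass over the coarse levels. A minimal Python sketch of that interleaving, with hypothetical byte-chunk inputs rather than any specific codec:

def write_medium(pages):
    # pages[n][m] is the byte chunk for still image Pn at level Nm (m = 0 is
    # the coarsest level N1). Emits rank 0 for every image in play order,
    # then rank 1, and so on, per the claimed reading order.
    max_rank = max(len(levels) for levels in pages)
    medium = []
    for m in range(max_rank):
        for n, levels in enumerate(pages):
            if m < len(levels):
                medium.append((n, m, levels[m]))
    return medium

def preview(medium, rank=0):
    # Claim-38-style preview: consume only levels of rank <= rank, never
    # reading or processing any higher-rank data.
    return [(n, chunk) for n, m, chunk in medium if m <= rank]

preview() mirrors claim 38's selection process: a thumbnail pass can stop after rank m without ever touching the higher-rank data later in the medium.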
10,467
10,467
14,559,425
2,625
A laser projection/display apparatus includes a photosensor for detecting light amounts of laser lights, and an image processing unit that processes an image signal based on the detected light amounts, and supplies the image signal to a laser light source drive unit. The image processing unit obtains data for making the light amounts of the laser lights, which are detected by the photosensor, equal to respective values at a second luminance that is different from a first luminance, which is the luminance of the image currently being displayed, during a flyback period of the image signal. The image processing unit processes an image signal to be supplied to the laser light source drive unit based on the data when the image signal is projected and displayed with the second luminance.
1. A laser projection/display apparatus for displaying an image corresponding to an image signal by projecting laser lights of a plurality of colors corresponding to the image signal, comprising: a laser light source that emits the laser lights of the colors; a laser light source drive unit that drives the laser light source so that the laser light source emits laser lights corresponding to the image signal; a scanning unit that scans the laser lights emitted by the laser light source in accordance with a sync signal related to the image signal; a photosensor that detects the light amounts of the laser lights emitted by the laser light source; and an image processing unit that processes the image signal in accordance with the light amounts of the laser lights detected by the photosensor, and supplies the processed image signal to the laser light source drive unit, wherein the image processing unit obtains data used for making the light amounts of the laser lights, which are detected by the photosensor, equal to respective predefined values regarding a plurality of luminance levels during the flyback period of the image signal, and the image processing unit processes the image signal to be supplied to the laser light source drive unit on the basis of the data when the image signal is projected and displayed. 2. The laser projection/display apparatus according to claim 1, wherein the image processing unit obtains data used for making the light amounts of the laser lights, which are detected by the photosensor, equal to respective predefined values regarding a plurality of luminance levels at a second luminance that is different from a first luminance, which is the luminance of the image currently being displayed, during the flyback period of the image signal, and the image processing unit processes an image signal to be supplied to the laser light source drive unit on the basis of the data when the image signal is projected and displayed with the second luminance. 3. The laser projection/display apparatus according to claim 2, further comprising an illuminance sensor that detects the brightness of the periphery of the laser projection/display apparatus, wherein the image processing unit changes the luminance of an image to be displayed from the first luminance to the second luminance in accordance with the brightness detected by the illuminance sensor. 4. The laser projection/display apparatus according to claim 2, wherein the image processing unit changes the luminance of an image to be displayed from the first luminance to the second luminance in accordance with the instructions of a user of the laser projection/display apparatus. 5. The laser projection/display apparatus according to claim 2, wherein the image processing unit changes the gains of signals showing the amounts of the laser lights detected by the photosensor, and processes an image signal to be supplied to the laser light source so that the light amounts of the laser lights become equal to respective predefined values. 6. The laser projection/display apparatus according to claim 5, wherein the gains in the image processing unit during the display period of the image signal are respectively different from the gains during the flyback period of the image signal. 7. The laser projection/display apparatus according to claim 2, wherein the first luminance is a luminance for displaying an image in the bright state of the periphery, and the second luminance is a luminance for displaying an image in the dark state of the periphery. 
8. The laser projection/display apparatus according to claim 2, wherein the image processing unit stores data, which is obtained for setting an image signal to be supplied to the laser light source, in a lookup table (LUT) when the image processing unit displays an image with the second luminance, and updates the data. 9. The laser projection/display apparatus according to claim 1, wherein the laser light source drive unit current-drives the laser light source; and the image processing unit obtains the threshold of the current that sets the light amount of a light emitted by the laser light source to a predefined lower limit value, and processes an image signal to be supplied to the laser light source drive unit so that, in the case where the laser light source is driven with a current less than the threshold, the laser light source is driven with a current whose upper limit value is obtained by subtracting a predefined value from the threshold.
A laser projection/display apparatus includes a photosensor for detecting light amounts of laser lights, and an image processing unit that processes an image signal based on the detected light amounts, and supplies the image signal to a laser light source drive unit. The image processing unit obtains data for making the light amounts of the laser lights, which are detected by the photosensor, equal to respective values at a second luminance that is different from a first luminance, which is the luminance of the image currently being displayed, during a flyback period of the image signal. The image processing unit processes an image signal to be supplied to the laser light source drive unit based on the data when the image signal is projected and displayed with the second luminance.1. A laser projection/display apparatus for displaying an image corresponding to an image signal by projecting laser lights of a plurality of colors corresponding to the image signal, comprising: a laser light source that emits the laser lights of the colors; a laser light source drive unit that drives the laser light source so that the laser light source emits laser lights corresponding to the image signal; a scanning unit that scans the laser lights emitted by the laser light source in accordance with a sync signal related to the image signal; a photosensor that detects the light amounts of the laser lights emitted by the laser light source; and an image processing unit that processes the image signal in accordance with the light amounts of the laser lights detected by the photosensor, and supplies the processed image signal to the laser light source drive unit, wherein the image processing unit obtains data used for making the light amounts of the laser lights, which are detected by the photosensor, equal to respective predefined values regarding a plurality of luminance levels during the flyback period of the image signal, and the image processing unit processes the image signal to be supplied to the laser light source drive unit on the basis of the data when the image signal is projected and displayed. 2. The laser projection/display apparatus according to claim 1, wherein the image processing unit obtains data used for making the light amounts of the laser lights, which are detected by the photosensor, equal to respective predefined values regarding a plurality of luminance levels at a second luminance that is different from a first luminance, which is the luminance of the image currently being displayed, during the flyback period of the image signal, and the image processing unit processes an image signal to be supplied to the laser light source drive unit on the basis of the data when the image signal is projected and displayed with the second luminance. 3. The laser projection/display apparatus according to claim 2, further comprising an illuminance sensor that detects the brightness of the periphery of the laser projection/display apparatus, wherein the image processing unit changes the luminance of an image to be displayed from the first luminance to the second luminance in accordance with the brightness detected by the illuminance sensor. 4. The laser projection/display apparatus according to claim 2, wherein the image processing unit changes the luminance of an image to be displayed from the first luminance to the second luminance in accordance with the instructions of a user of the laser projection/display apparatus. 5. 
The laser projection/display apparatus according to claim 2, wherein the image processing unit changes the gains of signals showing the amounts of the laser lights detected by the photosensor, and processes an image signal to be supplied to the laser light source so that the light amounts of the laser lights become equal to respective predefined values. 6. The laser projection/display apparatus according to claim 5, wherein the gains in the image processing unit during the display period of the image signal are respectively different from the gains during the flyback period of the image signal. 7. The laser projection/display apparatus according to claim 2, wherein the first luminance is a luminance for displaying an image in the bright state of the periphery, and the second luminance is a luminance for displaying an image in the dark state of the periphery. 8. The laser projection/display apparatus according to claim 2, wherein the image processing unit stores data, which is obtained for setting an image signal to be supplied to the laser light source, in a lookup table (LUT) when the image processing unit displays an image with the second luminance, and updates the data. 9. The laser projection/display apparatus according to claim 1, wherein the laser light source drive unit current-drives the laser light source; and the image processing unit obtains the threshold of the current that sets the light amount of a light emitted by the laser light source to a predefined lower limit value, and processes an image signal to be supplied to the laser light source drive unit so that, in the case where the laser light source is driven with a current less than the threshold, the laser light source is driven with a current whose upper limit value is obtained by subtracting a predefined value from the threshold.
2,600
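The apparatus claims above calibrate drive values during the flyback period, when no visible image is drawn, so the displayed picture is undisturbed. A hedged Python sketch of that idea follows; set_drive and read_photosensor are hypothetical hardware stubs, and the proportional update loop merely stands in for whatever servo the actual apparatus uses:

def calibrate_during_flyback(colors, targets, set_drive, read_photosensor,
                             lut=None, gain=0.5, iters=8):
    # For each color and luminance level, nudge the drive value until the
    # photosensor reading matches its predefined target, then record the
    # result in the LUT. Warm-starts from the previous LUT entry.
    lut = dict(lut or {})
    for color in colors:
        for level, target in targets.items():
            drive = lut.get((color, level), 0.0)
            for _ in range(iters):
                set_drive(color, drive)
                error = target - read_photosensor(color)
                drive += gain * error  # simple proportional correction
            lut[(color, level)] = drive
    return lut

Warm-starting each search from the previous LUT entry keeps the per-flyback work small, matching the claims' pattern of storing data in a lookup table and updating it.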
10,468
10,468
15,546,072
2,658
In one example of the disclosure, a machine-translation for each of a plurality of strings is determined, the strings for display upon execution of a subject application. A first display of a test step to be performed by a test application during execution of the subject application is caused. A second display of a state for the subject application that includes the plurality of strings is caused concurrent with the first display. A user-translation for each of the strings is obtained, the user-translations provided via a GUI included within the second display. A translation property file associated with the subject application is amended to include the user-translations.
1. A system, comprising: a machine-translation engine, to determine a machine-translation for each of a plurality of strings, the strings for display upon execution of a subject application; a first display engine, to cause a first display of a test step to be performed by a test application during execution of the subject application; a second display engine, to cause, concurrent with the first display, a second display of a state for the subject application that includes the plurality of strings; a user-translation engine, to obtain a user-translation for each of the strings, the user-translations provided via a GUI included within the second display; and a property file engine, to amend a translation property file associated with the subject application to include the user-translations. 2. The system of claim 1, wherein the first display engine is to obtain a user-initiated instruction to begin a quality assurance test upon the subject application, and the first display engine is to cause the first display and the second display engine is to cause the second display responsive to receipt of the instruction. 3. The system of claim 1, wherein the test step is performed by the test application concurrent with the provision of the second display. 4. The system of claim 1, wherein the GUI is a second GUI, wherein the first display includes a first GUI for receiving a command to pause or stop execution of the test application, and wherein the user-translation for each of the plurality of strings is obtained during a period that execution of the subject application is paused or stopped. 5. The system of claim 1, wherein the machine-translation engine is to determine the machine-translation for each of the plurality of strings utilizing a machine-translation script, and further comprising a machine-translation update engine to update the machine-translation script to include the user-translations. 6. The system of claim 1, wherein the machine-translation engine is to determine an application context for the subject application, and is to determine the machine-translation for the plurality of strings according to the application context. 7. The system of claim 6, wherein the application context is determined according to a subject, a functionality, or an attribute of the subject application or of the state for the application. 8. The system of claim 1, further comprising a translation marking engine to mark the subject application as user-translated responsive to receipt of data indicating that execution of the test application has completed. 9. The system of claim 1, wherein the first display and the second display are to occur at a same display device. 10. The system of claim 1, wherein the translation property file is a language-specific property file. 11. The system of claim 1, wherein the translation property file includes the machine-translations. 12. 
A memory resource storing instructions that when executed cause a processing resource to obtain user-translations utilizing test step and application state displays, the instructions comprising: a machine-translation module, that when executed causes the processing resource to utilize a machine-translation script to determine a machine-translation for each of a plurality of strings that are to be displayed upon execution of a subject application; a first display module, that when executed causes the processing resource to cause a first display of a test step to be performed by a test application during execution of the subject application; a second display module, that when executed causes the processing resource to cause a second display, to occur concurrent with the first display, of an application state associated with the test step, the second display including the plurality of strings; a user-translation module, that when executed causes the processing resource to acquire a user-translation for each of the strings, the user-translations having been provided via a GUI within the second display; a property file module, that when executed causes the processing resource to amend a translation property file associated with the subject application to include the user-translations; and a machine-translation update module, that when executed causes the processing resource to update the machine-translation script to include the acquired user-translations. 13. The memory resource of claim 12, wherein the test step is to be performed by the test application concurrent with the provision of the second display. 14. The memory resource of claim 12, wherein the machine-translation module when executed is to receive an application context for the subject application, and is to determine the machine-translation for the plurality of strings according to the application context. 15. A method for translating strings in a subject application, comprising: utilizing a machine-translation script to determine a machine-translation for each of a plurality of strings to be displayed upon execution of a subject application; causing a first display including a test step to be performed by a test application during execution of the subject application, and including a first GUI for receiving a command to pause or stop execution of the test application; causing, concurrent with the first display, a second display that includes an application state, the application state including the plurality of strings; acquiring, during a period or periods that execution of the subject application is paused or stopped, a user-translation for each of the strings, the user-translations having been user-provided via a second GUI within the second display; amending a translation property file included with the subject application to include the acquired user-translations; and updating the machine-translation script to include the acquired user-translations.
In one example of the disclosure, a machine-translation for each of a plurality of strings is determined, the strings for display upon execution of a subject application. A first display of a test step to be performed by a test application during execution of the subject application is caused. A second display of a state for the subject application that includes the plurality of strings is caused concurrent with the first display. A user-translation for each of the strings is obtained, the user-translations provided via a GUI included within the second display. A translation property file associated with the subject application is amended to include the user-translations.1. A system, comprising: a machine-translation engine, to determine a machine-translation for each of a plurality of strings, the strings for display upon execution of a subject application; a first display engine, to cause a first display of a test step to be performed by a test application during execution of the subject application; a second display engine, to cause, concurrent with the first display, a second display of a state for the subject application that includes the plurality of strings; a user-translation engine, to obtain a user-translation for each of the strings, the user-translations provided via a GUI included within the second display; and a property file engine, to amend a translation property file associated with the subject application to include the user-translations. 2. The system of claim 1, wherein the first display engine is to obtain a user-initiated instruction to begin a quality assurance test upon the subject application, and the first display engine is to cause the first display and the second display engine is to cause the second display responsive to receipt of the instruction. 3. The system of claim 1, wherein the test step is performed by the test application concurrent with the provision of the second display. 4. The system of claim 1, wherein the GUI is a second GUI, wherein the first display includes a first GUI for receiving a command to pause or stop execution of the test application, and wherein the user-translation for each of the plurality of strings is obtained during a period that execution of the subject application is paused or stopped. 5. The system of claim 1, wherein the machine-translation engine is to determine the machine-translation for each of the plurality of strings utilizing a machine-translation script, and further comprising a machine-translation update engine to update the machine-translation script to include the user-translations. 6. The system of claim 1, wherein the machine-translation engine is to determine an application context for the subject application, and is to determine the machine-translation for the plurality of strings according to the application context. 7. The system of claim 6, wherein the application context is determined according to a subject, a functionality, or an attribute of the subject application or of the state for the application. 8. The system of claim 1, further comprising a translation marking engine to mark the subject application as user-translated responsive to receipt of data indicating that execution of the test application has completed. 9. The system of claim 1, wherein the first display and the second display are to occur at a same display device. 10. The system of claim 1, wherein the translation property file is a language-specific property file. 11. 
The system of claim 1, wherein the translation property file includes the machine-translations. 12. A memory resource storing instructions that when executed cause a processing resource to obtain user-translations utilizing test step and application state displays, the instructions comprising: a machine-translation module, that when executed causes the processing resource to utilize a machine-translation script to determine a machine-translation for each of a plurality of strings that are to be displayed upon execution of a subject application; a first display module, that when executed causes the processing resource to cause a first display of a test step to be performed by a test application during execution of the subject application; a second display module, that when executed causes the processing resource to cause a second display, to occur concurrent with the first display, of an application state associated with the test step, the second display including the plurality of strings; a user-translation module, that when executed causes the processing resource to acquire a user-translation for each of the strings, the user-translations having been provided via a GUI within the second display; a property file module, that when executed causes the processing resource to amend a translation property file associated with the subject application to include the user-translations; and a machine-translation update module, that when executed causes the processing resource to update the machine-translation script to include the acquired user-translations. 13. The memory resource of claim 12, wherein the test step is to be performed by the test application concurrent with the provision of the second display. 14. The memory resource of claim 12, wherein the machine-translation module when executed is to receive an application context for the subject application, and is to determine the machine-translation for the plurality of strings according to the application context. 15. A method for translating strings in a subject application, comprising: utilizing a machine-translation script to determine a machine-translation for each of a plurality of strings to be displayed upon execution of a subject application; causing a first display including a test step to be performed by a test application during execution of the subject application, and including a first GUI for receiving a command to pause or stop execution of the test application; causing, concurrent with the first display, a second display that includes an application state, the application state including the plurality of strings; acquiring, during a period or periods that execution of the subject application is paused or stopped, a user-translation for each of the strings, the user-translations having been user-provided via a second GUI within the second display; amending a translation property file included with the subject application to include the acquired user-translations; and updating the machine-translation script to include the acquired user-translations.
2,600
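The property file engine in the claims above only has to merge user-supplied translations into the language-specific translation property file. A minimal Python sketch, assuming a Java-style key=value .properties layout (the claims do not fix a format) and ignoring .properties escape rules:

def amend_property_file(path, user_translations):
    # Parse existing key=value entries, overlay the user translations, and
    # rewrite the file. Comment lines are skipped in this sketch.
    entries = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                entries[key.strip()] = value.strip()
    entries.update(user_translations)  # user-translations take precedence
    with open(path, "w", encoding="utf-8") as f:
        for key, value in sorted(entries.items()):
            f.write(f"{key}={value}\n")

Applying entries.update(user_translations) last is what gives the user translations precedence over the earlier machine translations for the same keys.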
10,469
10,469
14,324,740
2,653
Provided are a method and apparatus for localizing a multichannel sound signal. The method includes: obtaining a multichannel sound signal to which sense of elevation is applied by applying a first filter to an input sound signal; determining at least one frequency range of a dynamic cue according to change of a head-related transfer function (HRTF) indicating information regarding paths from a spatial location of an actual speaker to ears of an audience; and applying a second filter to at least one sound signal, corresponding to the determined at least one frequency range, of at least one channel in the multichannel sound signal to change the at least one sound signal so as to remove or to reduce the dynamic cue when the multichannel sound signal is output.
1. A method of localizing a multichannel sound signal, the method comprising: obtaining a multichannel sound signal to which sense of elevation is applied by applying a first filter, which corresponds to a predetermined elevation, to an input sound signal; determining at least one frequency range of a dynamic cue according to change of a head-related transfer function (HRTF) indicating information regarding paths from a spatial location of an actual speaker to ears of an audience; and applying a second filter to at least one sound signal, corresponding to the determined at least one frequency range, of at least one channel in the multichannel sound signal to change the at least one sound signal so as to remove or to reduce the dynamic cue when the multichannel sound signal is output. 2. The method of claim 1, wherein the obtaining the multichannel sound signal comprises: applying the first filter to an input mono sound signal; and obtaining the multichannel sound signal to which the sense of elevation is applied by replicating the input mono sound signal to which the first filter is applied. 3. The method of claim 1, wherein the first filter is determined according to: a second HRTF/a first HRTF, wherein the second HRTF includes an HRTF indicating information regarding paths from a spatial location of a virtual speaker located at the predetermined elevation to the ears of the audience, and wherein the first HRTF includes the HRTF indicating the information regarding the paths from the spatial location of the actual speaker to the ears of the audience. 4. The method of claim 1, wherein the determining the at least one frequency range of the dynamic cue comprises determining, as the at least one frequency range of the dynamic cue, at least one frequency range in the frequency domain of the HRTF that changes in correspondence to changes of locations of the ears of the audience or a change of the audience. 5. The method of claim 1, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a phase inverse filter for inversing a phase of at least one sound signal included in the at least one frequency range of the dynamic cue; and the applying the second filter to the at least one sound signal of the at least one channel in the multichannel sound signal comprises applying the phase inverse filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 6. The method of claim 1, wherein the second filter comprises an amplitude adjusting filter for adjusting amplitudes of at least one sound signal included in the at least one frequency range of the dynamic cue. 7. The method of claim 1, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a delay filter for delaying at least one sound signal included in the at least one frequency range of the dynamic cue; and the applying the second filter to the at least one sound signal of the at least one channel in the multichannel sound signal comprises applying the delay filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 8. The method of claim 1, further comprising adjusting amplitudes of sound signals of respective channels in the multichannel sound signal, such that the virtual speaker is located at a predetermined position on a horizontal surface including the virtual speaker at the predetermined elevation. 9. 
A non-transitory computer-readable recording medium having recorded thereon a computer program for implementing the method of claim 1. 10. A multichannel sound signal localizing apparatus comprising: a multichannel sound signal obtainer configured to obtain a multichannel sound signal to which sense of elevation is applied by applying a first filter, which corresponds to a predetermined elevation, to an input sound signal; a frequency range determiner configured to determine at least one frequency range of a dynamic cue according to change of a head-related transfer function (HRTF) indicating information regarding paths from a spatial location of an actual speaker to ears of an audience; and a second filterer configured to apply a second filter to at least one sound signal, corresponding to the determined at least one frequency range, of at least one channel in the multichannel sound signal to change the at least one sound signal so as to remove or to reduce the dynamic cue when the multichannel sound signal is output. 11. The multichannel sound signal localizing apparatus of claim 10, wherein the multichannel sound signal obtainer comprises: a first filterer configured to apply the first filter to an input mono sound signal; and a signal replicator configured to obtain the multichannel sound signal to which the sense of elevation is applied by replicating the input mono sound signal to which the first filter is applied. 12. The multichannel sound signal localizing apparatus of claim 10, wherein the first filter is determined according to: a second HRTF/a first HRTF, wherein the second HRTF includes an HRTF indicating information regarding paths from a spatial location of a virtual speaker located at the predetermined elevation to the ears of the audience, and wherein the first HRTF includes the HRTF indicating the information regarding the paths from the spatial location of the actual speaker to the ears of the audience. 13. The multichannel sound signal localizing apparatus of claim 10, wherein the frequency range determiner is configured to determine, as the at least one frequency range of the dynamic cue, at least one frequency range in the frequency domain of the HRTF that changes in correspondence to changes of locations of the ears of the audience or a change of the audience. 14. The multichannel sound signal localizing apparatus of claim 10, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a phase inverse filter for inversing a phase of at least one sound signal included in the at least one frequency range of the dynamic cue; and the second filterer applies the phase inverse filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 15. The multichannel sound signal localizing apparatus of claim 10, wherein the second filter comprises an amplitude adjusting filter for adjusting amplitudes of at least one sound signal included in the at least one frequency range of the dynamic cue. 16. The multichannel sound signal localizing apparatus of claim 10, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a delay filter for delaying at least one sound signal included in the at least one frequency range of the dynamic cue; and the second filterer applies the delay filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 17. 
The multichannel sound signal localizing apparatus of claim 10, further comprising an amplitude adjuster configured to adjust amplitudes of sound signals of respective channels in the multichannel sound signal, such that the virtual speaker is located at a predetermined position on a horizontal surface including the virtual speaker at the predetermined elevation. 18. A method of localizing a multichannel sound signal, the method comprising: determining at least one frequency range of a dynamic cue according to change of a head-related transfer function (HRTF) indicating information regarding paths from a spatial location of an actual speaker to ears of an audience; and applying a second filter to at least one sound signal, corresponding to the determined at least one frequency range, of at least one channel in a multichannel sound signal to change the at least one sound signal so as to remove or to reduce the dynamic cue when the multichannel sound signal is output. 19. The method of claim 18, wherein the determining the at least one frequency range of the dynamic cue comprises determining, as the at least one frequency range of the dynamic cue, at least one frequency range in the frequency domain of the HRTF that changes in correspondence to changes of locations of the ears of the audience or a change of the audience. 20. The method of claim 18, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a phase inverse filter for inversing a phase of at least one sound signal included in the at least one frequency range of the dynamic cue; and the applying the second filter to the at least one sound signal of the at least one channel in the multichannel sound signal comprises applying the phase inverse filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 21. The method of claim 18, wherein the second filter comprises an amplitude adjusting filter for adjusting amplitudes of at least one sound signal included in the at least one frequency range of the dynamic cue. 22. The method of claim 18, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a delay filter for delaying at least one sound signal included in the at least one frequency range of the dynamic cue; and the applying the second filter to the at least one sound signal of the at least one channel in the multichannel sound signal comprises applying the delay filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 23. A non-transitory computer-readable recording medium having recorded thereon a computer program for implementing the method of claim 18.
Provided are a method and apparatus for localizing a multichannel sound signal. The method includes: obtaining a multichannel sound signal to which sense of elevation is applied by applying a first filter to an input sound signal; determining at least one frequency range of a dynamic cue according to change of a head-related transfer function (HRTF) indicating information regarding paths from a spatial location of an actual speaker to ears of an audience; and applying a second filter to at least one sound signal, corresponding to the determined at least one frequency range, of at least one channel in the multichannel sound signal to change the at least one sound signal so as to remove or to reduce the dynamic cue when the multichannel sound signal is output.1. A method of localizing a multichannel sound signal, the method comprising: obtaining a multichannel sound signal to which sense of elevation is applied by applying a first filter, which corresponds to a predetermined elevation, to an input sound signal; determining at least one frequency range of a dynamic cue according to change of a head-related transfer function (HRTF) indicating information regarding paths from a spatial location of an actual speaker to ears of an audience; and applying a second filter to at least one sound signal, corresponding to the determined at least one frequency range, of at least one channel in the multichannel sound signal to change the at least one sound signal so as to remove or to reduce the dynamic cue when the multichannel sound signal is output. 2. The method of claim 1, wherein the obtaining the multichannel sound signal comprises: applying the first filter to an input mono sound signal; and obtaining the multichannel sound signal to which the sense of elevation is applied by replicating the input mono sound signal to which the first filter is applied. 3. The method of claim 1, wherein the first filter is determined according to: a second HRTF/a first HRTF, wherein the second HRTF includes an HRTF indicating information regarding paths from a spatial location of a virtual speaker located at the predetermined elevation to the ears of the audience, and wherein the first HRTF includes the HRTF indicating the information regarding the paths from the spatial location of the actual speaker to the ears of the audience. 4. The method of claim 1, wherein the determining the at least one frequency range of the dynamic cue comprises determining, as the at least one frequency range of the dynamic cue, at least one frequency range in the frequency domain of the HRTF that changes in correspondence to changes of locations of the ears of the audience or a change of the audience. 5. The method of claim 1, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a phase inverse filter for inversing a phase of at least one sound signal included in the at least one frequency range of the dynamic cue; and the applying the second filter to the at least one sound signal of the at least one channel in the multichannel sound signal comprises applying the phase inverse filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 6. The method of claim 1, wherein the second filter comprises an amplitude adjusting filter for adjusting amplitudes of at least one sound signal included in the at least one frequency range of the dynamic cue. 7. 
The method of claim 1, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a delay filter for delaying at least one sound signal included in the at least one frequency range of the dynamic cue; and the applying the second filter to the at least one sound signal of the at least one channel in the multichannel sound signal comprises applying the delay filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 8. The method of claim 1, further comprising adjusting amplitudes of sound signals of respective channels in the multichannel sound signal, such that the virtual speaker is located at a predetermined position on a horizontal surface including the virtual speaker at the predetermined elevation. 9. A non-transitory computer-readable recording medium having recorded thereon a computer program for implementing the method of claim 1. 10. A multichannel sound signal localizing apparatus comprising: a multichannel sound signal obtainer configured to obtain a multichannel sound signal to which sense of elevation is applied by applying a first filter, which corresponds to a predetermined elevation, to an input sound signal; a frequency range determiner configured to determine at least one frequency range of a dynamic cue according to change of a head-related transfer function (HRTF) indicating information regarding paths from a spatial location of an actual speaker to ears of an audience; and a second filterer configured to apply a second filter to at least one sound signal, corresponding to the determined at least one frequency range, of at least one channel in the multichannel sound signal to change the at least one sound signal so as to remove or to reduce the dynamic cue when the multichannel sound signal is output. 11. The multichannel sound signal localizing apparatus of claim 10, wherein the multichannel sound signal obtainer comprises: a first filterer configured to apply the first filter to an input mono sound signal; and a signal replicator configured to obtain the multichannel sound signal to which the sense of elevation is applied by replicating the input mono sound signal to which the first filter is applied. 12. The multichannel sound signal localizing apparatus of claim 10, wherein the first filter is determined according to: a second HRTF/a first HRTF, wherein the second HRTF includes an HRTF indicating information regarding paths from a spatial location of a virtual speaker located at the predetermined elevation to the ears of the audience, and wherein the first HRTF includes the HRTF indicating the information regarding the paths from the spatial location of the actual speaker to the ears of the audience. 13. The multichannel sound signal localizing apparatus of claim 10, wherein the frequency range determiner is configured to determine, as the at least one frequency range of the dynamic cue, at least one frequency range in the frequency domain of the HRTF that changes in correspondence to changes of locations of the ears of the audience or a change of the audience. 14. 
The multichannel sound signal localizing apparatus of claim 10, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a phase inverse filter for inversing a phase of at least one sound signal included in the at least one frequency range of the dynamic cue; and the second filterer applies the phase inverse filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 15. The multichannel sound signal localizing apparatus of claim 10, wherein the second filter comprises an amplitude adjusting filter for adjusting amplitudes of at least one sound signal included in the at least one frequency range of the dynamic cue. 16. The multichannel sound signal localizing apparatus of claim 10, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a delay filter for delaying at least one sound signal included in the at least one frequency range of the dynamic cue; and the second filterer applies the delay filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 17. The multichannel sound signal localizing apparatus of claim 10, further comprising an amplitude adjuster configured to adjust amplitudes of sound signals of respective channels in the multichannel sound signal, such that the virtual speaker is located on a predetermined position on a horizontal surface including the virtual speaker at the predetermined elevation. 18. A method of localizing a multichannel sound signal, the method comprising: determining at least one frequency range of a dynamic cue according to change of a head-related transfer function (HRTF) indicating information regarding paths from a spatial location of an actual speaker to ears of an audience; and applying a second filter to at least one sound signal, corresponding to the determined at least one frequency range, of at least one channel in a multichannel sound signal to change the at least one sound signal so as to remove or to reduce the dynamic cue when the multichannel sound signal is output. 19. The method of claim 18, wherein the determining the at least one frequency range of the dynamic cue comprises determining, as the at least one frequency range of the dynamic cue, at least one frequency range in the frequency domain of the HRTF that changes in correspondence to changes of locations of the ears of the audience or a change of the audience. 20. The method of claim 18, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a phase inverse filter for inversing a phase of at least one sound signal included in the at least one frequency range of the dynamic cue; and the applying the second filter to the at least one sound signal of the at least one channel in the multichannel sound signal comprises applying the phase inverse filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 21. The method of claim 18, wherein the second filter comprises an amplitude adjusting filter for adjusting amplitudes of at least one sound signal included in the at least one frequency range of the dynamic cue. 22. 
The method of claim 18, wherein: the multichannel sound signal comprises a stereo sound signal; the second filter comprises a delay filter for delaying at least one sound signal included in the at least one frequency range of the dynamic cue; and the applying the second filter to the at least one sound signal of the at least one channel in the multichannel sound signal comprises applying the delay filter to at least one sound signal, in the at least one frequency range, of one channel from among channels of the stereo sound signal. 23. A non-transitory computer-readable recording medium having recorded thereon a computer program for implementing the method of claim 18.
2,600
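A minimal Python sketch of the processing chain in the record above, under stated assumptions: the first filter is formed as the spectral ratio of an elevated virtual-speaker HRTF to the actual-speaker HRTF (claim 3), the filtered mono input is replicated to stereo (claim 2), and a phase-inverse second filter is applied to one channel inside a dynamic-cue band (claim 5). The HRTF arrays, sample rate, and fixed band edges are invented stand-ins; the application itself derives the band from how the HRTF changes with listener position.

import numpy as np

def apply_elevation(mono, H_actual, H_virtual):
    # First filter (claim 3): spectral ratio H_virtual / H_actual, applied in
    # the frequency domain; H_* are assumed precomputed rfft-length spectra.
    X = np.fft.rfft(mono)
    G = H_virtual / (H_actual + 1e-12)  # regularized to avoid division by zero
    y = np.fft.irfft(X * G, n=len(mono))
    return np.stack([y, y])             # replicate mono -> stereo (claim 2)

def suppress_dynamic_cue(stereo, sample_rate, band_hz):
    # Second filter (claim 5): invert the phase of one channel inside the
    # dynamic-cue band so the cue is reduced when the signal is output.
    n = stereo.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    left = np.fft.rfft(stereo[0])
    band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    left[band] *= -1.0                  # 180-degree phase inversion
    out = stereo.copy()
    out[0] = np.fft.irfft(left, n=n)
    return out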
10,470
10,470
15,598,852
2,666
Various methods and apparatus to use facial recognition in a computing device are disclosed. In one aspect, a method of controlling a component of a computing device is provided. The method includes taking an IR image of a user and a background with an IR sensor of a computing device. The computing device is in a location. The IR image is segmented into user image data and background image data. An ambient temperature of the location is determined using the background image data. An aspect of the component is controlled based on the ambient temperature.
1. A method of computing, comprising: taking an IR image of a user and a background with an IR sensor of a computing device, the computing device being in a location; segmenting the IR image into user image data and background image data; and determining an ambient temperature of the location using the background image data. 2. The method of claim 1, wherein the segmenting comprises performing facial recognition on the IR image. 3. The method of claim 2, comprising using the facial recognition to authenticate the user to the computing device. 4. The method of claim 2, comprising using the facial recognition to authenticate the user to an application running on the computing device. 5. The method of claim 1, comprising controlling an aspect of the computing device based on the determined ambient temperature. 6. The method of claim 5, wherein the computing device comprises a processor and a cooling fan, and the controlled aspect comprises processor power or fan movement. 7. A method of controlling a component of a computing device, comprising: taking an IR image of a user and a background with an IR sensor of a computing device, the computing device being in a location; segmenting the IR image into user image data and background image data; determining an ambient temperature of the location using the background image data; and controlling an aspect of the component based on the ambient temperature. 8. The method of claim 7, wherein the segmenting comprises performing facial recognition on the IR image. 9. The method of claim 8, comprising using the facial recognition to authenticate the user to the computing device. 10. The method of claim 8, comprising using the facial recognition to authenticate the user to an application running on the computing device. 11. The method of claim 7, wherein the component comprises a processor, and the controlled aspect comprises clock speed or core voltage. 12. The method of claim 7, wherein the component comprises a cooling fan, and the controlled aspect comprises fan movements. 13. The method of claim 7, comprising calculating a skin temperature margin based on the ambient temperature. 14. A computing device, comprising: an IR sensor configured to take an IR image of a user and a background, the computing device being in a location; and a processor programmed to segment the IR image into user image data and background image data and to determine an ambient temperature of the location using the background image data. 15. The computing device of claim 14, wherein the segmenting comprises performing facial recognition on the IR image. 16. The computing device of claim 15, wherein the facial recognition is configured to authenticate the user to the computing device. 17. The computing device of claim 15, wherein the facial recognition is configured to authenticate the user to an application running on the computing device. 18. The computing device of claim 14, wherein the processor is programmed to control an aspect of the computing device based on the determined ambient temperature. 19. The computing device of claim 18, wherein the computing device comprises a cooling fan, and the controlled aspect comprises fan movement. 20. The computing device of claim 18, wherein the controlled aspect comprises processor clock speed or core voltage.
Various methods and apparatus to use facial recognition in a computing device are disclosed. In one aspect, a method of controlling a component of a computing device is provided. The method includes taking an IR image of a user and a background with an IR sensor of a computing device. The computing device is in a location. The IR image is segmented into user image data and background image data. An ambient temperature of the location is determined using the background image data. An aspect of the component is controlled based on the ambient temperature.1. A method of computing, comprising: taking an IR image of a user and a background with an IR sensor of a computing device, the computing device being in a location; segmenting the IR image into user image data and background image data; and determining an ambient temperature of the location using the background image data. 2. The method of claim 1, wherein the segmenting comprises performing facial recognition on the IR image. 3. The method of claim 2, comprising using the facial recognition to authenticate the user to the computing device. 4. The method of claim 2, comprising using the facial recognition to authenticate the user to an application running on the computing device. 5. The method of claim 1, comprising controlling an aspect of the computing device based on the determined ambient temperature. 6. The method of claim 5, wherein the computing device comprises a processor and a cooling fan, and the controlled aspect comprises processor power or fan movement. 7. A method of controlling a component of a computing device, comprising: taking an IR image of a user and a background with an IR sensor of a computing device, the computing device being in a location; segmenting the IR image into user image data and background image data; determining an ambient temperature of the location using the background image data; and controlling an aspect of the component based on the ambient temperature. 8. The method of claim 7, wherein the segmenting comprises performing facial recognition on the IR image. 9. The method of claim 8, comprising using the facial recognition to authenticate the user to the computing device. 10. The method of claim 8, comprising using the facial recognition to authenticate the user to an application running on the computing device. 11. The method of claim 7, wherein the component comprises a processor, and the controlled aspect comprises clock speed or core voltage. 12. The method of claim 7, wherein the component comprises a cooling fan, and the controlled aspect comprises fan movements. 13. The method of claim 7, comprising calculating a skin temperature margin based on the ambient temperature. 14. A computing device, comprising: an IR sensor configured to take an IR image of a user and a background, the computing device being in a location; and a processor programmed to segment the IR image into user image data and background image data and to determine an ambient temperature of the location using the background image data. 15. The computing device of claim 14, wherein the segmenting comprises performing facial recognition on the IR image. 16. The computing device of claim 15, wherein the facial recognition is configured to authenticate the user to the computing device. 17. The computing device of claim 15, wherein the facial recognition is configured to authenticate the user to an application running on the computing device. 18. 
The computing device of claim 14, wherein the processor is programmed to control an aspect of the computing device based on the determined ambient temperature. 19. The computing device of claim 18, wherein the computing device comprises a cooling fan, and the controlled aspect comprises fan movement. 20. The computing device of claim 18, wherein the controlled aspect comprises processor clock speed or core voltage.
2,600
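The ambient-temperature step in the record above reduces, in the simplest reading, to masking out the user region of a radiometric IR frame and summarizing the remaining pixels. A rough Python sketch under that assumption, with the face bounding box standing in for the facial-recognition segmentation and the fan policy invented as one example of a controlled aspect:

import numpy as np

def ambient_from_ir(ir_frame_c, face_box):
    # Segment the IR image into user and background pixels; ir_frame_c is
    # assumed to hold per-pixel temperatures in degrees Celsius, and face_box
    # is (top, bottom, left, right) from an upstream face detector.
    top, bottom, left, right = face_box
    background = np.ones(ir_frame_c.shape, dtype=bool)
    background[top:bottom, left:right] = False   # exclude the user region
    return float(np.median(ir_frame_c[background]))

def fan_duty(ambient_c, low_c=20.0, high_c=35.0):
    # Illustrative control policy: map ambient temperature to a 0..1 fan duty.
    return float(np.clip((ambient_c - low_c) / (high_c - low_c), 0.0, 1.0))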
10,471
10,471
15,514,121
2,674
In an example implementation, a method of providing image registration information in a digital printing press includes imaging a printed page on an impression drum of a digital printing press, determining image registration information from printed content on the page, and displaying a graphical visualization of the image registration information on a user interface screen of the digital printing press.
1. A method of providing image registration information in a digital printing press comprising: imaging a printed page on an impression drum of a digital printing press; determining image registration information from printed content on the page; and displaying a graphical visualization of the image registration information on a user interface screen of the digital printing press. 2. A method as in claim 1, wherein imaging a printed page comprises capturing an image of target registration points on the page with a camera. 3. A method as in claim 2, wherein determining image registration information comprises: measuring Y distances from a leading edge of the page to target registration points printed on the page, and X distances from side edges of the page to the target registration points; and calculating X and Y offset data from the measured distances based on expected locations of the target registration points. 4. A method as in claim 3, wherein displaying the graphical visualization on a user interface screen comprises transforming the X and Y offset data into graphical form on the user interface screen. 5. A method as in claim 1, wherein displaying the graphical visualization on a user interface screen comprises displaying the graphical visualization on the user interface screen in real-time as the printed page travels along a print path within the digital press. 6. A method as in claim 1, wherein imaging a printed page comprises imaging multiple printed pages, and wherein displaying the graphical visualization comprises displaying a graphical visualization of image registration information from each of the multiple printed pages in real-time as the pages travel along a print path within the digital press. 7. A method as in claim 1, further comprising performing functions related to the image registration information, the functions selected from the group consisting of displaying statistical information for a pre-defined number of printed pages, providing an alert when image registration data falls outside a pre-set limit, automatically removing a printed page from a group of printed pages whose image registration data falls outside a pre-set limit, identifying a printed page for manual removal whose image registration data falls outside a pre-set limit, providing a trend analysis to indicate an increasing or decreasing image registration trend, and resetting a starting point for determining the image registration information. 8. A digital printing press comprising: a user interface screen; an imaging device to image a printed page as the printed page travels on an impression drum of the printing press; a measurement module to determine, from the imaged page, registration information from target points on the printed page; and a visualization module to display the registration information in graphical form on the user interface screen. 9. A digital printing press as in claim 8, further comprising print data and target point data stored in a memory, the print data and target point data to be printed on the printed page. 10. A digital printing press as in claim 8, wherein the printed page comprises: a print job image printed on the printed page; wherein the target points are printed on the printed page outside an area of the print job image. 11. A digital printing press as in claim 8, wherein the target points are printed adjacent to a leading edge and a trailing edge of the printed page. 12. 
A non-transitory machine-readable storage medium storing instructions that when executed by a processor of a printing device, cause the printing device to: print a print job image on a page; print target registration points on the page; measure distances between edges of the page and the target registration points to determine image registration information; and display the image registration information in a visualized graphical form on a user interface screen of the printing device in real-time. 13. A medium as in claim 12, wherein measuring distances between edges of the page and the target registration points comprises imaging the page with a camera as the page is carried on an impression drum of the printing device. 14. A medium as in claim 12, wherein the print job image and target registration points are printed on multiple pages, and determining the image registration information comprises: measuring Y distances from a leading edge of each page to the target registration points; measuring X distances from side edges of each page to the target registration points; for each target registration point, calculating a Y range for the Y distances and an X range for the X distances; calculating a front-to-front Y registration offset as a maximum Y range; and calculating a front-to-front X registration offset as a maximum X range. 15. A medium as in claim 12, wherein determining the image registration information comprises updating the image registration information in real-time as each page is printed.
In an example implementation, a method of providing image registration information in a digital printing press includes imaging a printed page on an impression drum of a digital printing press, determining image registration information from printed content on the page, and displaying a graphical visualization of the image registration information on a user interface screen of the digital printing press.1. A method of providing image registration information in a digital printing press comprising: imaging a printed page on an impression drum of a digital printing press; determining image registration information from printed content on the page; and displaying a graphical visualization of the image registration information on a user interface screen of the digital printing press. 2. A method as in claim 1, wherein imaging a printed page comprises capturing an image of target registration points on the page with a camera. 3. A method as in claim 2, wherein determining image registration information comprises: measuring Y distances from a leading edge of the page to target registration points printed on the page, and X distances from side edges of the page to the target registration points; and calculating X and Y offset data from the measured distances based on expected locations of the target registration points. 4. A method as in claim 3, wherein displaying the graphical visualization on a user interface screen comprises transforming the X and Y offset data into graphical form on the user interface screen. 5. A method as in claim 1, wherein displaying the graphical visualization on a user interface screen comprises displaying the graphical visualization on the user interface screen in real-time as the printed page travels along a print path within the digital press. 6. A method as in claim 1, wherein imaging a printed page comprises imaging multiple printed pages, and wherein displaying the graphical visualization comprises displaying a graphical visualization of image registration information from each of the multiple printed pages in real-time as the pages travel along a print path within the digital press. 7. A method as in claim 1, further comprising performing functions related to the image registration information, the functions selected from the group consisting of displaying statistical information for a pre-defined number of printed pages, providing an alert when image registration data falls outside a pre-set limit, automatically removing a printed page from a group of printed pages whose image registration data falls outside a pre-set limit, identifying a printed page for manual removal whose image registration data falls outside a pre-set limit, providing a trend analysis to indicate an increasing or decreasing image registration trend, and resetting a starting point for determining the image registration information. 8. A digital printing press comprising: a user interface screen; an imaging device to image a printed page as the printed page travels on an impression drum of the printing press; a measurement module to determine, from the imaged page, registration information from target points on the printed page; and a visualization module to display the registration information in graphical form on the user interface screen. 9. A digital printing press as in claim 8, further comprising print data and target point data stored in a memory, the print data and target point data to be printed on the printed page. 10. 
A digital printing press as in claim 8, wherein the printed page comprises: a print job image printed on the printed page; wherein the target points are printed on the printed page outside an area of the print job image. 11. A digital printing press as in claim 8, wherein the target points are printed adjacent to a leading edge and a trailing edge of the printed page. 12. A non-transitory machine-readable storage medium storing instructions that when executed by a processor of a printing device, cause the printing device to: print a print job image on a page; print target registration points on the page; measure distances between edges of the page and the target registration points to determine image registration information; and display the image registration information in a visualized graphical form on a user interface screen of the printing device in real-time. 13. A medium as in claim 12, wherein measuring distances between edges of the page and the target registration points comprises imaging the page with a camera as the page is carried on an impression drum of the printing device. 14. A medium as in claim 12, wherein the print job image and target registration points are printed on multiple pages, and determining the image registration information comprises: measuring Y distances from a leading edge of each page to the target registration points; measuring X distances from side edges of each page to the target registration points; for each target registration point, calculating a Y range for the Y distances and an X range for the X distances; calculating a front-to-front Y registration offset as a maximum Y range; and calculating a front-to-front X registration offset as a maximum X range. 15. A medium as in claim 12, wherein determining the image registration information comprises updating the image registration information in real-time as each page is printed.
2,600
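The offset arithmetic in the record above is spelled out concretely enough to sketch: per-point X/Y offsets are measured minus expected edge distances (claim 3), and the multi-page front-to-front offsets are the largest per-point ranges (claim 14). A short Python sketch, with the dictionaries of point coordinates as assumed inputs:

def point_offsets(measured, expected):
    # measured/expected: {point_id: (x_mm, y_mm)} distances from the page
    # edges to each target registration point (claim 3).
    return {pid: (measured[pid][0] - expected[pid][0],
                  measured[pid][1] - expected[pid][1])
            for pid in expected}

def front_to_front_offsets(pages):
    # pages: one {point_id: (x_mm, y_mm)} dict per printed sheet (claim 14).
    # Each point's X/Y range is its max minus min across the sheets; the
    # front-to-front registration offset is the largest such range.
    ids = pages[0].keys()
    x_range = max(max(p[i][0] for p in pages) - min(p[i][0] for p in pages)
                  for i in ids)
    y_range = max(max(p[i][1] for p in pages) - min(p[i][1] for p in pages)
                  for i in ids)
    return x_range, y_range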
10,472
10,472
15,785,803
2,658
A speech recognition engine is provided voice data indicative of at least a brand of a target appliance. The speech recognition engine uses the voice data indicative of at least a brand of the target appliance to identify within a library of codesets at least one codeset that is cross-referenced to the brand of the target appliance. The at least one codeset so identified is then caused to be provisioned to the controlling device for use in commanding functional operations of the target appliance.
1. A method for configuring a controlling device to command functional operations of a target appliance, the method comprising: receiving at a speech recognition engine voice data indicative of at least a type for and a brand of the target appliance whereupon the speech recognition engine uses the voice data indicative of at least the type for and the brand of the target appliance to identify within a library of codesets at least one codeset that is cross-referenced to the brand of the target appliance; and causing the at least one codeset to be provisioned to the controlling device for use in commanding functional operations of the target appliance. 2. The method as recited in claim 1, wherein the speech recognition engine identifies a plurality of codesets that are cross-referenced to the type for and the brand of the target appliance and the method further comprises receiving at the speech recognition engine voice data indicative of at least a model of the target appliance whereupon the speech recognition engine uses the voice data indicative of at least the model of the target appliance to identify within the plurality of codesets at least one codeset that is cross-referenced to the model of the target appliance and wherein the at least one codeset that is provisioned to the controlling device is the at least one codeset that is also cross-referenced to the model of the target appliance. 3. The method as recited in claim 2, wherein the controlling device comprises a microphone for receiving voice input used in creating the voice data. 4. The method as recited in claim 3, wherein the controlling device comprises a memory having stored therein the library of codesets. 5. The method as recited in claim 4, wherein the controlling device comprises a processing device and instructions for providing the speech recognition engine. 6. The method as recited in claim 3, wherein the speech recognition engine is executed on a computing device remote from the controlling device. 7. The method as recited in claim 6, wherein the library of codesets is stored remotely from the controlling device and the at least one codeset is provisioned to the controlling device by being downloaded thereto. 8. The method as recited in claim 1, wherein the library of codesets is stored remotely from the controlling device and the at least one codeset is provisioned to the controlling device by being downloaded thereto. 9. The method as recited in claim 1, wherein the controlling device has a plurality of function keys activatable to cause a transmission of a command to the target appliance and wherein the method comprises receiving at the speech recognition engine voice data indicative of a command to be assigned to a function key of the controlling device whereupon the speech recognition engine uses the voice data indicative of a command to be assigned to a function key to identify within the at least one codeset command data that is cross-referenced to the command and causing the command data of the at least one codeset to be used by the controlling device in response to the function key being subsequently activated to cause a transmission of a command to the target device. 10. 
The method as recited in claim 1, further comprising receiving at the speech recognition engine voice data indicative of a command to be transmitted from the controlling device whereupon the speech recognition engine uses the voice data indicative of a command to be transmitted to identify within the at least one codeset command data that is cross-referenced to the command and causing the command data of the at least one codeset to be used to transmit a command to the target appliance. 11. The method as recited in claim 1, wherein location data is additionally utilized in the process of identifying the at least one codeset that is cross-referenced to the type for and the brand of the target appliance. 12. The method as recited in claim 2, wherein location data is additionally utilized in the process of identifying the at least one codeset that is cross-referenced to the model of the target appliance. 13. A system for configuring a controlling device to command functional operations of a target appliance, the system comprising: a processing device having associated instructions stored on a non-transient readable media which instructions, when executed by the processing device, cause a speech recognition engine to use received voice data indicative of at least a type for and a brand of the target appliance to identify within a library of codesets at least one codeset that is cross-referenced to the type for and the brand of the target appliance and to cause the at least one codeset to be provisioned to the controlling device for use in commanding functional operations of the target appliance. 14. The system as recited in claim 13, wherein the instructions are downloaded to the controlling device in a downloadable app. 15. The system as recited in claim 14, wherein the controlling device comprises one of a smart phone or a tablet computing device. 16. The system as recited in claim 13, wherein the speech recognition engine functions to identify a plurality of codesets that are cross-referenced to the type for and the brand of the appliance and the instructions further cause the speech recognition engine to use received voice data indicative of a model of the target appliance to identify within the plurality of codesets at least one codeset that is cross-referenced to a model of the target appliance whereupon the at least one codeset that is provisioned to the controlling device is the at least one codeset that is also cross-referenced to the model of the target appliance. 17. The system as recited in claim 16, wherein the controlling device comprises a memory having stored therein the library of codesets. 18. The system as recited in claim 16, wherein the library of codesets is stored remotely from the controlling device and the at least one codeset is provisioned to the controlling device by being downloaded thereto. 19. The system as recited in claim 13, wherein the processing device comprises one or more computing devices located remotely from the controlling device. 20. The system as recited in claim 19, wherein the controlling device comprises a memory having stored therein the library of codesets. 21. The system as recited in claim 19, wherein the library of codesets is stored remotely from the controlling device and the at least one codeset is provisioned to the controlling device by being downloaded thereto. 22. 
The system as recited in claim 13, wherein the controlling device has a plurality of function keys activatable to cause a transmission of a command to the target appliance and wherein the instructions cause the speech recognition engine to use voice data indicative of a command to be assigned to a function key of the controlling device to identify within the at least one codeset command data that is cross-referenced to the command and to cause the command data of the at least one codeset to be used by the controlling device in response to the function key being subsequently activated to cause a transmission of a command to the target device. 23. The system as recited in claim 13, wherein the instructions cause the speech recognition engine to use voice data indicative of a command to be transmitted from the controlling device to identify within the at least one codeset command data that is cross-referenced to the command and to cause the command data of the at least one codeset to be used to transmit a command to the target appliance. 24. The system as recited in claim 13, wherein the instructions additionally cause location data to be considered when identifying the at least one codeset that is cross-referenced to the brand of the target appliance. 25. The system as recited in claim 16, wherein the instructions additionally cause location data to be considered when identifying the at least one codeset that is cross-referenced to the model of the target appliance.
A speech recognition engine is provided voice data indicative of at least a brand of a target appliance. The speech recognition engine uses the voice data indicative of at least a brand of the target appliance to identify within a library of codesets at least one codeset that is cross-referenced to the brand of the target appliance. The at least one codeset so identified is then caused to be provisioned to the controlling device for use in commanding functional operations of the target appliance.1. A method for configuring a controlling device to command functional operations of a target appliance, the method comprising: receiving at a speech recognition engine voice data indicative of at least a type for and a brand of the target appliance whereupon the speech recognition engine uses the voice data indicative of at least the type for and the brand of the target appliance to identify within a library of codesets at least one codeset that is cross-referenced to the brand of the target appliance; and causing the at least one codeset to be provisioned to the controlling device for use in commanding functional operations of the target appliance. 2. The method as recited in claim 1, wherein the speech recognition engine identifies a plurality of codesets that are cross-referenced to the type for and the brand of the target appliance and the method further comprises receiving at the speech recognition engine voice data indicative of at least a model of the target appliance whereupon the speech recognition engine uses the voice data indicative of at least the model of the target appliance to identify within the plurality of codesets at least one codeset that is cross-referenced to the model of the target appliance and wherein the at least one codeset that is provisioned to the controlling device is the at least one codeset that is also cross-referenced to the model of the target appliance. 3. The method as recited in claim 2, wherein the controlling device comprises a microphone for receiving voice input used in creating the voice data. 4. The method as recited in claim 3, wherein the controlling device comprises a memory having stored therein the library of codesets. 5. The method as recited in claim 4, wherein the controlling device comprises a processing device and instructions for providing the speech recognition engine. 6. The method as recited in claim 3, wherein the speech recognition engine is executed on a computing device remote from the controlling device. 7. The method as recited in claim 6, wherein the library of codesets is stored remotely from the controlling device and the at least one codeset is provisioned to the controlling device by being downloaded thereto. 8. The method as recited in claim 1, wherein the library of codesets is stored remotely from the controlling device and the at least one codeset is provisioned to the controlling device by being downloaded thereto. 9. 
The method as recited in claim 1, wherein the controlling device has a plurality of function keys activatable to cause a transmission of a command to the target appliance and wherein the method comprises receiving at the speech recognition engine voice data indicative of a command to be assigned to a function key of the controlling device whereupon the speech recognition engine uses the voice data indicative of a command to be assigned to a function key to identify within the at least one codeset command data that is cross-referenced to the command and causing the command data of the at least one codeset to be used by the controlling device in response to the function key being subsequently activated to cause a transmission of a command to the target device. 10. The method as recited in claim 1, further comprising receiving at the speech recognition engine voice data indicative of a command to be transmitted from the controlling device whereupon the speech recognition engine uses the voice data indicative of a command to be transmitted to identify within the at least one codeset command data that is cross-referenced to the command and causing the command data of the at least one codeset to be used to transmit a command to the target appliance. 11. The method as recited in claim 1, wherein location data is additionally utilized in the process of identifying the at least one codeset that is cross-referenced to the type for and the brand of the target appliance. 12. The method as recited in claim 2, wherein location data is additionally utilized in the process of identifying the at least one codeset that is cross-referenced to the model of the target appliance. 13. A system for configuring a controlling device to command functional operations of a target appliance, the system comprising: a processing device having associated instructions stored on a non-transient readable media which instructions, when executed by the processing device, cause a speech recognition engine to use received voice data indicative of at least a type for and a brand of the target appliance to identify within a library of codesets at least one codeset that is cross-referenced to the type for and the brand of the target appliance and to cause the at least one codeset to be provisioned to the controlling device for use in commanding functional operations of the target appliance. 14. The system as recited in claim 13, wherein the instructions are downloaded to the controlling device in a downloadable app. 15. The system as recited in claim 14, wherein the controlling device comprises one of a smart phone or a tablet computing device. 16. The system as recited in claim 13, wherein the speech recognition engine functions to identify a plurality of codesets that are cross-referenced to the type for and the brand of the appliance and the instructions further cause the speech recognition engine to use received voice data indicative of a model of the target appliance to identify within the plurality of codesets at least one codeset that is cross-referenced to a model of the target appliance whereupon the at least one codeset that is provisioned to the controlling device is the at least one codeset that is also cross-referenced to the model of the target appliance. 17. The system as recited in claim 16, wherein the controlling device comprises a memory having stored therein the library of codesets. 18. 
The system as recited in claim 16, wherein the library of codesets is stored remotely from the controlling device and the at least one codeset is provisioned to the controlling device by being downloaded thereto. 19. The system as recited in claim 13, wherein the processing device comprises one or more computing devices located remotely from the controlling device. 20. The system as recited in claim 19, wherein the controlling device comprises a memory having stored therein the library of codesets. 21. The system as recited in claim 19, wherein the library of codesets is stored remotely from the controlling device and the at least one codeset is provisioned to the controlling device by being downloaded thereto. 22. The system as recited in claim 13, wherein the controlling device has a plurality of function keys activatable to cause a transmission of a command to the target appliance and wherein the instructions cause the speech recognition engine to use voice data indicative of a command to be assigned to a function key of the controlling device to identify within the at least one codeset command data that is cross-referenced to the command and to cause the command data of the at least one codeset to be used by the controlling device in response to the function key being subsequently activated to cause a transmission of a command to the target device. 23. The system as recited in claim 13, wherein the instructions cause the speech recognition engine to use voice data indicative of a command to be transmitted from the controlling device to identify within the at least one codeset command data that is cross-referenced to the command and to cause the command data of the at least one codeset to be used to transmit a command to the target appliance. 24. The system as recited in claim 13, wherein the instructions additionally cause location data to be considered when identifying the at least one codeset that is cross-referenced to the brand of the target appliance. 25. The system as recited in claim 16, wherein the instructions additionally cause location data to be considered when identifying the at least one codeset that is cross-referenced to the model of the target appliance.
2,600
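The lookup the record above describes is, at its core, a keyed search over a codeset library, first by spoken type and brand and then optionally narrowed by model. A minimal Python sketch with an invented library layout; the speech-to-text step is assumed to have already produced the strings:

# Hypothetical library layout; real codeset databases are vendor-specific.
LIBRARY = {
    ("tv", "acme"): [
        {"model": "x100", "codeset": "C-0431"},
        {"model": "x200", "codeset": "C-0432"},
    ],
}

def find_codesets(appliance_type, brand, model=None):
    # First pass: every codeset cross-referenced to the spoken type and brand.
    candidates = LIBRARY.get((appliance_type.lower(), brand.lower()), [])
    if model is None:
        return [c["codeset"] for c in candidates]
    # Second pass: narrow the candidates by the spoken model designation.
    return [c["codeset"] for c in candidates if c["model"] == model.lower()]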
10,473
10,473
15,353,763
2,653
Embodiments of the present invention disclose a method, computer program product, and system for real-time determination of attentiveness of an audience in a room to a speaker. A computer determines amounts of wireless activity of a computing device within the room of the audience over time during the presentation. Based in part on the amounts of wireless activity of the computing device, the computer determines and initiates display of measures of attentiveness of the audience over the time during the presentation, to display changes in the measures of attentiveness of the audience to the speaker during the presentation. In another embodiment, the computer determines amounts of attentiveness of the audience within the room over time during the presentation utilizing sensors located throughout the room.
1. A computer system for real-time determination of attentiveness of an audience in a room to a speaker making a presentation, the computer system comprising: one or more computer processors; one or more computer-readable storage devices; program instructions stored on the computer-readable storage devices for execution by at least one of the one or more processors, the program instructions comprising: program instructions to determine amounts of attentiveness of the audience within the room over time during the presentation utilizing sensors located throughout the room; and based in part on the amounts of attentiveness of the audience over time, program instructions to determine and initiate display of measures of attentiveness of the audience over the time during the presentation, to display changes in the measures of attentiveness of the audience to the speaker during the presentation. 2. The computer system of claim 1, further comprising program instructions to: monitor amounts of movement of members of the audience over the time during the presentation; and wherein the measures of attentiveness of the audience are also based in part on amounts of movement of the members of the audience. 3. The computer system of claim 1, further comprising program instructions to: monitor levels of audible noise of members of the audience over the time during the presentation; and wherein the measures of attentiveness of the audience are also based in part on the amounts of audible noise of the members of the audience. 4. The computer system of claim 3, wherein the audible noise of members of the audience does not include applause from members of the audience. 5. The computer system of claim 1, further comprising program instructions to: record the presentation correlated with the determined measures of attentiveness of the audience, wherein the determined measures of attentiveness of the audience correlate to the corresponding time in the presentation. 6. The computer system of claim 1, wherein the program instructions to initiate display of measures of attentiveness of the audience over time include an indication of whether the speaker has lost the audience.
Embodiments of the present invention disclose a method, computer program product, and system for real-time determination of attentiveness of an audience in a room to a speaker. A computer determines amounts of wireless activity of a computing device within the room of the audience over time during the presentation. Based in part on the amounts of wireless activity of the computing device, the computer determines and initiates display of measures of attentiveness of the audience over the time during the presentation, to display changes in the measures of attentiveness of the audience to the speaker during the presentation. In another embodiment, the computer determines amounts of attentiveness of the audience within the room over time during the presentation utilizing sensors located throughout the room.1. A computer system for real-time determination of attentiveness of an audience in a room to a speaker making a presentation, the computer system comprising: one or more computer processors; one or more computer-readable storage devices; program instructions stored on the computer-readable storage devices for execution by at least one of the one or more processors, the program instructions comprising: program instructions to determine amounts of attentiveness of the audience within the room over time during the presentation utilizing sensors located throughout the room; and based in part on the amounts of attentiveness of the audience over time, program instructions to determine and initiate display of measures of attentiveness of the audience over the time during the presentation, to display changes in the measures of attentiveness of the audience to the speaker during the presentation. 2. The computer system of claim 1, further comprising program instructions to: monitor amounts of movement of members of the audience over the time during the presentation; and wherein the measures of attentiveness of the audience are also based in part on amounts of movement of the members of the audience. 3. The computer system of claim 1, further comprising program instructions to: monitor levels of audible noise of members of the audience over the time during the presentation; and wherein the measures of attentiveness of the audience are also based in part on the amounts of audible noise of the members of the audience. 4. The computer system of claim 3, wherein the audible noise of members of the audience does not include applause from members of the audience. 5. The computer system of claim 1, further comprising program instructions to: record the presentation correlated with the determined measures of attentiveness of the audience, wherein the determined measures of attentiveness of the audience correlate to the corresponding time in the presentation. 6. The computer system of claim 1, wherein the program instructions to initiate display of measures of attentiveness of the audience over time include an indication of whether the speaker has lost the audience.
2,600
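As a rough worked example of how the sensed quantities in the record above could be folded into a single displayed measure, the Python sketch below combines normalized movement and noise signals into a 0..1 attentiveness score. The weights and thresholds are invented for illustration, and applause is assumed to have been filtered out of the noise level upstream:

def attentiveness_score(movement, noise_db, quiet_db=40.0, loud_db=70.0):
    # movement: normalized 0..1 audience motion from the room sensors.
    # noise_db: audible noise level, with applause already excluded.
    noise = min(max(noise_db - quiet_db, 0.0) / (loud_db - quiet_db), 1.0)
    score = 1.0 - (0.5 * min(movement, 1.0) + 0.5 * noise)
    return max(score, 0.0)

def speaker_lost_audience(scores, threshold=0.3):
    # One reading of the "lost the audience" indication: the score has stayed
    # below a threshold for the three most recent samples.
    return len(scores) >= 3 and all(s < threshold for s in scores[-3:])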
10,474
10,474
15,213,786
2,651
An external headpiece of an implantable hearing aid system, including an RF coil, a sound processing apparatus, a cylindrical battery, and a magnet configured to support the headpiece against skin of a recipient via a transcutaneous magnetic coupling with a magnet implanted in the recipient, wherein a longitudinal axis of the cylindrical battery extends through the magnet.
1. An external headpiece of a hearing prosthesis, comprising: an RF coil; a sound processing apparatus; a cylindrical battery; and a magnet configured to support the headpiece against skin of a recipient via a transcutaneous magnetic coupling with a magnet implanted in the recipient, wherein a longitudinal axis of the cylindrical battery extends through the magnet. 2. The external headpiece of claim 1, wherein the external headpiece is a button sound processor. 3. The external headpiece of claim 1, further comprising: a housing apparatus, wherein the magnet is located within the housing apparatus, and wherein the magnet retains the battery locationally within the housing apparatus. 4. The external headpiece of claim 1, further comprising: a housing apparatus, wherein the magnet is located within the housing apparatus, and wherein the magnet retains the battery against an electrical contact in electrical communication with the sound processing apparatus. 5. The external headpiece of claim 4, wherein: the magnet is part of a magnet assembly, and wherein the contact is established by the magnet assembly. 6. The external headpiece of claim 1, wherein: the magnet, the battery and the RF coil are coaxial with one another. 7. The external headpiece of claim 1, wherein: the external headpiece is configured such that an additional magnet can be added to the external headpiece, wherein the addition of the magnet changes the location of the battery relative to that which was the case prior to the addition of the additional magnet. 8. The external headpiece of claim 1, further comprising: a housing encasing the magnet, wherein the magnet is fixed relative to the housing. 9. An external component of a hearing prosthesis, comprising: a battery; an electrically powered component; and a magnet apparatus, wherein the magnet apparatus provides a path for electricity to flow from the battery to the electrically powered component or provides a path to complete the circuit from the electrically powered component to the battery. 10. The external component of claim 9, wherein: the external component is a button sound processor. 11. The external component of claim 9, wherein: the battery is an air battery having an anode can surface in direct contact with the magnet apparatus. 12. The external component of claim 9, wherein: the battery is an air battery having an anode can surface in direct contact with the magnet apparatus such that the magnet apparatus forms a negative contact of the circuit of which the electrically powered component is a part. 13. The external component of claim 9, further comprising: a plurality of magnet apparatuses including the magnet apparatus, wherein the plurality of magnet apparatuses provides the path for electricity to flow from the battery to the electrically powered component or provides the path to complete the circuit from the electrically powered component to the battery. 14. The external component of claim 9, wherein: the external component is configured such that the battery is variably positionable within the external component to accommodate a variable volume taken up by one or more magnetic components configured to adhere the external component to a recipient via a transcutaneous magnetic link, the one or more magnetic components including the magnet apparatus. 15. The external component of claim 9, wherein: the battery and the magnet apparatus are aligned with respect to their longitudinal axes. 16. 
An external component of a prosthesis, comprising: a battery; and a magnet apparatus, wherein the external component is configured such that a magnetic force generated by the magnet apparatus applies a force onto the battery such that the battery is urged against an electrical contact of a circuit of which the battery is a part. 17. The external component of claim 16, wherein: the external component is an external headpiece of an implantable hearing prosthesis; the external component includes a sound processing apparatus; and the battery is concentric with the magnet apparatus. 18. The external component of claim 16, wherein: the external component is configured such that the magnetic force pulls the battery against the electrical contact. 19. The external component of claim 16, wherein: the electrical contact is a component separate from the magnet apparatus. 20. The external component of claim 16, wherein: the electrical contact is the magnet apparatus. 21. The external component of claim 16, wherein: the external component is devoid of any battery force application components beyond that resulting from the magnetic force of the magnet apparatus. 22. The external component of claim 16, wherein: the battery and the magnet apparatus are physically separated by a partition. 23. The external component of claim 16, wherein: the external component includes an RF inductance coil; and the location of the battery with respect to a plane on which the coil extends is such that the Q factor of the coil is higher than that which would be the case if the battery was located at any other location in a direction parallel to that plane within the external component. 24. A method, comprising: obtaining a headpiece for a prosthesis, the headpiece including an electronic component of the prosthesis; attaching a magnet to the headpiece, the magnet establishing a magnetic field that extends external to the headpiece; and attaching a battery to the headpiece, wherein the action of attaching the magnet to the headpiece controls a location of the battery. 25. The method of claim 24, wherein: the battery is held in place within the headpiece as a result of the magnetic field generated by the magnet. 26. The method of claim 24, further comprising: before the action of attaching the magnet to the headpiece, wearing the headpiece against skin of a recipient supported via a first transcutaneous magnetic coupling established by another magnet in the headpiece; and wearing the headpiece against skin of the recipient supported via a second transcutaneous magnetic coupling established by the magnet. 27. The method of claim 24, wherein: the action of attaching the battery to the headpiece includes placing the battery into the magnetic field established by the magnet such that the battery is attracted towards the magnet. 28. The method of claim 24, wherein: the action of attaching the battery to the headpiece includes placing the battery into electrical conductivity with a component of the battery assembly of which the battery is a part. 29. The method of claim 24, wherein: the action of attaching the magnet to the headpiece includes placing the magnet over another magnet already in the headpiece, thereby increasing a strength of a magnetic field generated by the headpiece, wherein the magnetic field is configured to adhere the headpiece against a head of a recipient via a transcutaneous magnetic coupling established at least in part by the magnetic field. 30. 
The method of claim 24, wherein: the action of attaching the magnet to the headpiece includes placing the magnet over a non-magnetic spacer already in the headpiece.
An external headpiece of an implantable hearing aid system, including an RF coil, a sound processing apparatus, a cylindrical battery, and a magnet configured to support the headpiece against skin of a recipient via a transcutaneous magnetic coupling with a magnet implanted in the recipient, wherein a longitudinal axis of the cylindrical battery extends through the magnet.1. An external headpiece of a hearing prosthesis, comprising: an RF coil; a sound processing apparatus; a cylindrical battery; and a magnet configured to support the headpiece against skin of a recipient via a transcutaneous magnetic coupling with a magnet implanted in the recipient, wherein a longitudinal axis of the cylindrical battery extends through the magnet. 2. The external headpiece of claim 1, wherein the external headpiece is a button sound processor. 3. The external headpiece of claim 1, further comprising: a housing apparatus, wherein the magnet is located within the housing apparatus, and wherein the magnet retains the battery locationally within the housing apparatus. 4. The external headpiece of claim 1, further comprising: a housing apparatus, wherein the magnet is located within the housing apparatus, and wherein the magnet retains the battery against an electrical contact in electrical communication with the sound processing apparatus. 5. The external headpiece of claim 4, wherein: the magnet is part of a magnet assembly, and wherein the contact is established by the magnet assembly. 6. The external headpiece of claim 1, wherein: the magnet, the battery and the RF coil are coaxial with one another. 7. The external headpiece of claim 1, wherein: the external headpiece is configured such that an additional magnet can be added to the external headpiece, wherein the addition of the magnet changes the location of the battery relative to that which was the case prior to the addition of the additional magnet. 8. The external headpiece of claim 1, further comprising: a housing encasing the magnet, wherein the magnet is fixed relative to the housing. 9. An external component of a hearing prosthesis, comprising: a battery; an electrically powered component; and a magnet apparatus, wherein the magnet apparatus provides a path for electricity to flow from the battery to the electrically powered component or provides a path to complete the circuit from the electrically powered component to the battery. 10. The external component of claim 9, wherein: the external component is a button sound processor. 11. The external component of claim 9, wherein: the battery is an air battery having an anode can surface in direct contact with the magnet apparatus. 12. The external component of claim 9, wherein: the battery is an air battery having an anode can surface in direct contact with the magnet apparatus such that the magnet apparatus forms a negative contact of the circuit of which the electrically powered component is a part. 13. The external component of claim 9, further comprising: a plurality of magnet apparatuses including the magnet apparatus, wherein the plurality of magnet apparatuses provides the path for electricity to flow from the battery to the electrically powered component or provides the path to complete the circuit from the electrically powered component to the battery. 14. 
The external component of claim 9, wherein: the external component is configured such that the battery is variably positionable within the external component to accommodate a variable volume taken up by one or more magnetic components configured to adhere the external component to a recipient via a transcutaneous magnetic link, the one or more magnetic components including the magnet apparatus. 15. The external component of claim 9, wherein: the battery and the magnet apparatus are aligned with respect to their longitudinal axes. 16. An external component of a prosthesis, comprising: a battery; and a magnet apparatus, wherein the external component is configured such that a magnetic force generated by the magnet apparatus applies a force onto the battery such that the battery is urged against an electrical contact of a circuit of which the battery is a part. 17. The external component of claim 16, wherein: the external component is an external headpiece of an implantable hearing prosthesis; the external component includes a sound processing apparatus; and the battery is concentric with the magnet apparatus. 18. The external component of claim 16, wherein: the external component is configured such that the magnetic force pulls the battery against the electrical contact. 19. The external component of claim 16, wherein: the electrical contact is a component separate from the magnet apparatus. 20. The external component of claim 16, wherein: the electrical contact is the magnet apparatus. 21. The external component of claim 16, wherein: the external component is devoid of any battery force application components beyond that resulting from the magnetic force of the magnet apparatus. 22. The external component of claim 16, wherein: the battery and the magnet apparatus are physically separated by a partition. 23. The external component of claim 16, wherein: the external component includes an RF inductance coil; and the location of the battery with respect to a plane on which the coil extends is such that the Q factor of the coil is higher than that which would be the case if the battery was located at any other location in a direction parallel to that plane within the external component. 24. A method, comprising: obtaining a headpiece for a prosthesis, the headpiece including an electronic component of the prosthesis; attaching a magnet to the headpiece, the magnet establishing a magnetic field that extends external to the headpiece; and attaching a battery to the headpiece, wherein the action of attaching the magnet to the headpiece controls a location of the battery. 25. The method of claim 24, wherein: the battery is held in place within the headpiece as a result of the magnetic field generated by the magnet. 26. The method of claim 24, further comprising: before the action of attaching the magnet to the headpiece, wearing the headpiece against skin of a recipient supported via a first transcutaneous magnetic coupling established by another magnet in the headpiece; and wearing the headpiece against skin of the recipient supported via a second transcutaneous magnetic coupling established by the magnet. 27. The method of claim 24, wherein: the action of attaching the battery to the headpiece includes placing the battery into the magnetic field established by the magnet such that the battery is attracted towards the magnet. 28. 
The method of claim 24, wherein: the action of attaching the battery to the headpiece includes placing the battery into electrical conductivity with a component of a battery assembly of which the battery is a part. 29. The method of claim 24, wherein: the action of attaching the magnet to the headpiece includes placing the magnet over another magnet already in the headpiece, thereby increasing a strength of a magnetic field generated by the headpiece, wherein the magnetic field is configured to adhere the headpiece against a head of a recipient via a transcutaneous magnetic coupling established at least in part by the magnetic field. 30. The method of claim 24, wherein: the action of attaching the magnet to the headpiece includes placing the magnet over a non-magnetic spacer already in the headpiece.
2,600
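Claims 16 to 18 of the application above turn the retention magnet into the battery's hold-down: the magnetic pull on the battery must exceed any load that would unseat the battery from its electrical contact. A minimal sketch of that force balance follows; the function name and all numbers are illustrative assumptions, not values from the application.

```python
G = 9.81  # m/s^2

def battery_stays_seated(magnetic_force_n: float,
                         battery_mass_kg: float,
                         shock_g: float) -> bool:
    """Return True if the magnetic pull on the battery exceeds the
    worst-case inertial load trying to unseat it from its contact.

    magnetic_force_n: attractive force the magnet exerts on the battery (N)
    battery_mass_kg:  mass of the cylindrical cell (kg)
    shock_g:          worst-case acceleration budget, in multiples of g
    """
    worst_case_load_n = battery_mass_kg * G * shock_g
    return magnetic_force_n > worst_case_load_n

# Illustrative numbers: a 1 g zinc-air cell, 0.5 N of magnetic pull,
# and a 30 g shock budget -> 0.5 N > 0.29 N, so the battery stays seated.
print(battery_stays_seated(0.5, 0.001, 30.0))  # True
```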
10,475
10,475
14,691,770
2,626
An expansion device couples to a capacitive mat disposed on a desktop surface and communicates power and data through the capacitive mat to perform functions in support of an information handling system. For example, the expansion device powers an inductive charger to charge a peripheral device, accepts end user touch inputs with a capacitive touch sensor, and presents visual information at an integrated display in response to pixel values provided by the information handling system through the capacitive mat.
1. An information handling system modular capacitive mat comprising: a capacitive mat having an integrated display operable to present visual images generated by an information handling system, the capacitive mat disposed on a desktop surface and including capacitive sensors operable to detect touches and communicate touch locations to the information handling system; an expansion coupling device disposed at a side of the capacitive mat and operable to couple with an expansion device and communicate electrical signals between the capacitive mat and the expansion device; and an expansion device configured to couple to the coupling device, the expansion device having an upper surface operable to perform a function supported by the electrical signals. 2. The information handling system modular capacitive mat of claim 1 further comprising: an inductive charger disposed proximate the upper surface and operable to inductively charge a device placed on the upper surface; and a power controller disposed in the expansion device and operable to accept power from the electrical signals and selectively apply power to the inductive charger. 3. The information handling system modular capacitive mat of claim 2 further comprising illumination integrated in the expansion device and operable to identify the inductive charger location as an aid to placement of the device on the upper surface. 4. The information handling system modular capacitive mat of claim 3 further comprising a proximity sensor interfaced with the power controller and operable to selectively apply power to the inductive charger upon placement of a device on the charger location. 5. The information handling system modular capacitive mat of claim 2 further comprising: first and second rails extending outward from the capacitive mat to create an opening sized to fit the expansion device; and a power interface extending through the rails to communicate power to the inductive charger. 6. The information handling system modular capacitive mat of claim 1 further comprising capacitive sensors disposed proximate the upper surface to perform a touch pad function. 7. The information handling system modular capacitive mat of claim 1 further comprising: a display integrated in the expansion device and operable to present visual images; and a display interface integrated with the expansion coupling device to accept visual information from the information handling system sent through the capacitive mat. 8. The information handling system modular capacitive mat of claim 7 wherein the display interface comprises a DisplayPort interface operable to daisychain visual information from the information handling system. 9. The information handling system modular capacitive mat of claim 1 further comprising: capacitive sensors disposed proximate the upper surface to detect touch inputs; and a totem disposed on the upper surface and having identifying capacitive information detectable by the capacitive sensors to distinguish end user inputs to the totem as having a predetermined totem function. 10. A method for expanding a capacitive mat functional surface, the method comprising: coupling an expansion device to a coupling device of the capacitive mat; communicating power from the capacitive mat to the expansion device through the coupling device; and applying the power to perform a function at an upper surface of the expansion device. 11. 
The method of claim 10 further comprising: presenting visual images with a display integrated in the capacitive mat; communicating visual information from the capacitive mat to the expansion device; and presenting visual images using the visual information with a display integrated in the expansion device. 12. The method of claim 11 wherein the coupling device comprises a DisplayPort interface. 13. The method of claim 11 further comprising: accepting end user touch inputs at the expansion device upper surface; and communicating the end user touch inputs through the coupling device to the capacitive mat. 14. The method of claim 10 wherein the applying the power further comprises: applying power to an inductive charger disposed below the upper surface; placing a peripheral device on the upper surface; and charging a battery of the peripheral device with power communicated from the inductive charger. 15. The method of claim 14 further comprising illuminating the upper surface to indicate a location for placement of the peripheral device to accept charging. 16. The method of claim 14 wherein the peripheral device comprises a totem having an integrated processing device and battery. 17. An information handling system comprising: a processor; memory interfaced with the processor; a graphics system interfaced with the processor and memory, the graphics system operable to process visual information into pixel values for presentation at a display; a capacitive mat interfaced with the processor and disposed on a desktop surface, the capacitive mat having a capacitive touch sensor operable to detect touches and send touch positions to the processor; and an expansion device adapted to couple to a side of the capacitive mat and rest on the desktop surface, the expansion device operable to accept power from the capacitive mat and to apply the power to perform a function. 18. The information handling system of claim 17 wherein the function of the expansion device comprises applying power to an inductive charger to inductively charge a peripheral placed on top of the expansion device. 19. The information handling system of claim 17 wherein the function of the expansion device comprises accepting touch inputs with a capacitive touch sensor and communicating the touch inputs through the capacitive mat to the processor. 20. The information handling system of claim 17 wherein the function of the expansion device comprises presenting visual images with a display integrated in the expansion device, the visual images presented with pixel values communicated from the graphics system through the capacitive mat.
An expansion device couples to a capacitive mat disposed on a desktop surface and communicates power and data through the capacitive mat to perform functions in support of an information handling system. For example, the expansion device powers an inductive charger to charge a peripheral device, accepts end user touch inputs with a capacitive touch sensor, and presents visual information at an integrated display in response to pixel values provided by the information handling system through the capacitive mat.1. An information handling system modular capacitive mat comprising: a capacitive mat having an integrated display operable to present visual images generated by an information handling system, the capacitive mat disposed on a desktop surface and including capacitive sensors operable to detect touches and communicate touch locations to the information handling system; an expansion coupling device disposed at a side of the capacitive mat and operable to couple with an expansion device and communicate electrical signals between the capacitive mat and the expansion device; and an expansion device configured to couple to the coupling device, the expansion device having an upper surface operable to perform a function supported by the electrical signals. 2. The information handling system modular capacitive mat of claim 1 further comprising: an inductive charger disposed proximate the upper surface and operable to inductively charge a device placed on the upper surface; and a power controller disposed in the expansion device and operable to accept power from the electrical signals and selectively apply power to the inductive charger. 3. The information handling system modular capacitive mat of claim 2 further comprising illumination integrated in the expansion device and operable to identify the inductive charger location as an aid to placement of the device on the upper surface. 4. The information handling system modular capacitive mat of claim 3 further comprising a proximity sensor interfaced with the power controller and operable to selectively apply power to the inductive charger upon placement of a device on the charger location. 5. The information handling system modular capacitive mat of claim 2 further comprising: first and second rails extending outward from the capacitive mat to create an opening sized to fit the expansion device; and a power interface extending through the rails to communicate power to the inductive charger. 6. The information handling system modular capacitive mat of claim 1 further comprising capacitive sensors disposed proximate the upper surface to perform a touch pad function. 7. The information handling system modular capacitive mat of claim 1 further comprising: a display integrated in the expansion device and operable to present visual images; and a display interface integrated with the expansion coupling device to accept visual information from the information handling system sent through the capacitive mat. 8. The information handling system modular capacitive mat of claim 7 wherein the display interface comprises a DisplayPort interface operable to daisychain visual information from the information handling system. 9. 
The information handling system modular capacitive mat of claim 1 further comprising: capacitive sensors disposed proximate the upper surface to detect touch inputs; and a totem disposed on the upper surface and having identifying capacitive information detectable by the capacitive sensors to distinguish end user inputs to the totem as having a predetermined totem function. 10. A method for expanding a capacitive mat functional surface, the method comprising: coupling an expansion device to a coupling device of the capacitive mat; communicating power from the capacitive mat to the expansion device through the coupling device; and applying the power to perform a function at an upper surface of the expansion device. 11. The method of claim 10 further comprising: presenting visual images with a display integrated in the capacitive mat; communicating visual information from the capacitive mat to the expansion device; and presenting visual images using the visual information with a display integrated in the expansion device. 12. The method of claim 11 wherein the coupling device comprises a DisplayPort interface. 13. The method of claim 11 further comprising: accepting end user touch inputs at the expansion device upper surface; and communicating the end user touch inputs through the coupling device to the capacitive mat. 14. The method of claim 10 wherein the applying the power further comprises: applying power to an inductive charger disposed below the upper surface; placing a peripheral device on the upper surface; and charging a battery of the peripheral device with power communicated from the inductive charger. 15. The method of claim 14 further comprising illuminating the upper surface to indicate a location for placement of the peripheral device to accept charging. 16. The method of claim 14 wherein the peripheral device comprises a totem having an integrated processing device and battery. 17. An information handling system comprising: a processor; memory interfaced with the processor; a graphics system interfaced with the processor and memory, the graphics system operable to process visual information into pixel values for presentation at a display; a capacitive mat interfaced with the processor and disposed on a desktop surface, the capacitive mat having a capacitive touch sensor operable to detect touches and send touch positions to the processor; and an expansion device adapted to couple to a side of the capacitive mat and rest on the desktop surface, the expansion device operable to accept power from the capacitive mat and to apply the power to perform a function. 18. The information handling system of claim 17 wherein the function of the expansion device comprises applying power to an inductive charger to inductively charge a peripheral placed on top of the expansion device. 19. The information handling system of claim 17 wherein the function of the expansion device comprises accepting touch inputs with a capacitive touch sensor and communicating the touch inputs through the capacitive mat to the processor. 20. The information handling system of claim 17 wherein the function of the expansion device comprises presenting visual images with a display integrated in the expansion device, the visual images presented with pixel values communicated from the graphics system through the capacitive mat.
2,600
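Claims 2 to 4 of the application above describe a power controller in the expansion device that gates mat-supplied power to the inductive charger from a proximity sensor, with illumination marking the charge location. A minimal sketch of that gating logic, with hypothetical class and method names:

```python
class ExpansionPowerController:
    """Hypothetical controller for the expansion device's charge location."""

    def __init__(self) -> None:
        self.charger_on = False
        self.illumination_on = True  # light the charge location as a placement aid

    def on_proximity(self, device_present: bool) -> None:
        # Selectively apply mat-supplied power: energise the inductive
        # charger only while a device rests on the charge location, and
        # switch the placement illumination off once charging begins.
        self.charger_on = device_present
        self.illumination_on = not device_present

ctrl = ExpansionPowerController()
ctrl.on_proximity(True)   # peripheral placed on the upper surface
assert ctrl.charger_on and not ctrl.illumination_on
ctrl.on_proximity(False)  # peripheral removed
assert not ctrl.charger_on and ctrl.illumination_on
```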
10,476
10,476
15,288,052
2,646
The present disclosure generally discloses an interference mitigation capability. The present disclosure discloses use of dirty paper coding in a wireless communication network in order to mitigate interference in the wireless communication network. The wireless communication network may be a heterogeneous wireless communication network, where heterogeneity may be based on wireless access device technology type, wireless access device transmit power, or the like. For example, the wireless communication network may be a heterogeneous wireless communication network including a first type of wireless access device (e.g., a small cell device, such as a metro cell, microcell, picocell, femtocell, or the like) and a second type of wireless access device (e.g., a large cell device, such as a macro cell), where the first type of wireless access device is configured to use dirty paper coding to mitigate interference from the second type of wireless access device.
1. An apparatus, comprising: a processor and a memory communicatively connected to the processor, the processor configured to: receive, by a first wireless access device from a wireless end device associated with the first wireless access device, feedback information comprising information indicative of channel estimate information for a channel between the wireless end device and the first wireless access device and information indicative of channel estimate information for a channel between the wireless end device and a second wireless access device; receive, by the first wireless access device from the second wireless access device, information indicative of a transmit sequence to be transmitted by the second wireless access device using a set of wireless resources; determine, by the first wireless access device using a dirty paper coding scheme and based on the feedback information and the information indicative of the transmit sequence to be transmitted by the second wireless access device, a transmit sequence for transmission by the first wireless access device toward the wireless end device using the set of wireless resources; and transmit the transmit sequence toward the wireless end device using the set of wireless resources. 2. The apparatus of claim 1, wherein the information indicative of the channel estimate information for the channel between the wireless end device and the first wireless access device comprises the channel estimate information for the channel between the wireless end device and the first wireless access device. 3. The apparatus of claim 2, wherein the channel estimate information for the channel between the wireless end device and the first wireless access device comprises a product of estimated channel information for the channel between the wireless end device and the first wireless access device and a filter vector of a receiver of the wireless end device. 4. The apparatus of claim 1, wherein the information indicative of the channel estimate information for the channel between the wireless end device and the first wireless access device comprises estimated channel information for the channel between the wireless end device and the first wireless access device and a filter vector of a receiver of the wireless end device. 5. The apparatus of claim 4, wherein the processor is configured to: compute the channel estimate information for the channel between the wireless end device and the first wireless access device as a product of the estimated channel information for the channel between the wireless end device and the first wireless access device and the filter vector of the receiver of the wireless end device. 6. The apparatus of claim 1, wherein the feedback information further comprises a strength of a sum of noise and uncancelled interference at the wireless end device. 7. The apparatus of claim 1, wherein the processor is configured to: send, from the first wireless access device toward the wireless end device based on a determination that the first wireless access device is scheduling a transmission to the wireless end device, a request for the wireless end device to provide the feedback information to the first wireless access device. 8. The apparatus of claim 1, wherein the information indicative of the transmit sequence to be transmitted by the second wireless access device using the set of wireless resources comprises the transmit sequence to be transmitted by the second wireless access device using the set of wireless resources. 9. 
The apparatus of claim 1, wherein the information indicative of the transmit sequence to be transmitted by the second wireless access device using the set of wireless resources comprises: a set of raw information bits to be transmitted by the second wireless access device using the set of wireless resources; and an indication of a Modulation and Coding Scheme (MCS) to be used by the second wireless access device to transmit the raw information bits using the set of wireless resources. 10. The apparatus of claim 9, wherein the processor is configured to: determine the transmit sequence to be transmitted by the second wireless access device using the set of wireless resources based on the raw information bits to be transmitted by the second wireless access device using the set of wireless resources and the indication of the MCS to be used by the second wireless access device to transmit the raw information bits using the set of wireless resources. 11. The apparatus of claim 1, wherein the processor is configured to: send, from the first wireless access device toward the second wireless access device, a request for the information indicative of the transmit sequence to be transmitted by the second wireless access device using the set of wireless resources. 12. The apparatus of claim 1, wherein the processor is configured to: select the second wireless access device from a set of candidate wireless access devices. 13. The apparatus of claim 1, wherein the processor is configured to: transmit the transmit sequence toward the wireless end device using the set of wireless resources. 14. The apparatus of claim 1, wherein the first wireless access device comprises a first type of wireless access device and the second wireless access device comprises a second type of wireless access device. 15. The apparatus of claim 14, wherein the first type of wireless access device comprises a metro cell device, wherein the second type of wireless access device comprises a macro cell device. 16. An apparatus, comprising: a processor and a memory communicatively connected to the processor, the processor configured to: determine, by a wireless end device connected to a first wireless access device, feedback information comprising information indicative of channel estimate information for a channel between the wireless end device and the first wireless access device and information indicative of channel estimate information for a channel between the wireless end device and a second wireless access device; send the feedback information from the wireless end device toward the first wireless access device; and receive, by the wireless end device from the first wireless access device, a wireless received sequence. 17. The apparatus of claim 16, wherein the first wireless access device is a first type of wireless access device and the second wireless access device is a second type of wireless access device. 18. The apparatus of claim 17, wherein the first type of wireless access device comprises a metro cell device, wherein the second type of wireless access device comprises a macro cell device. 19. The apparatus of claim 16, wherein the information indicative of the channel estimate information for the channel between the wireless end device and the first wireless access device comprises the channel estimate information for the channel between the wireless end device and the first wireless access device. 20. 
The apparatus of claim 19, wherein the channel estimate information for the channel between the wireless end device and the first wireless access device comprises a product of estimated channel information for the channel between the wireless end device and the first wireless access device and a filter vector of a receiver of the wireless end device. 21. The apparatus of claim 16, wherein the information indicative of the channel estimate information for the channel between the wireless end device and the first wireless access device comprises estimated channel information for the channel between the wireless end device and the first wireless access device and a filter vector of a receiver of the wireless end device. 22. The apparatus of claim 16, wherein the feedback information further comprises a strength of a sum of noise and uncancelled interference at the wireless end device. 23. The apparatus of claim 16, wherein the processor is configured to determine and send the feedback information based on an instruction for the wireless end device to determine and send feedback information to the first wireless access device. 24. An apparatus, comprising: a processor and a memory communicatively connected to the processor, the processor configured to: receive, at a first wireless access device from a second wireless access device, a request for information indicative of a transmit sequence to be transmitted by the first wireless access device using a set of wireless resources; and send, from the first wireless access device toward the second wireless access device, a response including the information indicative of the transmit sequence to be transmitted by the first wireless access device using the set of wireless resources. 25. The apparatus of claim 24, wherein the information indicative of the transmit sequence to be transmitted by the first wireless access device using the set of wireless resources comprises the transmit sequence to be transmitted by the first wireless access device using the set of wireless resources. 26. The apparatus of claim 24, wherein the information indicative of the transmit sequence to be transmitted by the first wireless access device using the set of wireless resources comprises a set of raw information bits to be transmitted by the first wireless access device and an indication of a modulation and coding scheme (MCS) to be used by the first wireless access device to transmit the raw information bits.
The present disclosure generally discloses an interference mitigation capability. The present disclosure discloses use of dirty paper coding in a wireless communication network in order to mitigate interference in the wireless communication network. The wireless communication network may be a heterogeneous wireless communication network, where heterogeneity may be based on wireless access device technology type, wireless access device transmit power, or the like. For example, the wireless communication network may be a heterogeneous wireless communication network including a first type of wireless access device (e.g., a small cell device, such as a metro cell, microcell, picocell, femtocell, or the like) and a second type of wireless access device (e.g., a large cell device, such as a macro cell), where the first type of wireless access device is configured to use dirty paper coding to mitigate interference from the second type of wireless access device.1. An apparatus, comprising: a processor and a memory communicatively connected to the processor, the processor configured to: receive, by a first wireless access device from a wireless end device associated with the first wireless access device, feedback information comprising information indicative of channel estimate information for a channel between the wireless end device and the first wireless access device and information indicative of channel estimate information for a channel between the wireless end device and a second wireless access device; receive, by the first wireless access device from the second wireless access device, information indicative of a transmit sequence to be transmitted by the second wireless access device using a set of wireless resources; determine, by the first wireless access device using a dirty paper coding scheme and based on the feedback information and the information indicative of the transmit sequence to be transmitted by the second wireless access device, a transmit sequence for transmission by the first wireless access device toward the wireless end device using the set of wireless resources; and transmit the transmit sequence toward the wireless end device using the set of wireless resources. 2. The apparatus of claim 1, wherein the information indicative of the channel estimate information for the channel between the wireless end device and the first wireless access device comprises the channel estimate information for the channel between the wireless end device and the first wireless access device. 3. The apparatus of claim 2, wherein the channel estimate information for the channel between the wireless end device and the first wireless access device comprises a product of estimated channel information for the channel between the wireless end device and the first wireless access device and a filter vector of a receiver of the wireless end device. 4. The apparatus of claim 1, wherein the information indicative of the channel estimate information for the channel between the wireless end device and the first wireless access device comprises estimated channel information for the channel between the wireless end device and the first wireless access device and a filter vector of a receiver of the wireless end device. 5. 
The apparatus of claim 4, wherein the processor is configured to: compute the channel estimate information for the channel between the wireless end device and the first wireless access device as a product of the estimated channel information for the channel between the wireless end device and the first wireless access device and the filter vector of the receiver of the wireless end device. 6. The apparatus of claim 1, wherein the feedback information further comprises a strength of a sum of noise and uncancelled interference at the wireless end device. 7. The apparatus of claim 1, wherein the processor is configured to: send, from the first wireless access device toward the wireless end device based on a determination that the first wireless access device is scheduling a transmission to the wireless end device, a request for the wireless end device to provide the feedback information to the first wireless access device. 8. The apparatus of claim 1, wherein the information indicative of the transmit sequence to be transmitted by the second wireless access device using the set of wireless resources comprises the transmit sequence to be transmitted by the second wireless access device using the set of wireless resources. 9. The apparatus of claim 1, wherein the information indicative of the transmit sequence to be transmitted by the second wireless access device using the set of wireless resources comprises: a set of raw information bits to be transmitted by the second wireless access device using the set of wireless resources; and an indication of a Modulation and Coding Scheme (MCS) to be used by the second wireless access device to transmit the raw information bits using the set of wireless resources. 10. The apparatus of claim 9, wherein the processor is configured to: determine the transmit sequence to be transmitted by the second wireless access device using the set of wireless resources based on the raw information bits to be transmitted by the second wireless access device using the set of wireless resources and the indication of the MCS to be used by the second wireless access device to transmit the raw information bits using the set of wireless resources. 11. The apparatus of claim 1, wherein the processor is configured to: send, from the first wireless access device toward the second wireless access device, a request for the information indicative of the transmit sequence to be transmitted by the second wireless access device using the set of wireless resources. 12. The apparatus of claim 1, wherein the processor is configured to: select the second wireless access device from a set of candidate wireless access devices. 13. The apparatus of claim 1, wherein the processor is configured to: transmit the transmit sequence toward the wireless end device using the set of wireless resources. 14. The apparatus of claim 1, wherein the first wireless access device comprises a first type of wireless access device and the second wireless access device comprises a second type of wireless access device. 15. The apparatus of claim 14, wherein the first type of wireless access device comprises a metro cell device, wherein the second type of wireless access device comprises a macro cell device. 16. 
An apparatus, comprising: a processor and a memory communicatively connected to the processor, the processor configured to: determine, by a wireless end device connected to a first wireless access device, feedback information comprising information indicative of channel estimate information for a channel between the wireless end device and the first wireless access device and information indicative of channel estimate information for a channel between the wireless end device and a second wireless access device; send the feedback information from the wireless end device toward the first wireless access device; and receive, by the wireless end device from the first wireless access device, a wireless received sequence. 17. The apparatus of claim 16, wherein the first wireless access device is a first type of wireless access device and the second wireless access device is a second type of wireless access device. 18. The apparatus of claim 17, wherein the first type of wireless access device comprises a metro cell device, wherein the second type of wireless access device comprises a macro cell device. 19. The apparatus of claim 16, wherein the information indicative of the channel estimate information for the channel between the wireless end device and the first wireless access device comprises the channel estimate information for the channel between the wireless end device and the first wireless access device. 20. The apparatus of claim 19, wherein the channel estimate information for the channel between the wireless end device and the first wireless access device comprises a product of estimated channel information for the channel between the wireless end device and the first wireless access device and a filter vector of a receiver of the wireless end device. 21. The apparatus of claim 16, wherein the information indicative of the channel estimate information for the channel between the wireless end device and the first wireless access device comprises estimated channel information for the channel between the wireless end device and the first wireless access device and a filter vector of a receiver of the wireless end device. 22. The apparatus of claim 16, wherein the feedback information further comprises a strength of a sum of noise and uncancelled interference at the wireless end device. 23. The apparatus of claim 16, wherein the processor is configured to determine and send the feedback information based on an instruction for the wireless end device to determine and send feedback information to the first wireless access device. 24. An apparatus, comprising: a processor and a memory communicatively connected to the processor, the processor configured to: receive, at a first wireless access device from a second wireless access device, a request for information indicative of a transmit sequence to be transmitted by the first wireless access device using a set of wireless resources; and send, from the first wireless access device toward the second wireless access device, a response including the information indicative of the transmit sequence to be transmitted by the first wireless access device using the set of wireless resources. 25. The apparatus of claim 24, wherein the information indicative of the transmit sequence to be transmitted by the first wireless access device using the set of wireless resources comprises the transmit sequence to be transmitted by the first wireless access device using the set of wireless resources. 26. 
The apparatus of claim 24, wherein the information indicative of the transmit sequence to be transmitted by the first wireless access device using the set of wireless resources comprises a set of raw information bits to be transmitted by the first wireless access device and an indication of a modulation and coding scheme (MCS) to be used by the first wireless access device to transmit the raw information bits.
2,600
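The claims above have the first access device learn the second's transmit sequence and precode around it with dirty paper coding. A classical realization of that idea is modulo presubtraction (Tomlinson-Harashima-style precoding): the transmitter subtracts the known interference and folds the result back into a bounded interval, and the receiver applies the same modulo without ever knowing the interference. The sketch below assumes a scalar channel with no noise; the interval width and symbol values are illustrative.

```python
M = 8.0  # modulo interval width; symbols live in [-M/2, M/2)

def fold(x: float) -> float:
    """Fold x back into the interval [-M/2, M/2)."""
    return (x + M / 2) % M - M / 2

def dpc_encode(symbol: float, known_interference: float) -> float:
    # Presubtract the interference the first access device already knows
    # the second will transmit; folding keeps transmit power bounded.
    return fold(symbol - known_interference)

def dpc_decode(received: float) -> float:
    # The receiver applies the same fold; the interference cancels
    # without the wireless end device ever learning it.
    return fold(received)

symbol = 1.5          # data symbol for the wireless end device
interference = 13.2   # transmit sequence of the other access device
tx = dpc_encode(symbol, interference)
rx = tx + interference        # the channel adds the interference back
print(dpc_decode(rx))         # ~1.5, recovered up to floating-point rounding
```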
10,477
10,477
15,960,697
2,672
On a touch-panel display of an image forming apparatus, pieces of information are displayed in five areas, namely a system area, a function selecting area, a preview area, an action panel area and a task trigger area, the arrangement of which is kept unchanged even when operational modes are switched. With such an arrangement, the same or similar pieces of information are displayed in an area at the same position even in different operational modes.
1. (canceled) 2. An image processing apparatus provided with an operation key and having a normal state and an energy-saving state, wherein in response to an operation of said operation key, a process that differs depending on whether said image processing apparatus is in said normal state or said energy-saving state when said operation key is operated is executed. 3. The image processing apparatus according to claim 2, wherein a process related to a function corresponding to said operation key is executed if said image processing apparatus is in said normal state when said operation key is operated; and a process for returning to said normal state is executed if said image processing apparatus is in said energy-saving state when said operation key is operated. 4. The image processing apparatus according to claim 2, wherein said operation key is an energy-saving key causing said image processing apparatus to make a transition from said normal state to said energy-saving state. 5. The image processing apparatus according to claim 3, wherein said operation key is an energy-saving key causing said image processing apparatus to make a transition from said normal state to said energy-saving state. 6. A method of controlling an image processing apparatus provided with an operation key and having a normal state and an energy-saving state, comprising the steps of: causing said image processing apparatus to receive an operation of said operation key; and causing said image processing apparatus to execute, in response to an operation of said operation key, a process that differs depending on whether said image processing apparatus is in said normal state or said energy-saving state when said operation key is operated. 7. A non-transitory computer-readable recording medium recording a program for controlling an image processing apparatus provided with an operation key and having a normal state and an energy-saving state, wherein said program causes said image processing apparatus to execute the steps of: receiving an operation of said operation key; and in response to an operation of said operation key, executing a process that differs depending on whether said image processing apparatus is in said normal state or said energy-saving state when said operation key is operated.
On a touch-panel display of an image forming apparatus, pieces of information are displayed in five areas, namely a system area, a function selecting area, a preview area, an action panel area and a task trigger area, the arrangement of which is kept unchanged even when operational modes are switched. With such an arrangement, the same or similar pieces of information are displayed in an area at the same position even in different operational modes.1. (canceled) 2. An image processing apparatus provided with an operation key and having a normal state and an energy-saving state, wherein in response to an operation of said operation key, a process that differs depending on whether said image processing apparatus is in said normal state or said energy-saving state when said operation key is operated is executed. 3. The image processing apparatus according to claim 2, wherein a process related to a function corresponding to said operation key is executed if said image processing apparatus is in said normal state when said operation key is operated; and a process for returning to said normal state is executed if said image processing apparatus is in said energy-saving state when said operation key is operated. 4. The image processing apparatus according to claim 2, wherein said operation key is an energy-saving key causing said image processing apparatus to make a transition from said normal state to said energy-saving state. 5. The image processing apparatus according to claim 3, wherein said operation key is an energy-saving key causing said image processing apparatus to make a transition from said normal state to said energy-saving state. 6. A method of controlling an image processing apparatus provided with an operation key and having a normal state and an energy-saving state, comprising the steps of: causing said image processing apparatus to receive an operation of said operation key; and causing said image processing apparatus to execute, in response to an operation of said operation key, a process that differs depending on whether said image processing apparatus is in said normal state or said energy-saving state when said operation key is operated. 7. A non-transitory computer-readable recording medium recording a program for controlling an image processing apparatus provided with an operation key and having a normal state and an energy-saving state, wherein said program causes said image processing apparatus to execute the steps of: receiving an operation of said operation key; and in response to an operation of said operation key, executing a process that differs depending on whether said image processing apparatus is in said normal state or said energy-saving state when said operation key is operated.
2,600
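Claims 2 to 5 above hinge on one operation key whose effect depends on the apparatus state at the moment it is pressed. A minimal sketch of that dispatch, with hypothetical class and method names:

```python
from enum import Enum, auto

class State(Enum):
    NORMAL = auto()
    ENERGY_SAVING = auto()

class ImageProcessingApparatus:
    def __init__(self) -> None:
        self.state = State.NORMAL

    def on_energy_saving_key(self) -> str:
        # The same key executes a different process depending on the
        # state at the moment it is operated: enter the energy-saving
        # state from normal, or return to normal from energy saving.
        if self.state is State.NORMAL:
            self.state = State.ENERGY_SAVING
            return "entered energy-saving state"
        self.state = State.NORMAL
        return "returned to normal state"

mfp = ImageProcessingApparatus()
print(mfp.on_energy_saving_key())  # entered energy-saving state
print(mfp.on_energy_saving_key())  # returned to normal state
```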
10,478
10,478
15,989,745
2,625
An information processing apparatus includes a sensor electrode, a detection unit, and a determination unit. The sensor electrode has a capacitance that changes in accordance with an operation on an operation surface. The detection unit is configured to detect, based on a change of the capacitance, a displacement of an operation point on the operation surface. The determination unit is configured to determine a press on the operation surface based on a displacement of a capacitance value of the sensor electrode and the displacement of the operation point.
1-11. (canceled) 12. An information processing apparatus, comprising: a display; a touch panel which includes an x-electrode sensor and a y-electrode sensor that have a capacitance that changes in accordance with an operation by an operation object made upon or in proximity to an operation surface; and circuitry configured to detect, based on change of capacitance, a displacement of a coordinate value of an operation point of the operation surface, wherein the operation point corresponds to a point at which the operation object comes into contact with or comes within a proximity range to the operation surface, and determine a press to the operation surface based on change of capacitance of at least one of the x-electrode sensor and the y-electrode sensor at the operation point and the displacement of the coordinate value of the operation point. 13. The information processing apparatus according to claim 12, wherein the circuitry detects the displacement based on capacitance values of the x-electrode sensor and the y-electrode sensor that have been subjected to signal processing. 14. The information processing apparatus according to claim 13, wherein the signal processing comprises a filter processing of the capacitance values of the x-electrode sensor and the y-electrode sensor. 15. The information processing apparatus according to claim 12, wherein the circuitry determines the press based on capacitance values of the x-electrode sensor and the y-electrode sensor that have been subjected to signal processing. 16. The information processing apparatus according to claim 15, wherein the signal processing comprises a filter processing of the capacitance values of the x-electrode sensor and the y-electrode sensor. 17. The information processing apparatus according to claim 12, wherein the circuitry determines the press based on capacitance values of the x-electrode sensor and the y-electrode sensor that have been subjected to signal processing and normalized. 18. The information processing apparatus according to claim 12, wherein the x-electrode sensor and the y-electrode sensor are each formed of a transparent material. 19. The information processing apparatus according to claim 12, wherein the circuitry is further configured to execute a predetermined process based on the determined press. 20. The information processing apparatus according to claim 12, wherein the circuitry is further configured to determine a plurality of simultaneous presses to the operation surface. 21. The information processing apparatus according to claim 12, wherein the circuitry determines the press to the operation surface based on change over a time period of capacitance of at least one of the x-electrode sensor and the y-electrode sensor at the operation point and the displacement of the coordinate value of the operation point. 22. The information processing apparatus according to claim 12, wherein the circuitry is further configured to calculate the coordinate value of the operation point, and determine the press based on a correlation between the change of capacitance and the displacement of the coordinate value. 23. The information processing apparatus according to claim 22, wherein the circuitry is further configured to determine the press based on an inclination of a regression line of a value of the capacitance and the coordinate value. 24. 
The information processing apparatus according to claim 22, wherein the circuitry is further configured to determine the press based on a correlation coefficient between a value of the capacitance and the coordinate value. 25. The information processing apparatus according to claim 22, wherein the circuitry is further configured to calculate the coordinate value of each of a plurality of operation points, and determine the press at each of the plurality of operation points. 26. An information processing method, comprising: detecting a displacement of a coordinate value of an operation point of an operation surface of a touch panel, based on capacitance of an x-electrode sensor and a y-electrode sensor that change in accordance with an operation by an operation object made upon or in proximity to the operation surface, wherein the operation point corresponds to a point at which the operation object comes into contact with or comes within a proximity range to the operation surface; and determining a press to the operation surface based on change of capacitance of at least one of the x-electrode sensor and the y-electrode sensor at the operation point and the displacement of the coordinate value of the operation point. 27. A non-transitory computer-readable medium having embodied thereon a program, which when executed by a computer causes the computer to execute a method, the method comprising: detecting a displacement of a coordinate value of an operation point of an operation surface of a touch panel, based on capacitance of an x-electrode sensor and a y-electrode sensor that change in accordance with an operation by an operation object made upon or in proximity to the operation surface, wherein the operation point corresponds to a point at which the operation object comes into contact with or comes within a proximity range to the operation surface; and determining a press to the operation surface based on change of capacitance of at least one of the x-electrode sensor and the y-electrode sensor at the operation point and the displacement of the coordinate value of the operation point.
An information processing apparatus includes a sensor electrode, a detection unit, and a determination unit. The sensor electrode has a capacitance that changes in accordance with an operation on an operation surface. The detection unit is configured to detect, based on a change of the capacitance, a displacement of an operation point on the operation surface. The determination unit is configured to determine a press on the operation surface based on a displacement of a capacitance value of the sensor electrode and the displacement of the operation point.1-11. (canceled) 12. An information processing apparatus, comprising: a display; a touch panel which includes an x-electrode sensor and a y-electrode sensor that have a capacitance that changes in accordance with an operation by an operation object made upon or in proximity to an operation surface; and circuitry configured to detect, based on change of capacitance, a displacement of a coordinate value of an operation point of the operation surface, wherein the operation point corresponds to a point at which the operation object comes into contact with or comes within a proximity range to the operation surface, and determine a press to the operation surface based on change of capacitance of at least one of the x-electrode sensor and the y-electrode sensor at the operation point and the displacement of the coordinate value of the operation point. 13. The information processing apparatus according to claim 12, wherein the circuitry detects the displacement based on capacitance values of the x-electrode sensor and the y-electrode sensor that have been subjected to signal processing. 14. The information processing apparatus according to claim 13, wherein the signal processing comprises a filter processing of the capacitance values of the x-electrode sensor and the y-electrode sensor. 15. The information processing apparatus according to claim 12, wherein the circuitry determines the press based on capacitance values of the x-electrode sensor and the y-electrode sensor that have been subjected to signal processing. 16. The information processing apparatus according to claim 15, wherein the signal processing comprises a filter processing of the capacitance values of the x-electrode sensor and the y-electrode sensor. 17. The information processing apparatus according to claim 12, wherein the circuitry determines the press based on capacitance values of the x-electrode sensor and the y-electrode sensor that have been subjected to signal processing and normalized. 18. The information processing apparatus according to claim 12, wherein the x-electrode sensor and the y-electrode sensor are each formed of a transparent material. 19. The information processing apparatus according to claim 12, wherein the circuitry is further configured to execute a predetermined process based on the determined press. 20. The information processing apparatus according to claim 12, wherein the circuitry is further configured to determine a plurality of simultaneous presses to the operation surface. 21. The information processing apparatus according to claim 12, wherein the circuitry determines the press to the operation surface based on change over a time period of capacitance of at least one of the x-electrode sensor and the y-electrode sensor at the operation point and the displacement of the coordinate value of the operation point. 22. 
The information processing apparatus according to claim 12, wherein the circuitry is further configured to calculate the coordinate value of the operation point, and determine the press based on a correlation between the change of capacitance and the displacement of the coordinate value. 23. The information processing apparatus according to claim 22, wherein the circuitry is further configured to determine the press based on an inclination of a regression line of a value of the capacitance and the coordinate value. 24. The information processing apparatus according to claim 22, wherein the circuitry is further configured to determine the press based on a correlation coefficient between a value of the capacitance and the coordinate value. 25. The information processing apparatus according to claim 22, wherein the circuitry is further configured to calculate the coordinate value of each of a plurality of operation points, and determine the press at each of the plurality of operation points. 26. An information processing method, comprising: detecting a displacement of a coordinate value of an operation point of an operation surface of a touch panel, based on capacitance of an x-electrode sensor and a y-electrode sensor that change in accordance with an operation by an operation object made upon or in proximity to the operation surface, wherein the operation point corresponds to a point at which the operation object comes into contact with or comes within a proximity range to the operation surface; and determining a press to the operation surface based on change of capacitance of at least one of the x-electrode sensor and the y-electrode sensor at the operation point and the displacement of the coordinate value of the operation point. 27. A non-transitory computer-readable medium having embodied thereon a program, which when executed by a computer causes the computer to execute a method, the method comprising: detecting a displacement of a coordinate value of an operation point of an operation surface of a touch panel, based on capacitance of an x-electrode sensor and a y-electrode sensor that change in accordance with an operation by an operation object made upon or in proximity to the operation surface, wherein the operation point corresponds to a point at which the operation object comes into contact with or comes within a proximity range to the operation surface; and determining a press to the operation surface based on change of capacitance of at least one of the x-electrode sensor and the y-electrode sensor at the operation point and the displacement of the coordinate value of the operation point.
2,600
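Claims 22 to 24 above determine a press from the statistical relationship between the capacitance samples and the operation-point coordinate: the inclination of their regression line and their correlation coefficient. A minimal sketch of one plausible decision rule follows; the thresholds and the specific rule are assumptions, not taken from the application.

```python
import numpy as np

def detect_press(capacitance: np.ndarray, coordinate: np.ndarray,
                 slope_threshold: float = 5.0,
                 corr_threshold: float = 0.8) -> bool:
    """Decide whether a touch is a press from paired samples of sensor
    capacitance and operation-point coordinate.

    A press flattens the fingertip: capacitance keeps rising while the
    coordinate barely moves, so the regression line of capacitance
    against coordinate is steep and the two stay strongly correlated.
    """
    slope, _ = np.polyfit(coordinate, capacitance, 1)  # regression-line inclination
    r = np.corrcoef(coordinate, capacitance)[0, 1]     # correlation coefficient
    return abs(slope) > slope_threshold and abs(r) > corr_threshold

cap = np.array([10.0, 14.0, 18.0, 22.0, 26.0])

# Press: capacitance climbs while the finger stays nearly still.
print(detect_press(cap, np.array([5.00, 5.02, 5.04, 5.06, 5.08])))  # True

# Drag: capacitance changes slowly while the finger travels.
print(detect_press(cap * 0.1 + 10, np.array([5.0, 6.0, 7.0, 8.0, 9.0])))  # False
```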
10,479
10,479
15,708,538
2,689
Embodiments herein relate to a method for handling a driver's use of an object displaying device included in a vehicle. The driver's use of the object displaying device is monitored. Based on the monitoring, a likelihood that the driver has detected a representation of an object when using the object displaying device is determined. The representation of the object is visible to the driver in the object displaying device, and the object is located in the surroundings of the vehicle.
1. A method for handling a driver's use of an object displaying device included in a vehicle, the method comprising: monitoring the driver's use of the object displaying device; and based on the monitoring, determining a likelihood that the driver has detected a representation of an object when using the object displaying device, wherein the representation of the object is visible to the driver in the object displaying device, and wherein the object is located in surroundings of the vehicle. 2. The method according to claim 1 further comprising: based on the determined likelihood, adapting safety and driver support systems. 3. The method according to claim 1 wherein the vehicle comprises a plurality of object displaying devices, wherein the representation of the object is visible in at least two of the object displaying devices, and wherein a weight parameter is associated with each of the at least two object displaying devices in the likelihood determination. 4. The method according to claim 3 wherein the weight parameter is based on at least one of: a distance between the object and the vehicle, and a size of the object. 5. The method according to claim 1 wherein the monitoring provides at least one of glance time information, head direction information, gaze direction information and eye opening information, wherein the glance time information indicates a glance time for the driver's glance into the object displaying device, wherein the head direction information indicates a head direction of the driver in relation to the object displaying device, and wherein the eye opening information indicates a degree of opening of the driver's eyes when glancing into the object displaying device. 6. The method according to claim 5 wherein the likelihood determination is further based on at least one of: a size of the object, distance between the object and the vehicle, contrast of the object in relation to the vehicle's surroundings, light conditions, sun angle in relation to motion of the vehicle and object. 7. The method according to claim 1 wherein the likelihood determination is based on at least one of: a size of the object, distance between the object and the vehicle, contrast of the object in relation to the vehicle's surroundings, light conditions, sun angle in relation to motion of the vehicle and object. 8. The method according to claim 1 further comprising: obtaining a confirmation that the driver has detected the representation of the object when using the object displaying device. 9. The method according to claim 1 further comprising: based on the determined likelihood, providing feedback to the driver related to the driver's usage of the object displaying device. 10. The method according to claim 9 wherein the likelihood is determined based on whether at least one of a haptic feedback device or an acoustic feedback device is activated or not, and wherein the at least one of the haptic feedback device and the acoustic feedback device indicates to the driver that the object is located in the surroundings of the vehicle. 11. The method according to claim 1 wherein the likelihood is determined based on whether at least one of a haptic feedback device or an acoustic feedback device is activated or not, and wherein the at least one of the haptic feedback device and the acoustic feedback device indicates to the driver that the object is located in the surroundings of the vehicle. 12. 
The method according to claim 1 further comprising: storing monitoring information of the driver's use of the object displaying device over time. 13. The method according to claim 1 wherein the object displaying device is at least one of a mirror or display which makes the object or a representation of the object visible to the driver. 14. A system for handling a driver's use of an object displaying device included in a vehicle, the system being configured to: monitor the driver's use of the object displaying device; and based on the monitoring, determine a likelihood that the driver has detected a representation of an object when using the object displaying device, wherein the representation of the object is visible to the driver in the object displaying device, and wherein the object is located in surroundings of the vehicle. 15. A vehicle comprising the system according to claim 14. 16. A non-transitory storage medium having stored computer executable instructions which, when executed on at least one processor of a system, cause the system to: monitor a driver's use of an object displaying device; and based on the monitored use of the object displaying device, determine a likelihood that the driver has detected a representation of an object when using the object displaying device, wherein the representation of the object is visible to the driver in the object displaying device, and wherein the object is located in surroundings of the vehicle. 17. The storage medium of claim 16 wherein the medium comprises a memory.
Embodiments herein relate to a method for handling a driver's use of an object displaying device comprised in a vehicle. The driver's use of the object displaying device is monitored. Based on the monitoring, a likelihood that the driver has detected a representation of the object when using the object displaying device is determined. The representation of the object is visible to the driver in the object displaying device, and the object is located in the surroundings of the vehicle.1. A method for handling a driver's use of an object displaying device included in a vehicle, the method comprising: monitoring the driver's use of the object displaying device; and based on the monitoring, determining a likelihood that the driver has detected a representation of an object when using the object displaying device, wherein the representation of the object is visible to the driver in the object displaying device, and wherein the object is located in surroundings of the vehicle. 2. The method according to claim 1 further comprising: based on the determined likelihood, adapting safety and driver support systems. 3. The method according to claim 1 wherein the vehicle comprises a plurality of object displaying devices, wherein the representation of the object is visible in at least two of the object displaying devices, and wherein a weight parameter is associated with each of the at least two object displaying devices in the likelihood determination. 4. The method according to claim 3 wherein the weight parameter is based on at least one of: a distance between the object and the vehicle, and a size of the object. 5. The method according to claim 1 wherein the monitoring provides at least one of glance time information, head direction information, gaze direction information and eye opening information, wherein the glance time information indicates a glance time for the driver's glance into the object displaying device, wherein the head direction information indicates a head direction of the driver in relation to the object displaying device, and wherein the eye opening information indicates a degree of opening of the driver's eyes when glancing into the object displaying device. 6. The method according to claim 5 wherein the likelihood determination is further based on at least one of: a size of the object, distance between the object and the vehicle, contrast of the object in relation to the vehicle's surroundings, light conditions, sun angle in relation to motion of the vehicle and object. 7. The method according to claim 1 wherein the likelihood determination is based on at least one of: a size of the object, distance between the object and the vehicle, contrast of the object in relation to the vehicle's surroundings, light conditions, sun angle in relation to motion of the vehicle and object. 8. The method according to claim 1 further comprising: obtaining a confirmation that the driver has detected the representation of the object when using the object displaying device. 9. The method according to claim 1 further comprising: based on the determined likelihood, providing feedback to the driver related to the driver's usage of the object displaying device. 10. 
The method according to claim 9 wherein the likelihood is determined based on whether at least one of a haptic feedback device or an acoustic feedback device is activated or not, and wherein the at least one of the haptic feedback device and the acoustic feedback device indicate to the driver that the object is located in the surroundings of the vehicle. 11. The method according to claim 1 wherein the likelihood is determined based on whether at least one of a haptic feedback device or an acoustic feedback device is activated or not, and wherein the at least one of the haptic feedback device and the acoustic feedback device indicate to the driver that the object is located in the surroundings of the vehicle. 12. The method according to claim 1 further comprising: storing monitoring information of the driver's use of the object displaying device over time. 13. The method according to claim 1 wherein the object displaying device is at least one of a mirror or display which makes the object or a representation of the object visible to the driver. 14. A system for handling a driver's use of an object displaying device included in a vehicle, the system being configured to: monitor the driver's use of the object displaying device; and based on the monitoring, determine a likelihood that the driver has detected a representation of an object when using the object displaying device, wherein the representation of the object is visible to the driver in the object displaying device, and wherein the object is located in surroundings of the vehicle. 15. A vehicle comprising the system according to claim 14. 16. A non-transitory storage medium having stored computer executable instructions which, when executed on at least one processor of a system, cause the system to: monitor a driver's use of an object displaying device; and based on the monitored use of the object displaying device, determine a likelihood that the driver has detected a representation of an object when using the object displaying device, wherein the representation of the object is visible to the driver in the object displaying device, and wherein the object is located in surroundings of the vehicle. 17. The storage medium of claim 16 wherein the medium comprises a memory.
2,600
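The driver-monitoring claims above combine per-device weights (claims 3-4) with glance-time and gaze-direction measurements (claim 5) into a single detection likelihood. The sketch below is one minimal way to realize that combination; the noisy-OR merge, the 0.3 s minimum glance time, and the size-over-distance weighting are illustrative assumptions, not details taken from the application.

from dataclasses import dataclass
from typing import List

@dataclass
class Glance:
    device_id: str        # which mirror or display the driver looked at
    glance_time_s: float  # claim 5: glance time into the device
    gaze_on_device: bool  # claim 5: gaze direction actually hit the device

def device_weight(distance_m: float, object_size_m: float) -> float:
    # Claims 3-4: weight each displaying device by object distance and size.
    # Assumed form: larger and nearer objects are easier to detect.
    return min(1.0, object_size_m / max(distance_m, 0.1))

def detection_likelihood(glances: List[Glance], distance_m: float,
                         object_size_m: float, min_glance_s: float = 0.3) -> float:
    # Merge per-device evidence with a noisy-OR so several short glances
    # can accumulate into a high likelihood (assumed combiner).
    p_missed = 1.0
    for g in glances:
        if not g.gaze_on_device:
            continue  # the driver never actually looked at this device
        score = min(1.0, g.glance_time_s / min_glance_s)  # saturates at 1
        p_missed *= 1.0 - device_weight(distance_m, object_size_m) * score
    return 1.0 - p_missed

glances = [Glance("left_mirror", 0.5, True), Glance("rear_display", 0.2, True)]
print(round(detection_likelihood(glances, distance_m=12.0, object_size_m=1.8), 2))

A downstream safety system (claim 2) could then threshold this value, e.g. suppressing a lane-change warning only when the likelihood exceeds some calibrated cutoff.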
10,480
10,480
15,004,134
2,641
A communication method and an apparatus are provided herein. The method includes: sending, by a mobility management entity (MME), a track area update (TAU) accept message to a user equipment (UE), the TAU accept message comprising an identifier constructed from at least a resource pool identifier (pool-ID) that identifies a resource pool in a public land mobile network (PLMN), a mobility management entity identifier (MME-ID) that uniquely identifies the MME within the resource pool, and a UE temporary identifier that uniquely identifies the UE within the MME; and receiving, by the MME, a TAU complete message from the UE.
1. A mobility management entity (MME), comprising: a transmitter that sends a track area update (TAU) accept message to a user equipment (UE), the TAU accept message comprising an identifier that uniquely identifies the UE in a public land mobile network (PLMN), and the identifier is constructed from at least a resource pool identifier (pool-ID) that identifies a resource pool in the PLMN, a mobility management entity identifier (MME-ID) that uniquely identifies the MME within the resource pool, and a UE temporary identifier that uniquely identifies the UE within the MME; and a receiver that receives a TAU complete message from the UE. 2. The MME according to claim 1, wherein the pool-ID is unique in the PLMN. 3. The MME according to claim 1, wherein the MME-ID is unique in the resource pool. 4. The MME according to claim 1, wherein the UE temporary identifier is unique in the MME. 5. The MME according to claim 1, wherein the MME-ID is unique in the resource pool and the UE temporary identifier is unique in the MME. 6. A communication method, comprising: sending, by a mobility management entity (MME), a track area update (TAU) accept message to a user equipment (UE), the TAU accept message comprising an identifier that uniquely identifies the UE in a public land mobile network (PLMN), and the identifier is constructed from at least a resource pool identifier (pool-ID) that identifies a resource pool in the PLMN, a mobility management entity identifier (MME-ID) that uniquely identifies the MME within the resource pool, and a UE temporary identifier that uniquely identifies the UE within the MME; and receiving, by the MME, a TAU complete message from the UE. 7. The method according to claim 6, wherein the pool-ID is unique in the PLMN. 8. The method according to claim 6, wherein the MME-ID is unique in the resource pool. 9. The method according to claim 6, wherein the UE temporary identifier is unique in the MME. 10. The method according to claim 6, wherein the MME-ID is unique in the resource pool and the UE temporary identifier is unique in the MME.
A communication method and an apparatus are provided herein. The method includes: sending, by a mobility management entity (MME), a track area update (TAU) accept message to a user equipment (UE), the TAU accept message comprising an identifier constructed from at least a resource pool identifier (pool-ID) that identifies a resource pool in a public land mobile network (PLMN), a mobility management entity identifier (MME-ID) that uniquely identifies the MME within the resource pool, and a UE temporary identifier that uniquely identifies the UE within the MME; and receiving, by the MME, a TAU complete message from the UE.1. A mobility management entity (MME), comprising: a transmitter that sends a track area update (TAU) accept message to a user equipment (UE), the TAU accept message comprising an identifier that uniquely identifies the UE in a public land mobile network (PLMN), and the identifier is constructed from at least a resource pool identifier (pool-ID) that identifies a resource pool in the PLMN, a mobility management entity identifier (MME-ID) that uniquely identifies the MME within the resource pool, and a UE temporary identifier that uniquely identifies the UE within the MME; and a receiver that receives a TAU complete message from the UE. 2. The MME according to claim 1, wherein the pool-ID is unique in the PLMN. 3. The MME according to claim 1, wherein the MME-ID is unique in the resource pool. 4. The MME according to claim 1, wherein the UE temporary identifier is unique in the MME. 5. The MME according to claim 1, wherein the MME-ID is unique in the resource pool and the UE temporary identifier is unique in the MME. 6. A communication method, comprising: sending, by a mobility management entity (MME), a track area update (TAU) accept message to a user equipment (UE), the TAU accept message comprising an identifier that uniquely identifies the UE in a public land mobile network (PLMN), and the identifier is constructed from at least a resource pool identifier (pool-ID) that identifies a resource pool in the PLMN, a mobility management entity identifier (MME-ID) that uniquely identifies the MME within the resource pool, and a UE temporary identifier that uniquely identifies the UE within the MME; and receiving, by the MME, a TAU complete message from the UE. 7. The method according to claim 6, wherein the pool-ID is unique in the PLMN. 8. The method according to claim 6, wherein the MME-ID is unique in the resource pool. 9. The method according to claim 6, wherein the UE temporary identifier is unique in the MME. 10. The method according to claim 6, wherein the MME-ID is unique in the resource pool and the UE temporary identifier is unique in the MME.
2,600
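Claim 1 above builds a PLMN-unique UE identifier by concatenating a pool-ID, an MME-ID and a UE temporary identifier, each unique one level down in the hierarchy. A straightforward way to picture this is fixed-width bit packing, sketched below; the field widths are assumptions chosen for the example and are not taken from the application or from the 3GPP specifications.

# Illustrative bit-packing of the three-part UE identifier.
POOL_ID_BITS = 8    # resource pool, unique within the PLMN (assumed width)
MME_ID_BITS = 8     # MME, unique within its pool (assumed width)
UE_TMP_BITS = 32    # UE temporary id, unique within its MME (assumed width)

def pack_ue_identifier(pool_id: int, mme_id: int, ue_tmp: int) -> int:
    # Concatenate pool-ID | MME-ID | UE temporary id into one integer.
    assert 0 <= pool_id < (1 << POOL_ID_BITS)
    assert 0 <= mme_id < (1 << MME_ID_BITS)
    assert 0 <= ue_tmp < (1 << UE_TMP_BITS)
    return ((pool_id << (MME_ID_BITS + UE_TMP_BITS))
            | (mme_id << UE_TMP_BITS)
            | ue_tmp)

def unpack_ue_identifier(ident: int):
    # Recover the three components. Uniqueness at each level makes the
    # packed value unique PLMN-wide, as claim 1 requires.
    ue_tmp = ident & ((1 << UE_TMP_BITS) - 1)
    mme_id = (ident >> UE_TMP_BITS) & ((1 << MME_ID_BITS) - 1)
    pool_id = ident >> (MME_ID_BITS + UE_TMP_BITS)
    return pool_id, mme_id, ue_tmp

ident = pack_ue_identifier(pool_id=3, mme_id=17, ue_tmp=0xDEADBEEF)
assert unpack_ue_identifier(ident) == (3, 17, 0xDEADBEEF)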
10,481
10,481
15,994,795
2,674
Systems and processes for operating an intelligent automated assistant are provided. In one example process, discourse input representing a user request is received. The process determines whether the discourse input relates to a device of an established location. In response to determining that the discourse input relates to a device of an established location, a data structure representing a set of devices of the established location is retrieved. The process determines, using the data structure, a user intent corresponding to the discourse input, the user intent associated with an action to be performed by a device of the set of devices, and a criterion to be satisfied prior to performing the action. The action and the device are stored in association with the criterion, where, in accordance with a determination that the criterion is satisfied, the action is performed by the device.
1. A method for operating a digital assistant, the method comprising: at an electronic device with a processor and memory: receiving discourse input representing a user request; determining whether the discourse input relates to a device of an established location; in response to determining that the discourse input relates to a device of an established location, retrieving a data structure representing a set of devices of the established location; determining, using the data structure and the discourse input, a user intent corresponding to: an action to be performed by a device of the set of devices and a criterion to be satisfied prior to performing the action; and storing the action and the device in association with the criterion, wherein the action is performed by the device in accordance with a determination that the criterion is satisfied. 2. The method of claim 1, wherein the criterion is associated with an actual device characteristic of a second device of the set of devices. 3. The method of claim 2, wherein determining the user intent further comprises: determining, based on the discourse input and data structure, the actual device characteristic of the second device; and determining, based on the data structure and the discourse input, the second device from the set of devices. 4. The method of claim 2, wherein the criterion comprises a requirement that an actual value representing the actual device characteristic is greater than, equal to, or less than a threshold value. 5. The method of claim 4, wherein determining the user intent further comprises determining, based on the discourse input, the threshold value. 6. The method of claim 1, wherein the criterion is associated with an operating state of a third device of the set of devices. 7. The method of claim 6, wherein the criterion comprises a requirement that the operating state of the third device is equal to a reference operating state. 8. The method of claim 6, wherein the criterion comprises a requirement that the operating state of the third device transitions from a second reference operating state to a third reference operating state. 9. The method of claim 1, wherein the criterion comprises a requirement that the action was performed less than a predetermined number of times within a predetermined period of time. 10. The method of claim 1, wherein the criterion comprises a requirement that a time of the electronic device is equal to or greater than a reference time. 11. The method of claim 10, wherein determining the user intent further comprises determining the reference time from the discourse input. 12. The method of claim 10, wherein determining the reference time further comprises: determining a second reference time from the discourse input; and determining a duration associated with the reference time, wherein the reference time is determined based on the second reference time and the duration. 13. The method of claim 1, further comprising: receiving data associated with the criterion; determining from the received data whether the criterion is satisfied; and in response to determining that the criterion is satisfied, providing instructions that cause the device of the set of devices to perform the action. 14. The method of claim 1, wherein the user intent is associated with a second criterion to be satisfied prior to performing the action. 15. The method of claim 14, wherein satisfying the second criterion requires the criterion to be satisfied. 16. 
The method of claim 14, further comprising: receiving second data associated with the second criterion; and determining from the received second data whether the second criterion is satisfied, wherein the instructions are provided in response to determining that the second criterion is satisfied. 17. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by one or more processors of an electronic device, cause the electronic device to: receive discourse input representing a user request; determine whether the discourse input relates to a device of an established location; in response to determining that the discourse input relates to a device of an established location, retrieve a data structure representing a set of devices of the established location; determine, using the data structure and the discourse input, a user intent corresponding to: an action to be performed by a device of the set of devices and a criterion to be satisfied prior to performing the action; and store the action and the device in association with the criterion, wherein the action is performed by the device in accordance with a determination that the criterion is satisfied. 18. The computer-readable storage medium of claim 17, wherein the criterion is associated with an actual device characteristic of a second device of the set of devices. 19. The computer-readable storage medium of claim 18, wherein determining the user intent further comprises: determining, based on the discourse input and data structure, the actual device characteristic of the second device; and determining, based on the data structure and the discourse input, the second device from the set of devices. 20. The computer-readable storage medium of claim 18, wherein the criterion comprises a requirement that an actual value representing the actual device characteristic is greater than, equal to, or less than a threshold value. 21. The computer-readable storage medium of claim 20, wherein determining the user intent further comprises determining, based on the discourse input, the threshold value. 22. The computer-readable storage medium of claim 17, wherein the criterion is associated with an operating state of a third device of the set of devices. 23. The computer-readable storage medium of claim 22, wherein the criterion comprises a requirement that the operating state of the third device is equal to a reference operating state. 24. The computer-readable storage medium of claim 22, wherein the criterion comprises a requirement that the operating state of the third device transitions from a second reference operating state to a third reference operating state. 25. The computer-readable storage medium of claim 17, wherein the criterion comprises a requirement that the action was performed less than a predetermined number of times within a predetermined period of time. 26. The computer-readable storage medium of claim 17, wherein the criterion comprises a requirement that a time of the electronic device is equal to or greater than a reference time. 27. The computer-readable storage medium of claim 26, wherein determining the user intent further comprises determining the reference time from the discourse input. 28. 
The computer-readable storage medium of claim 26, wherein determining the reference time further comprises: determining a second reference time from the discourse input; and determining a duration associated with the reference time, wherein the reference time is determined based on the second reference time and the duration. 29. The computer-readable storage medium of claim 17, further comprising: receiving data associated with the criterion; determining from the received data whether the criterion is satisfied; and in response to determining that the criterion is satisfied, providing instructions that cause the device of the set of devices to perform the action. 30. The computer-readable storage medium of claim 17, wherein the user intent is associated with a second criterion to be satisfied prior to performing the action. 31. The computer-readable storage medium of claim 30, wherein satisfying the second criterion requires the criterion to be satisfied. 32. The computer-readable storage medium of claim 30, further comprising: receiving second data associated with the second criterion; and determining from the received second data whether the second criterion is satisfied, wherein the instructions are provided in response to determining that the second criterion is satisfied. 33. An electronic device comprising: one or more processors; and memory storing one or more programs, the one or more programs including instructions which, when executed by the one or more processors, cause the one or more processors to: receive discourse input representing a user request; determine whether the discourse input relates to a device of an established location; in response to determining that the discourse input relates to a device of an established location, retrieve a data structure representing a set of devices of the established location; determine, using the data structure and the discourse input, a user intent corresponding to: an action to be performed by a device of the set of devices and a criterion to be satisfied prior to performing the action; and store the action and the device in association with the criterion, wherein the action is performed by the device in accordance with a determination that the criterion is satisfied.
Systems and processes for operating an intelligent automated assistant are provided. In one example process, discourse input representing a user request is received. The process determines whether the discourse input relates to a device of an established location. In response to determining that the discourse input relates to a device of an established location, a data structure representing a set of devices of the established location is retrieved. The process determines, using the data structure, a user intent corresponding to the discourse input, the user intent associated with an action to be performed by a device of the set of devices, and a criterion to be satisfied prior to performing the action. The action and the device are stored in association with the criterion, where, in accordance with a determination that the criterion is satisfied, the action is performed by the device.1. A method for operating a digital assistant, the method comprising: at an electronic device with a processor and memory: receiving discourse input representing a user request; determining whether the discourse input relates to a device of an established location; in response to determining that the discourse input relates to a device of an established location, retrieving a data structure representing a set of devices of the established location; determining, using the data structure and the discourse input, a user intent corresponding to: an action to be performed by a device of the set of devices and a criterion to be satisfied prior to performing the action; and storing the action and the device in association with the criterion, wherein the action is performed by the device in accordance with a determination that the criterion is satisfied. 2. The method of claim 1, wherein the criterion is associated with an actual device characteristic of a second device of the set of devices. 3. The method of claim 2, wherein determining the user intent further comprises: determining, based on the discourse input and data structure, the actual device characteristic of the second device; and determining, based on the data structure and the discourse input, the second device from the set of devices. 4. The method of claim 2, wherein the criterion comprises a requirement that an actual value representing the actual device characteristic is greater than, equal to, or less than a threshold value. 5. The method of claim 4, wherein determining the user intent further comprises determining, based on the discourse input, the threshold value. 6. The method of claim 1, wherein the criterion is associated with an operating state of a third device of the set of devices. 7. The method of claim 6, wherein the criterion comprises a requirement that the operating state of the third device is equal to a reference operating state. 8. The method of claim 6, wherein the criterion comprises a requirement that the operating state of the third device transitions from a second reference operating state to a third reference operating state. 9. The method of claim 1, wherein the criterion comprises a requirement that the action was performed less than a predetermined number of times within a predetermined period of time. 10. The method of claim 1, wherein the criterion comprises a requirement that a time of the electronic device is equal to or greater than a reference time. 11. The method of claim 10, wherein determining the user intent further comprises determining the reference time from the discourse input. 12. 
The method of claim 10, wherein determining the reference time further comprises: determining a second reference time from the discourse input; and determining a duration associated with the reference time, wherein the reference time is determined based on the second reference time and the duration. 13. The method of claim 1, further comprising: receiving data associated with the criterion; determining from the received data whether the criterion is satisfied; and in response to determining that the criterion is satisfied, providing instructions that cause the device of the set of devices to perform the action. 14. The method of claim 1, wherein the user intent is associated with a second criterion to be satisfied prior to performing the action. 15. The method of claim 14, wherein satisfying the second criterion requires the criterion to be satisfied. 16. The method of claim 14, further comprising: receiving second data associated with the second criterion; and determining from the received second data whether the second criterion is satisfied, wherein the instructions are provided in response to determining that the second criterion is satisfied. 17. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by one or more processors of an electronic device, cause the electronic device to: receive discourse input representing a user request; determine whether the discourse input relates to a device of an established location; in response to determining that the discourse input relates to a device of an established location, retrieve a data structure representing a set of devices of the established location; determine, using the data structure and the discourse input, a user intent corresponding to: an action to be performed by a device of the set of devices and a criterion to be satisfied prior to performing the action; and store the action and the device in association with the criterion, wherein the action is performed by the device in accordance with a determination that the criterion is satisfied. 18. The computer-readable storage medium of claim 17, wherein the criterion is associated with an actual device characteristic of a second device of the set of devices. 19. The computer-readable storage medium of claim 18, wherein determining the user intent further comprises: determining, based on the discourse input and data structure, the actual device characteristic of the second device; and determining, based on the data structure and the discourse input, the second device from the set of devices. 20. The computer-readable storage medium of claim 18, wherein the criterion comprises a requirement that an actual value representing the actual device characteristic is greater than, equal to, or less than a threshold value. 21. The computer-readable storage medium of claim 20, wherein determining the user intent further comprises determining, based on the discourse input, the threshold value. 22. The computer-readable storage medium of claim 17, wherein the criterion is associated with an operating state of a third device of the set of devices. 23. The computer-readable storage medium of claim 22, wherein the criterion comprises a requirement that the operating state of the third device is equal to a reference operating state. 24. 
The computer-readable storage medium of claim 22, wherein the criterion comprises a requirement that the operating state of the third device transitions from a second reference operating state to a third reference operating state. 25. The computer-readable storage medium of claim 17, wherein the criterion comprises a requirement that the action was performed less than a predetermined number of times within a predetermined period of time. 26. The computer-readable storage medium of claim 17, wherein the criterion comprises a requirement that a time of the electronic device is equal to or greater than a reference time. 27. The computer-readable storage medium of claim 26, wherein determining the user intent further comprises determining the reference time from the discourse input. 28. The computer-readable storage medium of claim 26, wherein determining the reference time further comprises: determining a second reference time from the discourse input; and determining a duration associated with the reference time, wherein the reference time is determined based on the second reference time and the duration. 29. The computer-readable storage medium of claim 17, further comprising: receiving data associated with the criterion; determining from the received data whether the criterion is satisfied; and in response to determining that the criterion is satisfied, providing instructions that cause the device of the set of devices to perform the action. 30. The computer-readable storage medium of claim 17, wherein the user intent is associated with a second criterion to be satisfied prior to performing the action. 31. The computer-readable storage medium of claim 30, wherein satisfying the second criterion requires the criterion to be satisfied. 32. The computer-readable storage medium of claim 30, further comprising: receiving second data associated with the second criterion; and determining from the received second data whether the second criterion is satisfied, wherein the instructions are provided in response to determining that the second criterion is satisfied. 33. An electronic device comprising: one or more processors; and memory storing one or more programs, the one or more programs including instructions which, when executed by the one or more processors, cause the one or more processors to: receive discourse input representing a user request; determine whether the discourse input relates to a device of an established location; in response to determining that the discourse input relates to a device of an established location, retrieve a data structure representing a set of devices of the established location; determine, using the data structure and the discourse input, a user intent corresponding to: an action to be performed by a device of the set of devices and a criterion to be satisfied prior to performing the action; and store the action and the device in association with the criterion, wherein the action is performed by the device in accordance with a determination that the criterion is satisfied.
2,600
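The assistant claims above follow a store-then-trigger pattern: parsing the discourse input yields a device, an action and a gating criterion (claim 1), and the action is performed only once later data satisfies the criterion (claim 13). The sketch below shows that pattern with a hypothetical threshold-style criterion in the spirit of claim 4; all device and field names are illustrative.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class StoredIntent:
    device: str
    action: str
    criterion: Callable[[Dict[str, float]], bool]  # claim 1: gate on a criterion
    done: bool = False

class HomeAssistant:
    def __init__(self) -> None:
        self.pending: List[StoredIntent] = []

    def store(self, device: str, action: str,
              criterion: Callable[[Dict[str, float]], bool]) -> None:
        # Claim 1: store the action and device in association with the criterion.
        self.pending.append(StoredIntent(device, action, criterion))

    def on_data(self, data: Dict[str, float]) -> None:
        # Claim 13: on receiving data, check each criterion and, if satisfied,
        # provide instructions that cause the device to perform the action.
        for intent in self.pending:
            if not intent.done and intent.criterion(data):
                print(f"instructing {intent.device}: {intent.action}")
                intent.done = True

assistant = HomeAssistant()
# "Turn on the hallway light when the living-room temperature exceeds 25."
assistant.store("hallway_light", "turn_on",
                lambda d: d.get("living_room_temp_c", 0.0) > 25.0)  # claim 4 threshold
assistant.on_data({"living_room_temp_c": 23.0})  # criterion not yet met
assistant.on_data({"living_room_temp_c": 26.0})  # fires the stored action

Chaining a second criterion (claims 14-16) would amount to storing a conjunction of two such predicates on the same intent.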
10,482
10,482
15,787,449
2,688
Provided are embodiments of a method for averting a danger performed by a control apparatus. The method involves a step of obtaining a plurality of pieces of sensor information. At least partially depending on the obtained pieces of sensor information, it is determined whether a danger exists. A support offer request message is transmitted if it is determined that the danger exists. At least one first support offer message is received from a first support apparatus in response to the support offer request message. At least partially depending on the received first support offer message, it is determined whether a first support measure of the first support apparatus is suitable for averting the danger. The first support measure is then prompted if it is determined that the first support measure is suitable for averting the danger.
1) A method for averting a danger or for prompting the averting of a danger, wherein the method is at least partially performed by a control apparatus, and wherein the method comprises: obtaining a plurality of pieces of sensor information; determining, at least partially depending on the obtained pieces of sensor information, whether a danger exists; transmitting or causing the transmitting of a support offer request message if it is determined that the danger exists; receiving or causing the receiving of at least one first support offer message from a first support apparatus in response to the support offer request message; determining, at least partially depending on the received first support offer message, whether a first support measure of the first support apparatus is suitable for averting the danger; and prompting the first support measure if it is determined that the first support measure is suitable for averting the danger. 2) The method according to claim 1, wherein the support offer request message contains pieces of danger information relating to the danger, in particular relating to the location of the danger, the time of the danger or the type of danger or a combination thereof, and wherein the support offer message contains pieces of support information relating to a possible support measure of the first support apparatus, in particular relating to the position of the first support apparatus, the time of the support measure or the type of support measure or a combination thereof. 3) The method according to claim 1, wherein the first support apparatus is different from the control apparatus or is at least partially autonomous or a combination thereof. 4) The method according to claim 1, wherein the prompting of the first support measure comprises transmitting or causing the transmitting of a support request message to the first support apparatus. 5) The method according to claim 1, wherein the method comprises: receiving or causing the receiving of at least one second support offer message from a second support apparatus in response to the support offer request message; determining, at least partially depending on the received second support offer message, whether a second support measure of the second support apparatus is suitable for averting the danger; and prompting the second support measure if it is determined that the second support measure is suitable for averting the danger. 6) A control apparatus comprising at least one processor and at least one memory containing program instructions, wherein the at least one memory and the program instructions are configured, together with the at least one processor, to cause the apparatus to perform: obtaining a plurality of pieces of sensor information; determining, at least partially depending on the obtained pieces of sensor information, whether a danger exists; transmitting or causing the transmitting of a support offer request message if it is determined that the danger exists; receiving or causing the receiving of at least one first support offer message from a first support apparatus in response to the support offer request message; determining, at least partially depending on the received first support offer message, whether a first support measure of the first support apparatus is suitable for averting the danger; and prompting the first support measure if it is determined that the first support measure is suitable for averting the danger. 
7) The control apparatus according to claim 6, wherein the pieces of sensor information originate from one or more of the following sensors: a temperature sensor, a pressure sensor, a brightness sensor, a motion sensor, an acoustic sensor, an ultrasonic sensor, an optical sensor, an infrared sensor, a light sensor, an image sensor, a video sensor, a chemical sensor, a glass breakage sensor, a radio sensor, a position sensor, a door or window opening sensor or an acceleration sensor. 8) The control apparatus according to claim 6, wherein the determining whether a danger exists is carried out according to one or more predetermined rules or according to a pattern recognition algorithm or according to a machine learning algorithm or a combination thereof. 9) The control apparatus according to claim 6, wherein the support offer request message contains pieces of danger information relating to the danger, in particular relating to the location of the danger, the time of the danger or the type of danger or a combination thereof. 10) The control apparatus according to claim 6, wherein the support offer message contains pieces of support information relating to a possible support measure of the first support apparatus, in particular relating to the position of the first support apparatus, the time of the support measure or the type of support measure or a combination thereof. 11) The control apparatus according to claim 6, wherein the determining whether the first support measure is suitable for averting the danger is carried out according to one or more predetermined rules or according to a pattern recognition algorithm or according to a machine learning algorithm or a combination thereof. 12) The control apparatus according to claim 6, wherein the first support apparatus is different from the control apparatus or is at least partially autonomous or a combination thereof. 13) The control apparatus according to claim 6, wherein the prompting of the first support measure comprises transmitting or causing the transmitting of a support request message to the first support apparatus. 14) The control apparatus according to claim 13, wherein the support offer request message or the support request message contain(s) support authorization information. 15) The control apparatus according to claim 6, wherein the at least one memory and the program instructions are further configured, together with the at least one processor, to cause the apparatus to perform: receiving or causing the receiving of at least one second support offer message from a second support apparatus in response to the support offer request message; determining, at least partially depending on the received second support offer message, whether a second support measure of the second support apparatus is suitable for averting the danger; and prompting the second support measure if it is determined that the second support measure is suitable for averting the danger. 16) The control apparatus according to claim 6, wherein, if a plurality of support offer messages are received from different support apparatuses in response to the support offer request message and if it is determined that a plurality of support measures of the various support apparatuses are suitable for averting the danger, at least one support measure of the various support apparatuses is prompted. 
17) The control apparatus according to claim 6, wherein the at least one memory and the program instructions are further configured, together with the at least one processor, to cause the apparatus to perform: storing or prompting the storing of pieces of documentation information for documenting the danger or the averting of the danger. 18) The control apparatus according to claim 6, wherein the at least one memory and the program instructions are further configured, together with the at least one processor, to cause the apparatus to perform: obtaining further pieces of sensor information; and determining, at least partially depending on the obtained further pieces of sensor information, whether the danger still exists. 19) The control apparatus according to claim 6, wherein the control apparatus is part of an unmanned vehicle or a building automation system or an alarm system or a combination thereof. 20) A non-transitory computer readable storage medium including a computer program comprising program instructions which are configured, when executed by at least one processor, to cause an apparatus to perform: obtaining a plurality of pieces of sensor information; determining, at least partially depending on the obtained pieces of sensor information, whether a danger exists; transmitting or causing the transmitting of a support offer request message if it is determined that the danger exists; receiving or causing the receiving of at least one first support offer message from a first support apparatus in response to the support offer request message; determining, at least partially depending on the received first support offer message, whether a first support measure of the first support apparatus is suitable for averting the danger; and prompting the first support measure if it is determined that the first support measure is suitable for averting the danger.
Provided are embodiments of a method for averting a danger performed by a control apparatus. The method involves a step of obtaining a plurality of pieces of sensor information. At least partially depending on the obtained pieces of sensor information, it is determined whether a danger exists. A support offer request message is transmitted if it is determined that the danger exists. At least one first support offer message is received from a first support apparatus in response to the support offer request message. At least partially depending on the received first support offer message, it is determined whether a first support measure of the first support apparatus is suitable for averting the danger. The first support measure is then prompted if it is determined that the first support measure is suitable for averting the danger.1) A method for averting a danger or for prompting the averting of a danger, wherein the method is at least partially performed by a control apparatus, and wherein the method comprises: obtaining a plurality of pieces of sensor information; determining, at least partially depending on the obtained pieces of sensor information, whether a danger exists; transmitting or causing the transmitting of a support offer request message if it is determined that the danger exists; receiving or causing the receiving of at least one first support offer message from a first support apparatus in response to the support offer request message; determining, at least partially depending on the received first support offer message, whether a first support measure of the first support apparatus is suitable for averting the danger; and prompting the first support measure if it is determined that the first support measure is suitable for averting the danger. 2) The method according to claim 1, wherein the support offer request message contains pieces of danger information relating to the danger, in particular relating to the location of the danger, the time of the danger or the type of danger or a combination thereof, and wherein the support offer message contains pieces of support information relating to a possible support measure of the first support apparatus, in particular relating to the position of the first support apparatus, the time of the support measure or the type of support measure or a combination thereof. 3) The method according to claim 1, wherein the first support apparatus is different from the control apparatus or is at least partially autonomous or a combination thereof. 4) The method according to claim 1, wherein the prompting of the first support measure comprises transmitting or causing the transmitting of a support request message to the first support apparatus. 5) The method according to claim 1, wherein the method comprises: receiving or causing the receiving of at least one second support offer message from a second support apparatus in response to the support offer request message; determining, at least partially depending on the received second support offer message, whether a second support measure of the second support apparatus is suitable for averting the danger; and prompting the second support measure if it is determined that the second support measure is suitable for averting the danger. 
6) A control apparatus comprising at least one processor and at least one memory containing program instructions, wherein the at least one memory and the program instructions are configured, together with the at least one processor, to cause the apparatus to perform: obtaining a plurality of pieces of sensor information; determining, at least partially depending on the obtained pieces of sensor information, whether a danger exists; transmitting or causing the transmitting of a support offer request message if it is determined that the danger exists; receiving or causing the receiving of at least one first support offer message from a first support apparatus in response to the support offer request message; determining, at least partially depending on the received first support offer message, whether a first support measure of the first support apparatus is suitable for averting the danger; and prompting the first support measure if it is determined that the first support measure is suitable for averting the danger. 7) The control apparatus according to claim 6, wherein the pieces of sensor information originate from one or more of the following sensors: a temperature sensor, a pressure sensor, a brightness sensor, a motion sensor, an acoustic sensor, an ultrasonic sensor, an optical sensor, an infrared sensor, a light sensor, an image sensor, a video sensor, a chemical sensor, a glass breakage sensor, a radio sensor, a position sensor, a door or window opening sensor or an acceleration sensor. 8) The control apparatus according to claim 6, wherein the determining whether a danger exists is carried out according to one or more predetermined rules or according to a pattern recognition algorithm or according to a machine learning algorithm or a combination thereof. 9) The control apparatus according to claim 6, wherein the support offer request message contains pieces of danger information relating to the danger, in particular relating to the location of the danger, the time of the danger or the type of danger or a combination thereof. 10) The control apparatus according to claim 6, wherein the support offer message contains pieces of support information relating to a possible support measure of the first support apparatus, in particular relating to the position of the first support apparatus, the time of the support measure or the type of support measure or a combination thereof. 11) The control apparatus according to claim 6, wherein the determining whether the first support measure is suitable for averting the danger is carried out according to one or more predetermined rules or according to a pattern recognition algorithm or according to a machine learning algorithm or a combination thereof. 12) The control apparatus according to claim 6, wherein the first support apparatus is different from the control apparatus or is at least partially autonomous or a combination thereof. 13) The control apparatus according to claim 6, wherein the prompting of the first support measure comprises transmitting or causing the transmitting of a support request message to the first support apparatus. 14) The control apparatus according to claim 13, wherein the support offer request message or the support request message contain(s) support authorization information. 
15) The control apparatus according to claim 6, wherein the at least one memory and the program instructions are further configured, together with the at least one processor, to cause the apparatus to perform: receiving or causing the receiving of at least one second support offer message from a second support apparatus in response to the support offer request message; determining, at least partially depending on the received second support offer message, whether a second support measure of the second support apparatus is suitable for averting the danger; and prompting the second support measure if it is determined that the second support measure is suitable for averting the danger. 16) The control apparatus according to claim 6, wherein, if a plurality of support offer messages are received from different support apparatuses in response to the support offer request message and if it is determined that a plurality of support measures of the various support apparatuses are suitable for averting the danger, at least one support measure of the various support apparatuses is prompted. 17) The control apparatus according to claim 6, wherein the at least one memory and the program instructions are further configured, together with the at least one processor, to cause the apparatus to perform: storing or prompting the storing of pieces of documentation information for documenting the danger or the averting of the danger. 18) The control apparatus according to claim 6, wherein the at least one memory and the program instructions are further configured, together with the at least one processor, to cause the apparatus to perform: obtaining further pieces of sensor information; and determining, at least partially depending on the obtained further pieces of sensor information, whether the danger still exists. 19) The control apparatus according to claim 6, wherein the control apparatus is part of an unmanned vehicle or a building automation system or an alarm system or a combination thereof. 20) A non-transitory computer readable storage medium including a computer program comprising program instructions which are configured, when executed by at least one processor, to cause an apparatus to perform: obtaining a plurality of pieces of sensor information; determining, at least partially depending on the obtained pieces of sensor information, whether a danger exists; transmitting or causing the transmitting of a support offer request message if it is determined that the danger exists; receiving or causing the receiving of at least one first support offer message from a first support apparatus in response to the support offer request message; determining, at least partially depending on the received first support offer message, whether a first support measure of the first support apparatus is suitable for averting the danger; and prompting the first support measure if it is determined that the first support measure is suitable for averting the danger.
2,600
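The danger-averting claims above describe a three-step exchange: the control apparatus sends a support offer request, collects offers from support apparatuses, and prompts a measure it judges suitable by sending a support request message (claims 1, 13). The sketch below walks through that flow; the single temperature rule standing in for the detection logic of claim 8 and the suitability test are assumptions for the example.

from dataclasses import dataclass
from typing import List

@dataclass
class SupportOffer:
    apparatus_id: str
    measure: str   # type of support measure, e.g. "extinguish" (claim 10)
    position: str  # position of the offering apparatus (claim 10)
    eta_s: float   # time of the support measure (claim 10)

class ControlApparatus:
    def detect_danger(self, readings: dict) -> bool:
        # Claim 8 permits rules, pattern recognition or machine learning;
        # one predetermined rule stands in for all of them here.
        return readings.get("temperature_c", 0.0) > 70.0

    def offer_request(self, danger_type: str) -> dict:
        # Claim 9: the request carries pieces of danger information.
        return {"msg": "support_offer_request", "danger": danger_type}

    def suitable(self, offer: SupportOffer, danger_type: str) -> bool:
        # Assumed rule: the right kind of measure, arriving within a minute.
        wanted = {"fire": "extinguish"}.get(danger_type)
        return offer.measure == wanted and offer.eta_s <= 60.0

    def handle_offers(self, offers: List[SupportOffer], danger_type: str) -> None:
        for offer in offers:
            if self.suitable(offer, danger_type):
                # Claim 13: prompting = transmitting a support request message.
                print(f"support_request -> {offer.apparatus_id}")

ctrl = ControlApparatus()
if ctrl.detect_danger({"temperature_c": 85.0}):
    print(ctrl.offer_request("fire"))
    ctrl.handle_offers([SupportOffer("drone-1", "extinguish", "roof", 30.0),
                        SupportOffer("drone-2", "ventilate", "hall", 20.0)], "fire")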
10,483
10,483
15,199,647
2,613
Embodiments disclosed herein are related to systems and methods for implementing a customizable compact overlay window in a display. In one embodiment, a computing system includes one or more processors and a storage device that stores computer executable instructions that can be executed by the processors to cause the computing system to perform the following. The system receives from an application running on the computing system customization parameters that define how the application is to be configured in a compact overlay window. The system generates the compact overlay window so that the compact overlay window is customizable according to the customization parameters. The system positions the compact overlay window in a portion of a display of the computing system.
1. A computing system for implementing a customizable compact overlay window in a display, the computing system comprising: at least one processor; and at least one storage device having stored thereon computer-executable instructions which, when executed by the at least one processor, causes the computing system to perform the following: an act of receiving from an application running on the computing system one or more customization parameters that define how the application is to be configured in a compact overlay window; an act of generating the compact overlay window, the compact overlay window being customizable according to the customization parameters; and an act of positioning the compact overlay window in a portion of a display of the computing system. 2. The computing system of claim 1, wherein the act of generating the compact overlay window is initiated by receiving user input that activates a UI element of the compact overlay window. 3. The computing system of claim 1, wherein the act of generating the compact overlay window is initiated by one or more of the customization parameters that specify when the application should be configured in the compact overlay window. 4. The computing system of claim 1, wherein the act of generating the compact overlay window further comprises: an act of ensuring that the customization of the compact overlay window specified by the customization parameters is consistent with one or more system constraints that govern how the compact overlay window may be configured. 5. The computing system of claim 1, wherein the one or more customization parameters define one or more application specific controls, application specific branding, or application specific UI elements. 6. The computing system of claim 1, wherein the one or more customization parameters are declarative statements that are provided by the application to the computing system. 7. The computing system of claim 1, wherein the display of the computing system is a first display and the computing system includes a second display that is different from the first display, wherein the act of positioning the compact overlay window in a portion of a display comprises positioning the compact overlay window in the second display. 8. The computing system of claim 1, wherein the compact overlay window is a first compact overlay window, the computing system being further caused to perform the following: an act of generating a second compact overlay window, the second compact overlay window being customizable according to the customization parameters; and an act of positioning the second compact overlay window in a portion of the display of the computing system, wherein both the first and second compact overlay windows remain active in the display. 9. The computing system of claim 1, wherein the computing system is further caused to perform the following: an act of receiving from a second application running on the computing system one or more second customization parameters that define how the second application is to be configured in a second compact overlay window; an act of generating the second compact overlay window, the second compact overlay window being customizable according to the second customization parameters; and an act of positioning the second compact overlay window in a second portion of the display of the computing system, wherein both the first and second compact overlay windows remain active in the display. 10. 
The computing system of claim 1, wherein the act of generating the compact overlay window comprises: generating the compact overlay window from an existing non-compact overlay window that is running the application; and displaying both the existing non-compact overlay window and the compact overlay window in the display. 11. The computing system of claim 1, wherein the act of generating the compact overlay window comprises generating the compact overlay window when the application is launched. 12. The computing system of claim 1, wherein the act of generating the compact overlay window comprises generating the compact overlay window from an existing non-compact overlay window that is running the application. 13. A computing system for implementing a customizable compact overlay window in a display, the computing system comprising: at least one processor; and at least one storage device having stored thereon computer-executable instructions which, when executed by the at least one processor, causes the computing system to perform the following: an act of receiving a definition of one or more regions of a customizable compact overlay window where user input is to be passed directly to an application that uses the compact overlay window, an act of generating the customizable compact overlay window with the one or more defined regions in a display of the computing system; in response to user input being entered into at least one of the one or more defined regions, an act of directly passing the user input to the application, the user input being interpretable by the application in a manner determined by the application; and in response to the user input being entered into a region of the compact overlay window that is not part of the one or more defined regions, an act of the computing system interpreting the user input. 14. The computing system of claim 13, wherein the computing system is further caused to perform the following: an act of receiving one or more customization parameters for the one or more defined regions, the customization parameters defining features that may be rendered in the one or more defined regions by the application. 15. The computing system of claim 14, wherein the user input is interpreted in a manner consistent with the features rendered in the one of the one or more regions. 16. The computing system of claim 13, wherein the act of the computing system receiving and interpreting the user input comprises: an act of determining that user input indicates that the compact overlay window is to be moved in the display to a new position or is to be resized; and an act of moving the compact overlay window to the new position or to resizing the compact overlay window. 17. The computing system of claim 13, wherein the act of the computing system receiving and interpreting the user input comprises: an act of determining that the user input is not a type of input that is to be handled by the computing system; an act of notifying the application that the user input has occurred in the region of the compact overlay window that is not part of the one or more defined regions; and an act of allowing the application to respond to the user input in the region of the compact overlay window that is not part of the one or more defined regions in a manner determined by the application. 18. 
A computing system for implementing a customizable compact overlay window in a display, the computing system comprising: at least one processor; and at least one storage device having stored thereon computer-executable instructions which, when executed by the at least one processor, cause the computing system to perform the following: an act of implementing in a display of the computing system a customizable compact overlay window that is customizable by an application that uses the compact overlay window; an act of receiving input into at least a portion of the compact overlay window; an act of directly passing the input to the application, wherein if the application has defined a response to the input to the at least one portion of the compact overlay window, any response to the input is determined by the defined response of the application; and in response to an indication that the application has not defined a response to the input to the at least one portion of the compact overlay window, an act of the computing system determining a response to the input. 19. The computing system of claim 18, wherein the act of the computing system determining a response to the input comprises one of: an act of determining that the compact overlay window should be resized; or an act of determining that the compact overlay window should be moved to a different portion of the display. 20. The computing system of claim 18, wherein the indication that the application has not defined a response to the input to the at least one portion of the compact overlay window comprises: an act of the application passing the input to the computing system.
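To make the input-routing behavior recited in claims 13 through 20 concrete, here is a minimal Python sketch of a window shell that passes input inside application-defined regions straight to the application, lets the system handle input elsewhere (for example, move or resize), and falls back to the application when the system declines it. All names (Region, CompactOverlayWindow, the handler signatures) are hypothetical illustrations, not the disclosed implementation.

```python
# Hypothetical sketch of claims 13-20: route input by region. Not the
# patented implementation; names and signatures are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

@dataclass
class CompactOverlayWindow:
    # App-defined handler: receives input it may interpret however it likes.
    app_handler: Callable[[str, int, int], None]
    # System handler: returns True if it consumed the input (e.g., move/resize).
    system_handler: Callable[[str, int, int], bool]
    app_regions: List[Region] = field(default_factory=list)

    def dispatch(self, event: str, x: int, y: int) -> None:
        if any(r.contains(x, y) for r in self.app_regions):
            # Claim 13: input in a defined region goes directly to the app.
            self.app_handler(event, x, y)
        elif not self.system_handler(event, x, y):
            # Claim 17: input the system does not handle is still passed on
            # so the application may respond in its own manner.
            self.app_handler(event, x, y)
```

A real shell would additionally enforce the system constraints of claim 4 before honoring the application's customization parameters.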
2,600
10,484
10,484
15,715,739
2,648
The present disclosure relates to radiofrequency (RF) communications systems that may operate efficiently over a broad range of signal output levels. Electronic devices may employ amplification circuitry in RF communication systems to provide output signal power. For example, amplification provided by external power amplifiers disposed in front-end modules may be more efficient at a higher range of output signal power, but may be inefficient at a lower range of output signal power. The disclosure relates to architectures for RF communication systems having transceivers and front-end modules that may provide power-efficient operation over broad ranges of output signal power. Front-end modules may, for example, be managed to disable and/or enable external power amplifiers based on the output signal power. Transceivers may, for example, include an internal power amplifier, which may provide amplification for low output signals and may operate as a driver to the external power amplifier of the front-end module for high output signals. Methods for managing the circuitry are also discussed.
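As a rough sketch of the amplifier management described in this abstract, the following Python fragment selects between the transceiver's internal power amplifier and the front-end module's external amplifier based on a target output power. The 20 dBm boundary echoes the internal PA rating mentioned in claim 9 below; every name and configuration key is hypothetical, not a real device API.

```python
# Hypothetical sketch: choose the TX amplification path from target power.
INTERNAL_PA_MAX_DBM = 20.0  # illustrative boundary, cf. claim 9

def select_tx_path(target_dbm: float) -> dict:
    if target_dbm <= INTERNAL_PA_MAX_DBM:
        # Low power: external PA disabled and bypassed; the internal PA
        # drives the antenna path directly for better low-power efficiency.
        return {"external_pa": "disabled", "bypass": True,
                "internal_pa_role": "output"}
    # High power: internal PA acts as a driver into the external PA, whose
    # supply may follow the signal envelope (envelope tracking).
    return {"external_pa": "envelope_tracking", "bypass": False,
            "internal_pa_role": "driver"}

print(select_tx_path(10.0))  # low-power case: external PA bypassed
print(select_tx_path(27.0))  # high-power case: external PA engaged
```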
1. An electrical device comprising: a radio frequency (RF) transceiver comprising an internal power amplifier coupled to a transmit (TX) port of the RF transceiver; a front-end module comprising a power amplifier coupled to the TX port of the RF transceiver, wherein the front-end module is configured to couple to an antenna; and switching circuitry configured to bypass the power amplifier of the front-end module. 2. The electrical device of claim 1, wherein the power amplifier is powered by envelope tracking circuitry. 3. (canceled) 4. The electrical device of claim 1, wherein the front-end module comprises a controller configured to select a mode of operation from a set of modes of operation. 5. The electrical device of claim 4, wherein the set of modes of operation comprises: an internal power amplifier mode, wherein the front-end module is configured to disable the power amplifier; and an envelope tracking mode, wherein a voltage supplied to the power amplifier is dynamically adjusted to follow the envelope of an output RF signal. 6. The electrical device of claim 4, wherein the controller is a Mobile Industry Processor Interface RF Front-End Interface (MIPI RFFE) controller. 7. The electrical device of claim 1, wherein the front-end module comprises a configurable filter bank configured to filter a transmitted signal to the antenna, or a received signal from the antenna, or both. 8. The electrical device of claim 1, wherein the front-end module comprises a low-noise amplifier coupled to a receive (RX) port of the transceiver. 9. The electrical device of claim 1, wherein the internal power amplifier is configured to provide an output signal of up to 20 dBm. 10. The electrical device of claim 1, wherein the internal power amplifier comprises a complementary metal-oxide semiconductor (CMOS) amplifier. 11. A front-end module of a radio frequency (RF) communication system, configured to couple to an RF transceiver and to an antenna, the front-end module comprising: a power amplifier configured to provide a gain to an outgoing signal received from the RF transceiver; switching circuitry configured to bypass the power amplifier; and control circuitry configured to adjust the switching circuitry and the power amplifier based on a target output signal power. 12. The front-end module of claim 11, wherein the power amplifier comprises a single-stage power amplifier, and wherein the RF transceiver comprises an internal power amplifier configured to provide driver amplification to the outgoing signal. 13. The front-end module of claim 11, wherein the power amplifier is coupled to an envelope tracking integrated circuit. 14. The front-end module of claim 13, wherein the envelope tracking integrated circuit comprises control circuitry. 15. The front-end module of claim 14, wherein the control circuitry of the front-end module and the control circuitry of the envelope tracking integrated circuit comprise a Mobile Industry Processor Interface RF Front-End Interface (MIPI RFFE). 16. The front-end module of claim 11, wherein the front-end module comprises a low-noise amplifier configured to provide a gain to an incoming signal received from the antenna, and wherein the switching circuitry is configured to bypass the low-noise amplifier. 17. 
A method for controlling a radio frequency (RF) communication system, comprising: adjusting modulation circuitry of an RF transceiver of the RF communication system based on a channel specification of a first network of a set of networks; switching a signal path of a front-end module of the RF communication system to bypass a power amplifier of the front-end module based on the channel specification of the first network or an output signal power specification of the first network, or both, wherein the signal path is configured to couple the RF transceiver to an antenna; and configuring at least one amplifier of the front-end module based on the output signal power specification of the first network. 18. The method of claim 17, wherein the first network comprises a cellular network, a Bluetooth network, an IEEE 802.3 network, or any combination thereof. 19. The method of claim 17, wherein the channel specification comprises a carrier frequency of a band of the first network, a time-coding system, a time-multiplexing system, or any combination thereof. 20. The method of claim 17, wherein configuring the at least one amplifier of the front-end module comprises disabling the power amplifier of the front-end module. 21. The method of claim 17, wherein switching the signal path comprises selecting a filter of a filter bank of the front-end module. 22. The method of claim 17, wherein switching the signal path comprises coupling the antenna to a receive (RX) port of the RF transceiver or coupling the antenna to a transmit (TX) port of the RF transceiver. 23. The method of claim 17, wherein configuring the at least one amplifier comprises operating a power amplifier in an envelope tracking mode. 24. The method of claim 17, wherein configuring the at least one amplifier comprises operating a power amplifier in an average power tracking mode. 25. The method of claim 17, further comprising adjusting an internal power amplifier of the RF transceiver based on the output signal power specification of the first network.
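Read as pseudocode, method claim 17 amounts to a three-step configuration routine. The sketch below restates it in Python with duck-typed transceiver and front_end objects; none of these method names belong to a real driver or to the MIPI RFFE register map, they simply mirror the claim language.

```python
# Hypothetical restatement of method claim 17 (and claims 20-25).
def configure_for_network(transceiver, front_end, channel_spec, power_spec):
    # Step 1: adjust modulation circuitry to the network's channel spec.
    transceiver.set_modulation(channel_spec.carrier_hz, channel_spec.coding)
    # Step 2: switch the signal path (filter selection, TX/RX routing,
    # PA bypass) per claims 20-22.
    front_end.select_filter(channel_spec.band)
    if power_spec.max_dbm <= transceiver.internal_pa_max_dbm:
        front_end.bypass_external_pa()  # claim 20: disable/bypass the PA
    else:
        # Step 3: configure the amplifier for the required output power,
        # e.g., envelope tracking (claim 23) or average power tracking (claim 24).
        front_end.enable_external_pa(mode="envelope_tracking")
    transceiver.set_internal_pa_level(power_spec.max_dbm)  # cf. claim 25
```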
2,600
10,485
10,485
15,968,010
2,689
A door lock system includes a controller including a processor and a memory. The controller is configured to interface with a smart lock. The door lock system also includes an exterior face plate on a first side of a door and an interior face plate on a second side of the door. The second side of the door faces an interior of a room while the door is in a closed position. The door lock system also includes a display screen disposed in the interior face plate. The display screen is connected to the controller and is an input device and an output device.
1. A door lock system comprising: a controller including a processor and a memory and being configured to interface with a smart lock; an exterior face plate on a first side of a door; an interior face plate on a second side of the door, the second side of the door facing an interior of a room while the door is in a closed position; and a display screen disposed in said interior face plate, the display screen being connected to the controller, wherein the display screen is an input device and an output device. 2. The door lock system of claim 1, wherein the controller is connected to at least one server remote from the door. 3. The door lock system of claim 2, further comprising a plurality of additional door lock systems, each of said additional door lock systems being disposed in different corresponding doors, and being connected to the at least one server. 4. The door lock system of claim 1, wherein the memory stores instructions for causing the display screen to display a set of functions to a user and for implementing at least one feature in response to the user selecting a function from the set of functions, and wherein the set of functions includes at least one of a do not disturb function, a do not clean function, a guest copy card function, a child lock function, a checkout function, a cancel card function, a privacy function, and a block function. 5. The door lock system of claim 4, wherein the set of functions includes the privacy function, and wherein the door lock system is configured to set a status of the room to private in response to activation of the privacy function. 6. The door lock system of claim 4, wherein the set of functions includes the guest copy card function, the door lock system further includes a card read/write machine, and wherein the guest copy card function is configured to enable an access card interfaced with said card read/write machine in response to the user selecting the guest copy card function. 7. The door lock system of claim 6, wherein the guest copy card function further includes a numerical limit, and wherein the controller is configured to prevent the guest copy card function from being activated after the guest copy card function has been engaged a number of times equal to the numerical limit. 8. The door lock system of claim 4, wherein the set of functions includes the cancel card function, the door lock system further includes a card read/write machine, and wherein the cancel card function is configured to cancel a card interfaced with said card read/write machine in response to the user selecting the cancel card function. 9. The door lock system of claim 4, wherein the set of functions includes an audit function and wherein the controller is configured to cause the display to display at least a list of door opening events in response to a user selecting the audit function. 10. The door lock system of claim 1, further comprising a microphone input, the microphone input being connected to the controller and configured to activate at least one function in response to an audible command. 11. The door lock system of claim 1, further comprising a second display screen disposed in the exterior faceplate. 12. The door lock system of claim 1, wherein the controller is configured to limit functionalities prior to entry and validation of a security access code. 13. 
A method for securing a room using a smart lock, comprising: displaying a set of self-service features on an interior facing display screen; and activating at least one of the self-service features in the set of self-service features in response to a user selecting the at least one of the self-service features. 14. The method of claim 13, wherein the set of self-service features includes at least one of a do not disturb function, a do not clean function, a guest copy card function, a child lock function, a checkout function, a cancel card function, an audit function, and a block function. 15. The method of claim 14, wherein the set of self-service features includes the guest copy card function, and wherein the smart lock is configured to write an access card in response to the user selecting the guest copy card function. 16. The method of claim 15, wherein the guest copy card function includes a numerical limit, and wherein a controller in the smart lock is configured to prevent the guest copy card function from being activated after the guest copy card function has been engaged a number of times equal to the numerical limit. 17. The method of claim 13, wherein displaying the set of self-service features on the interior facing display screen includes displaying a subset of available self-service features prior to entry of a passcode. 18. The method of claim 13, wherein activating the at least one of the self-service features comprises performing the self-service feature using a local processor of the smart lock. 19. The method of claim 13, wherein activating the at least one of the self-service features comprises reporting the requested self-service feature to a remote server. 20. The method of claim 19, further comprising notifying the user that the requested self-service feature has been performed by the remote server in response to receiving a confirmation from said remote server. 21. The door lock system of claim 4, wherein the set of functions includes the child lock function, and the door lock system is configured such that activation of an interior facing door opening mechanism is prevented from allowing the door to open while the child lock function is engaged. 22. The door lock system of claim 21, wherein the door lock system is communicatively connected to an evacuation notice system and configured such that activation of the evacuation notice system disengages the child lock function.
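The numerical limit on the guest copy card function (claims 7 and 16) is essentially a bounded counter. Here is a minimal Python sketch, with a hypothetical card_writer object standing in for the card read/write machine; the class and method names are illustrative, not from the disclosure.

```python
# Hypothetical sketch of claims 7/16: the guest copy card function refuses
# further engagements once it has been used `limit` times.
class GuestCopyCardFunction:
    def __init__(self, limit: int):
        self.limit = limit
        self.used = 0

    def engage(self, card_writer) -> bool:
        if self.used >= self.limit:
            return False                  # function disabled at the limit
        card_writer.write_access_card()   # stand-in for the read/write machine
        self.used += 1
        return True
```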
2,600
10,486
10,486
15,651,623
2,657
Methods and systems for contextually based fulfillment of communication requests are provided herein. In some embodiments, a method for contextually based fulfillment of a communication request via a telephony platform comprises receiving via a telephony-based communication, at a fulfillment center, a user request for a service; determining a service provider capable of fulfilling the user request; translating the user request into one or more user intents; creating a contextual framework based on the user intent; requesting additional information regarding details of the user intent based on the contextual framework; and fulfilling the user request using the user intents when the contextual framework is complete.
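The flow in this abstract is a slot-filling loop: translate the request into intents, keep asking for details until the contextual framework is complete, then hand the request to the provider. A minimal Python sketch follows; ContextualFramework, the nlu/ask_user/provider objects, and the required slot names are all hypothetical assumptions, not the disclosed system.

```python
# Hypothetical sketch of the abstract's fulfillment loop (slot filling).
from dataclasses import dataclass, field

@dataclass
class ContextualFramework:
    required: list            # details the request must specify
    slots: dict = field(default_factory=dict)

    def add_intents(self, intents: dict) -> None:
        self.slots.update(intents)

    def missing(self) -> list:
        return [r for r in self.required if r not in self.slots]

def fulfill(request_text, nlu, ask_user, provider):
    framework = ContextualFramework(required=["service", "time", "location"])
    framework.add_intents(nlu.translate(request_text))   # request -> intents
    while framework.missing():                           # request more details
        detail = ask_user(framework.missing()[0])
        framework.add_intents(nlu.translate(detail))
    # Framework complete: convert intents into a provider request and send it.
    return provider.fulfill(framework.slots)
```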
1. A computer implemented method for contextually based fulfillment of communication requests via a telephony platform, comprising: receiving via a telephony-based communication, at a fulfillment center, a user request for a service; determining a service provider capable of fulfilling the user request; translating the user request into one or more user intents; creating a contextual framework based on the user intent; requesting additional information regarding details of the user intent to complete the contextual framework; and fulfilling the user request using the user intents when the contextual framework is complete, wherein fulfilling the user request comprises: converting the user intent into a service provider request; and sending the request to the service provider for fulfillment. 2. The method of claim 1, further comprising adding one or more additional user intents to the contextual framework by translating the additional information into one or more additional user intents. 3. The method of claim 1, wherein the fulfillment center comprises a messaging interface system and a communication server in communication with the messaging interface system. 4. The method of claim 3, wherein the communication server is adapted to interface with a natural language analyzer. 5. The method of claim 1, wherein the user request is in a natural language format. 6. The method of claim 1, wherein the user request is an SMS message. 7. The method of claim 1, wherein the user request is directed to a dedicated inbound identifier of a telephony system. 8. The method of claim 7, wherein the dedicated inbound identifier determines the identity of the service provider capable of fulfilling the user request. 9. The method of claim 1, wherein translating the user request comprises calling an application programming interface (API) of a natural language analyzer. 10. The method of claim 1, wherein fulfilling the user request comprises: formatting the user intent; sending an API request to the service provider; and receiving an acknowledgement from the service provider that the request can be fulfilled. 11. The method of claim 10, further comprising sending a text confirming fulfillment of the user request to the device from which the user request was received. 12. The method of claim 1, wherein the user request is a location based request. 13. A fulfillment center for contextually based fulfillment of communication requests via a telephony platform, comprising: a) at least one processor; b) at least one input device; and c) at least one storage device storing processor executable instructions which, when executed by the at least one processor, cause the at least one processor to perform a method comprising: receiving via a telephony-based communication, at the fulfillment center, a user request for a service; determining a service provider capable of fulfilling the user request; translating the user request into one or more user intents; creating a contextual framework based on the user intent; requesting additional information regarding details of the user intent to complete the contextual framework; and fulfilling the user request using the user intents when the contextual framework is complete, wherein fulfilling the user request comprises: converting the user intent into a service provider request; and sending the request to the service provider for fulfillment. 14. 
The fulfillment center of claim 13, wherein the performed method further comprises adding one or more additional user intents to the contextual framework by translating the additional information into one or more additional user intents. 15. The fulfillment center of claim 13, wherein the fulfillment center comprises a messaging interface system and a communication server in communication with the messaging interface system, and wherein the communication server is adapted to interface with a natural language analyzer. 16. The fulfillment center of claim 13, wherein the user request is in a natural language format. 17. The fulfillment center of claim 13, wherein the user request is an SMS message. 18. The fulfillment center of claim 13, wherein the user request is directed to a dedicated inbound identifier of a telephony system, and wherein the dedicated inbound identifier determines the identity of the service provider capable of fulfilling the user request. 19. The fulfillment center of claim 13, wherein translating the user request comprises calling an application programming interface (API) of a natural language analyzer. 20. The fulfillment center of claim 13, wherein fulfilling the user request comprises: formatting the user intent; sending an API request to the service provider; receiving an acknowledgement from the service provider that the request can be fulfilled; and sending a text confirming fulfillment of the user request to the device from which the user request was received. 21. A non-transitory computer readable medium for storing computer instructions that, when executed by at least one processor, cause the at least one processor to perform a method for contextually based fulfillment of communication requests via a telephony platform, the method comprising: receiving via a telephony-based communication, at a fulfillment center, a user request for a service; determining a service provider capable of fulfilling the user request; translating the user request into one or more user intents; creating a contextual framework based on the user intent; requesting additional information regarding details of the user intent to complete the contextual framework; and fulfilling the user request using the user intents when the contextual framework is complete, wherein fulfilling the user request comprises: converting the user intent into a service provider request; and sending the request to the service provider for fulfillment.
2,600
10,487
10,487
15,699,631
2,612
Systems and methods are provided for planning a procedure. A display device is configured to display a first virtual element. A controller device having a processor is configured to be in communication with the display device, and the controller device is further configured to direct the display device to display the first virtual element. A physical control element is in communication with the controller device, and is configured to correspond to the first virtual element such that an actual manipulation of the control element is displayed, via the processor of the controller device and on the display device, as a corresponding response of the first virtual element to the actual manipulation of the control element. Associated systems, methods, and computer program products are also provided.
1. A system for planning a procedure, comprising: a display device configured to display a first virtual element; a controller device having a processor and being configured to be in communication with the display device, the controller device being further configured to direct the display device to display the first virtual element; and a physical control element in communication with the controller device, the control element being configured to correspond to the first virtual element such that an actual manipulation of the control element is displayed, via the processor of the controller device and on the display device, as a corresponding response of the first virtual element to the actual manipulation of the control element. 2. The system of claim 1, wherein the first virtual element comprises a surgical apparatus, the surgical apparatus including one of a dental implant and a surgical instrument configured to prepare a site on a jaw structure to receive the dental implant. 3. The system of claim 1, further comprising a selector device operably engaged with the control element, the selector device being configured to direct the controller device to one of associate and dissociate the control element with the first virtual element. 4. The system of claim 1, further comprising a second virtual element selectively displayed by the display device, the second virtual element being configured to interact with the first virtual element. 5. The system of claim 4, wherein the second virtual element includes a representation of a jaw structure. 6. The system of claim 1, wherein the actual manipulation of the control element includes one of translational motion and rotational motion, and the corresponding response of the first virtual element displayed on the display device includes the one of translational motion and rotational motion. 7. The system of claim 1, wherein the display device is configured to display the first virtual element as one of a two-dimensional image and a three-dimensional image. 8. A method for planning a procedure, comprising: displaying a first virtual element on a display device with a controller device having a processor and configured to be in communication with the display device; manipulating a physical control element, the control element being in communication with the controller device and being configured to correspond to the first virtual element; and displaying, via the processor of the controller device and on the display device, a response of the first virtual element corresponding to the actual manipulation of the control element. 9. The method of claim 8, wherein displaying a first virtual element further comprises displaying a first virtual element comprising a surgical apparatus, the surgical apparatus including one of a dental implant and a surgical instrument configured to prepare a site on a jaw structure to receive the dental implant. 10. The method of claim 8, further comprising directing the controller device to one of associate and dissociate the control element with the first virtual element, with a selector device operably engaged with the control element. 11. The method of claim 8, further comprising selectively displaying a second virtual element on the display device, the second virtual element being configured to interact with the first virtual element. 12. 
The method of claim 11, wherein selectively displaying a second virtual element further comprises selectively displaying a second virtual element including a representation of a jaw structure. 13. The method of claim 8, wherein manipulating the control element further comprises manipulating the control element to impart one of translational motion and rotational motion thereto, and wherein displaying a response of the first virtual element further comprises displaying a response of the first virtual element to the corresponding one of the translational motion and the rotational motion. 14. The method of claim 8, wherein displaying a first virtual element further comprises displaying the first virtual element as one of a two-dimensional image and a three-dimensional image. 15. A method for planning a procedure, comprising: displaying a first virtual element on a display device; analyzing, via a processor, physical manipulation of a control element interface configured to correspond to the first virtual element; and displaying, in response to the analysis of the physical manipulation of the control element interface, a response of the first virtual element corresponding to the physical manipulation of the control element interface. 16. The method of claim 15, wherein displaying a first virtual element further comprises displaying a first virtual element comprising a surgical apparatus, the surgical apparatus including one of a dental implant and a surgical instrument configured to prepare a site on a jaw structure to receive the dental implant. 17. The method of claim 15, further comprising one of associating and dissociating the control element interface with the first virtual element, with a selector device operably engaged with the control element interface. 18. The method of claim 15, further comprising selectively displaying a second virtual element on the display device, the second virtual element being configured to interact with the first virtual element. 19. The method of claim 15, further comprising selectively displaying a second virtual element including a representation of a jaw structure on the display device. 20. The method of claim 15, wherein displaying a response of the first virtual element further comprises displaying a response of the first virtual element corresponding to one of translational motion and rotational motion imparted to the control element interface by the manipulation thereof. 21. The method of claim 15, wherein displaying a first virtual element further comprises displaying the first virtual element as one of a two-dimensional image and a three-dimensional image. 22. A system comprising processing circuitry operatively coupled with a control element interface, wherein the processing circuitry is configured to cause the system to at least: display a first virtual element on a display device; analyze physical manipulation of the control element interface, the control element interface being configured to correspond to the first virtual element; and display, in response to physical manipulation of the control element interface, a response of the first virtual element corresponding to the physical manipulation of the control element interface. 23. The system of claim 22, wherein the processing circuitry is further configured to cause the system to display a first virtual element comprising a surgical apparatus, the surgical apparatus including one of a dental implant and a surgical instrument configured to prepare a site on a jaw structure to receive the dental implant. 24. 
The system of claim 22, wherein the processing circuitry is further configured to cause the system to direct the controller device to one of associate and dissociate the control element interface with the first virtual element, with a selector device operably engaged with the control element interface. 25. The system of claim 22, wherein the processing circuitry is further configured to cause the system to selectively display a second virtual element on the display device, the second virtual element being configured to interact with the first virtual element. 26. The system of claim 22, wherein the processing circuitry is further configured to cause the system to selectively display a second virtual element including a representation of a jaw structure on the display device. 27. The system of claim 22, wherein the processing circuitry is further configured to cause the system to display a response of the first virtual element corresponding to one of translational motion and rotational motion imparted to the control element interface by the manipulation thereof. 28. The system of claim 22, wherein the processing circuitry is further configured to cause the system to display the first virtual element as one of a two-dimensional image and a three-dimensional image. 29. A computer program product comprising at least one non-transitory computer readable storage medium having computer readable program instructions stored thereon, the computer readable program instructions comprising program instructions which, when executed by at least one processor implemented on a system for planning a procedure, cause the system to perform a method comprising: displaying a first virtual element via a display device; analyzing, via a processor, physical manipulation of a control element interface configured to correspond to the first virtual element; and displaying, in response to the analysis of the physical manipulation of the control element interface, a response of the first virtual element corresponding to the physical manipulation of the control element interface. 30. The computer program product of claim 29, wherein displaying a first virtual element further comprises displaying a first virtual element comprising a surgical apparatus, the surgical apparatus including one of a dental implant and a surgical instrument configured to prepare a site on a jaw structure to receive the dental implant. 31. The computer program product of claim 29, wherein the computer readable program instructions comprising program instructions, when executed by the at least one processor implemented on the system, cause the system to perform a method further comprising one of associating and dissociating the control element interface with the first virtual element, with a selector device operably engaged with the control element interface. 32. The computer program product of claim 29, wherein the computer readable program instructions comprising program instructions, when executed by the at least one processor implemented on the system, cause the system to perform a method further comprising selectively displaying a second virtual element on the display device, the second virtual element being configured to interact with the first virtual element. 33. 
The computer program product of claim 29, wherein the computer readable program instructions comprising program instructions, when executed by the at least one processor implemented on the system, cause the system to perform a method further comprising selectively displaying a second virtual element including a representation of a jaw structure on the display device. 34. The computer program product of claim 29, wherein displaying a response of the first virtual element further comprises displaying a response of the first virtual element corresponding to one of translational motion and rotational motion imparted to the control element interface by the manipulation thereof. 35. The computer program product of claim 29, wherein displaying a first virtual element further comprises displaying the first virtual element as one of a two-dimensional image and a three-dimensional image.
Systems and methods are provided for planning a procedure. A display device is configured to display a first virtual element. A controller device having a processor is configured to be in communication with the display device, and the controller device is further configured to direct the display device to display the first virtual element. A physical control element is in communication with the controller device, and is configured to correspond to the first virtual element such that an actual manipulation of the control element is displayed, via the processor of the controller device and on the display device, as a corresponding response of the first virtual element to the actual manipulation of the control element. Associated systems, methods, and computer program products are also provided. 1. A system for planning a procedure, comprising: a display device configured to display a first virtual element; a controller device having a processor and being configured to be in communication with the display device, the controller device being further configured to direct the display device to display the first virtual element; and a physical control element in communication with the controller device, the control element being configured to correspond to the first virtual element such that an actual manipulation of the control element is displayed, via the processor of the controller device and on the display device, as a corresponding response of the first virtual element to the actual manipulation of the control element. 2. The system of claim 1, wherein the first virtual element comprises a surgical apparatus, the surgical apparatus including one of a dental implant and a surgical instrument configured to prepare a site on a jaw structure to receive the dental implant. 3. The system of claim 1, further comprising a selector device operably engaged with the control element, the selector device being configured to direct the controller device to one of associate and dissociate the control element with the first virtual element. 4. The system of claim 1, further comprising a second virtual element selectively displayed by the display device, the second virtual element being configured to interact with the first virtual element. 5. The system of claim 4, wherein the second virtual element includes a representation of a jaw structure. 6. The system of claim 1, wherein the actual manipulation of the control element includes one of translational motion and rotational motion, and the corresponding response of the first virtual element displayed on the display device includes the one of translational motion and rotational motion. 7. The system of claim 1, wherein the display device is configured to display the first virtual element as one of a two-dimensional image and a three-dimensional image. 8. A method for planning a procedure, comprising: displaying a first virtual element on a display device with a controller device having a processor and configured to be in communication with the display device; manipulating a physical control element, the control element being in communication with the controller device and being configured to correspond to the first virtual element; and displaying, via the processor of the controller device and on the display device, a response of the first virtual element corresponding to the actual manipulation of the control element. 9. 
The method of claim 8, wherein displaying a first virtual element further comprises displaying a first virtual element comprising a surgical apparatus, the surgical apparatus including one of a dental implant and a surgical instrument configured to prepare a site on a jaw structure to receive the dental implant. 10. The method of claim 8, further comprising directing the controller device to one of associate and dissociate the control element with the first virtual element, with a selector device operably engaged with the control element. 11. The method of claim 8, further comprising selectively displaying a second virtual element on the display device, the second virtual element being configured to interact with the first virtual element. 12. The method of claim 11, wherein selectively displaying a second virtual element further comprises selectively displaying a second virtual element including a representation of a jaw structure. 13. The method of claim 8, wherein manipulating the control element further comprises manipulating the control element to impart one of translational motion and rotational motion thereto, and wherein displaying a response of the first virtual element further comprises displaying a response of the first virtual element to the corresponding one of the translational motion and the rotational motion. 14. The method of claim 8, wherein displaying a first virtual element further comprises displaying the first virtual element as one of a two-dimensional image and a three-dimensional image. 15. A method for planning a procedure, comprising: displaying a first virtual element on a display device; analyzing, via a processor, physical manipulation of a control element interface configured to correspond to the first virtual element; and displaying, in response to the analysis of the physical manipulation of the control element interface, a response of the first virtual element corresponding to the physical manipulation of the control element interface. 16. The method of claim 15, wherein displaying a first virtual element further comprises displaying a first virtual element comprising a surgical apparatus, the surgical apparatus including one of a dental implant and a surgical instrument configured to prepare a site on a jaw structure to receive the dental implant. 17. The method of claim 15, further comprising one of associating and dissociating the control element interface with the first virtual element, with a selector device operably engaged with the control element interface. 18. The method of claim 15, further comprising selectively displaying a second virtual element on the display device, the second virtual element being configured to interact with the first virtual element. 19. The method of claim 15, further comprising selectively displaying a second virtual element including a representation of a jaw structure on the display device. 20. The method of claim 15, wherein displaying a response of the first virtual element further comprises displaying a response of the first virtual element corresponding to one of translational motion and rotational motion imparted to the control element interface by the manipulation thereof. 21. The method of claim 15, wherein displaying a first virtual element further comprises displaying the first virtual element as one of a two-dimensional image and a three-dimensional image. 22. 
A system comprising processing circuitry operatively coupled with a control element interface, wherein the processing circuitry is configured to cause the system to at least: display a first virtual element on a display device; analyze physical manipulation of a control element interface configured to correspond to the first virtual element; and display, in response to physical manipulation of the control element interface, a response of the first virtual element corresponding to the physical manipulation of the control element interface. 23. The system of claim 22, wherein the processing circuitry is further configured to cause the system to display a first virtual element comprising a surgical apparatus, the surgical apparatus including one of a dental implant and a surgical instrument configured to prepare a site on a jaw structure to receive the dental implant. 24. The system of claim 22, wherein the processing circuitry is further configured to cause the system to direct the controller device to one of associate and dissociate the control element interface with the first virtual element, with a selector device operably engaged with the control element interface. 25. The system of claim 22, wherein the processing circuitry is further configured to cause the system to selectively display a second virtual element on the display device, the second virtual element being configured to interact with the first virtual element. 26. The system of claim 22, wherein the processing circuitry is further configured to cause the system to selectively display a second virtual element including a representation of a jaw structure on the display device. 27. The system of claim 22, wherein the processing circuitry is further configured to cause the system to display a response of the first virtual element corresponding to one of translational motion and rotational motion imparted to the control element interface by the manipulation thereof. 28. The system of claim 22, wherein the processing circuitry is further configured to cause the system to display the first virtual element as one of a two-dimensional image and a three-dimensional image. 29. A computer program product comprising at least one non-transitory computer readable storage medium having computer readable program instructions stored thereon, the computer readable program instructions comprising program instructions which, when executed by at least one processor implemented on a system for planning a procedure, cause the system to perform a method comprising: displaying a first virtual element via a display device; analyzing, via a processor, physical manipulation of a control element interface configured to correspond to the first virtual element; and displaying, in response to the analysis of the physical manipulation of the control element interface, a response of the first virtual element corresponding to the physical manipulation of the control element interface. 30. The computer program product of claim 29, wherein displaying a first virtual element further comprises displaying a first virtual element comprising a surgical apparatus, the surgical apparatus including one of a dental implant and a surgical instrument configured to prepare a site on a jaw structure to receive the dental implant. 31. 
The computer program product of claim 29, wherein the computer readable program instructions comprising program instructions, when executed by the at least one processor implemented on the system, cause the system to perform a method further comprising one of associating and dissociating the control element interface with the first virtual element, with a selector device operably engaged with the control element interface. 32. The computer program product of claim 29, wherein the computer readable program instructions comprising program instructions, when executed by the at least one processor implemented on the system, cause the system to perform a method further comprising selectively displaying a second virtual element on the display device, the second virtual element being configured to interact with the first virtual element. 33. The computer program product of claim 29, wherein the computer readable program instructions comprising program instructions, when executed by the at least one processor implemented on the system, cause the system to perform a method further comprising selectively displaying a second virtual element including a representation of a jaw structure on the display device. 34. The computer program product of claim 29, wherein displaying a response of the first virtual element further comprises displaying a response of the first virtual element corresponding to one of translational motion and rotational motion imparted to the control element interface by the manipulation thereof. 35. The computer program product of claim 29, wherein displaying a first virtual element further comprises displaying the first virtual element as one of a two-dimensional image and a three-dimensional image.
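The claims above describe a control loop in which physical manipulation of a control element is mirrored by a displayed virtual element, with a selector device associating or dissociating the two. The following is a minimal illustrative sketch of that loop; all class and method names (VirtualElement, Controller, on_manipulation, and so on) are hypothetical stand-ins, not names taken from the application.

```python
# A minimal sketch, not the application's implementation: a controller mirrors
# translational/rotational manipulation of a physical control element onto a
# displayed virtual element, honoring the selector device's associate/dissociate.
from dataclasses import dataclass, field

@dataclass
class VirtualElement:
    """Pose of a displayed virtual element (e.g., a dental implant model)."""
    name: str
    position: list = field(default_factory=lambda: [0.0, 0.0, 0.0])  # mm
    rotation: list = field(default_factory=lambda: [0.0, 0.0, 0.0])  # degrees

class Controller:
    """Relays physical manipulation to the virtual element when associated."""
    def __init__(self, element: VirtualElement):
        self.element = element
        self.associated = False  # toggled by the selector device

    def toggle_association(self):
        self.associated = not self.associated

    def on_manipulation(self, d_pos, d_rot):
        # While dissociated, the operator can reposition the physical control
        # element without moving the virtual element.
        if not self.associated:
            return
        self.element.position = [p + d for p, d in zip(self.element.position, d_pos)]
        self.element.rotation = [(r + d) % 360 for r, d in zip(self.element.rotation, d_rot)]
        self.render()

    def render(self):
        # Stand-in for directing the display device to redraw the element.
        print(f"{self.element.name}: pos={self.element.position} rot={self.element.rotation}")

implant = VirtualElement("implant")
ctl = Controller(implant)
ctl.toggle_association()                          # selector device: associate
ctl.on_manipulation([1.0, 0.0, -0.5], [0.0, 15.0, 0.0])
```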
2,600
10,488
10,488
15,944,662
2,631
Various embodiments of the present technology comprise a method and apparatus for a continuous time linear equalizer (CTLE). In various embodiments, the CTLE comprises a cross-coupled transistor pair that operates as a negative impedance converter. The CTLE produces a transfer function that provides high gain peaking at a high frequency without increasing the size of the die area and/or the power supply level.
1. A continuous time linear equalizer circuit, comprising: a differential transistor pair; a cross-coupled circuit, connected in parallel with the differential transistor pair, comprising: a first transistor; and a second transistor; wherein the first and second transistors are cross-coupled with each other; and an RC network connected in parallel with the cross-coupled circuit. 2. The continuous time linear equalizer circuit according to claim 1, wherein the RC network comprises a resistor connected in parallel with a capacitor. 3. The continuous time linear equalizer circuit according to claim 1, wherein: each of the first and second transistors comprises a bipolar transistor; and the differential transistor pair comprises two bipolar transistors. 4. The continuous time linear equalizer circuit according to claim 3, wherein: the continuous time linear equalizer circuit is connected to a power supply having a minimum voltage of approximately 2.5 volts; and the continuous time linear equalizer circuit has a transfer function comprising a peak at a frequency above 5 gigahertz and a gain above 15 decibels. 5. The continuous time linear equalizer circuit according to claim 1, wherein: each of the first and second transistors comprises a metal-oxide semiconductor field-effect transistor; and the differential transistor pair comprises two metal-oxide semiconductor field-effect transistors. 6. The continuous time linear equalizer circuit according to claim 5, wherein: the continuous time linear equalizer circuit is connected to a power supply having a minimum voltage of approximately 1.2 volts. 7. The continuous time linear equalizer circuit according to claim 1, further comprising: a first current source directly connected to the first transistor; and a second current source directly connected to the second transistor. 8. The continuous time linear equalizer circuit according to claim 1, wherein: the first transistor is directly connected to a transistor of the differential transistor pair; and the second transistor is directly connected to a remaining transistor of the differential transistor pair. 9. A method for compensating for losses of high-frequency components in an analog signal, comprising: providing a continuous time equalizer circuit, comprising: a differential transistor pair; a cross-coupled circuit, connected in parallel with the differential transistor pair, comprising a first transistor cross-coupled with a second transistor; and applying a voltage range from approximately 1.2 volts to approximately 2.5 volts to the continuous time equalizer circuit. 10. The method according to claim 9, wherein the continuous time equalizer circuit further comprises an adaptive control circuit connected in parallel with the cross-coupled circuit and configured to control a peaking gain and a frequency gain according to a control voltage. 11. The method according to claim 9, wherein the continuous time equalizer circuit has a transfer function comprising a peak having a frequency above 5 gigahertz and a gain above 15 decibels. 12. 
A communication system having a transmitter and a receiver, comprising: an equalizer circuit, connected between the transmitter and the receiver, comprising: a differential transistor pair; a cross-coupled circuit connected to the differential transistor pair, comprising: a first transistor; and a second transistor; wherein the first and second transistors are cross-coupled with each other; a first current source connected to a terminal of the first transistor; and a second current source connected to a terminal of the second transistor. 13. The communication system according to claim 12, wherein the equalizer circuit further comprises an RC network connected in parallel with the cross-coupled circuit, the RC network comprising a capacitor connected in parallel with a resistor. 14. The communication system according to claim 12, wherein the equalizer circuit further comprises an adaptive control circuit connected in parallel with the cross-coupled circuit and configured to control a peaking gain and a frequency gain according to a control voltage. 15. The communication system according to claim 12, wherein the equalizer circuit has a transfer function comprising a peak having a frequency above 5 gigahertz and a gain above 15 decibels. 16. The communication system according to claim 12, wherein: the first transistor is directly connected to a transistor of the differential transistor pair; and the second transistor is directly connected to a remaining transistor of the differential transistor pair. 17. The communication system according to claim 12, wherein: each of the first and second transistors comprises a bipolar transistor; and the differential transistor pair comprises two bipolar transistors. 18. The communication system according to claim 17, wherein the equalizer circuit is connected to a power supply having a minimum voltage of approximately 2.5 volts. 19. The communication system according to claim 12, wherein: each of the first and second transistors comprises a metal-oxide semiconductor field-effect transistor; and the differential transistor pair comprises two metal-oxide semiconductor field-effect transistors. 20. The communication system according to claim 19, wherein the equalizer circuit is connected to a power supply having a minimum voltage of approximately 1.2 volts.
Various embodiments of the present technology comprise a method and apparatus for a continuous time linear equalizer (CTLE). In various embodiments, the CTLE comprises a cross-coupled transistor pair that operates as a negative impedance converter. The CTLE produces a transfer function that provides high gain peaking at a high frequency without increasing the size of the die area and/or the power supply level. 1. A continuous time linear equalizer circuit, comprising: a differential transistor pair; a cross-coupled circuit, connected in parallel with the differential transistor pair, comprising: a first transistor; and a second transistor; wherein the first and second transistors are cross-coupled with each other; and an RC network connected in parallel with the cross-coupled circuit. 2. The continuous time linear equalizer circuit according to claim 1, wherein the RC network comprises a resistor connected in parallel with a capacitor. 3. The continuous time linear equalizer circuit according to claim 1, wherein: each of the first and second transistors comprises a bipolar transistor; and the differential transistor pair comprises two bipolar transistors. 4. The continuous time linear equalizer circuit according to claim 3, wherein: the continuous time linear equalizer circuit is connected to a power supply having a minimum voltage of approximately 2.5 volts; and the continuous time linear equalizer circuit has a transfer function comprising a peak at a frequency above 5 gigahertz and a gain above 15 decibels. 5. The continuous time linear equalizer circuit according to claim 1, wherein: each of the first and second transistors comprises a metal-oxide semiconductor field-effect transistor; and the differential transistor pair comprises two metal-oxide semiconductor field-effect transistors. 6. The continuous time linear equalizer circuit according to claim 5, wherein: the continuous time linear equalizer circuit is connected to a power supply having a minimum voltage of approximately 1.2 volts. 7. The continuous time linear equalizer circuit according to claim 1, further comprising: a first current source directly connected to the first transistor; and a second current source directly connected to the second transistor. 8. The continuous time linear equalizer circuit according to claim 1, wherein: the first transistor is directly connected to a transistor of the differential transistor pair; and the second transistor is directly connected to a remaining transistor of the differential transistor pair. 9. A method for compensating for losses of high-frequency components in an analog signal, comprising: providing a continuous time equalizer circuit, comprising: a differential transistor pair; a cross-coupled circuit, connected in parallel with the differential transistor pair, comprising a first transistor cross-coupled with a second transistor; and applying a voltage range from approximately 1.2 volts to approximately 2.5 volts to the continuous time equalizer circuit. 10. The method according to claim 9, wherein the continuous time equalizer circuit further comprises an adaptive control circuit connected in parallel with the cross-coupled circuit and configured to control a peaking gain and a frequency gain according to a control voltage. 11. The method according to claim 9, wherein the continuous time equalizer circuit has a transfer function comprising a peak having a frequency above 5 gigahertz and a gain above 15 decibels. 12. 
A communication system having a transmitter and a receiver, comprising: an equalizer circuit, connected between the transmitter and the receiver, comprising: a differential transistor pair; a cross-coupled circuit connected to the differential transistor pair, comprising: a first transistor; and a second transistor; wherein the first and second transistors are cross-coupled with each other; a first current source connected to a terminal of the first transistor; and a second current source connected to a terminal of the second transistor. 13. The communication system according to claim 12, wherein the equalizer circuit further comprises an RC network connected in parallel with the cross-coupled circuit, the RC network comprising a capacitor connected in parallel with a resistor. 14. The communication system according to claim 12, wherein the equalizer circuit further comprises an adaptive control circuit connected in parallel with the cross-coupled circuit and configured to control a peaking gain and a frequency gain according to a control voltage. 15. The communication system according to claim 12, wherein the equalizer circuit has a transfer function comprising a peak having a frequency above 5 gigahertz and a gain above 15 decibels. 16. The communication system according to claim 12, wherein: the first transistor is directly connected to a transistor of the differential transistor pair; and the second transistor is directly connected to a remaining transistor of the differential transistor pair. 17. The communication system according to claim 12, wherein: each of the first and second transistors comprises a bipolar transistor; and the differential transistor pair comprises two bipolar transistors. 18. The communication system according to claim 17, wherein the equalizer circuit is connected to a power supply having a minimum voltage of approximately 2.5 volts. 19. The communication system according to claim 12, wherein: each of the first and second transistors comprises a metal-oxide semiconductor field-effect transistor; and the differential transistor pair comprises two metal-oxide semiconductor field-effect transistors. 20. The communication system according to claim 19, wherein the equalizer circuit is connected to a power supply having a minimum voltage of approximately 1.2 volts.
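The claimed equalizer's key behavior is a transfer function that peaks at high frequency (above 5 GHz, with more than 15 dB of gain) without extra die area or supply voltage. The sketch below evaluates a generic single-zero, two-pole CTLE transfer function to reproduce that qualitative peaking; the pole and zero frequencies are illustrative assumptions, not values from the application.

```python
# A minimal sketch of a generic CTLE frequency response,
# H(s) = A0 (1 + s/wz) / ((1 + s/wp1)(1 + s/wp2)).
# The placements below are assumed for illustration and merely reproduce
# the qualitative behavior of gain peaking at high frequency.
import numpy as np

A0 = 1.0                              # DC gain (linear)
f_z, f_p1, f_p2 = 0.5e9, 8e9, 12e9    # zero and pole frequencies in Hz (assumed)

def ctle_gain_db(f):
    """Magnitude of H(j*2*pi*f) in dB for an array of frequencies f."""
    s = 2j * np.pi * f
    wz, wp1, wp2 = (2 * np.pi * x for x in (f_z, f_p1, f_p2))
    h = A0 * (1 + s / wz) / ((1 + s / wp1) * (1 + s / wp2))
    return 20 * np.log10(np.abs(h))

freqs = np.logspace(7, 10.5, 400)     # 10 MHz .. ~30 GHz
gains = ctle_gain_db(freqs)
peak_f = freqs[np.argmax(gains)]
print(f"low-frequency gain: {ctle_gain_db(np.array([1e4]))[0]:.1f} dB")
print(f"peak: {gains.max():.1f} dB at {peak_f / 1e9:.1f} GHz")
```

With these assumed pole/zero locations the sketch prints roughly 0 dB at low frequency and a peak near 18 dB in the several-gigahertz range, matching the shape, though not necessarily the exact numbers, of the claimed response.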
2,600
10,489
10,489
15,155,408
2,651
A headset having game, chat and microphone audio signals is provided with a programmable signal processor for individually modifying the audio signals and a memory configured to store a plurality of user-selectable signal-processing parameter settings that determine the manner in which the audio signals will be altered by the signal processor. The parameter settings collectively form a preset, and one or more user-operable controls can select and activate a preset from the plurality of presets stored in memory. The parameters stored in the selected preset can be loaded into the signal processor such that the sound characteristics of the audio paths are modified in accordance with the parameter settings in the selected preset.
1. A headset system comprising: a signal processor operable to detect a tone in an audio signal, a sound characteristic of the audio signal being modified in accordance with the detected tone. 2. The headset system of claim 1, wherein the headset system comprises: a data port connection operable to receive one or more settings that control the modification of the audio signal. 3. The headset system of claim 1, wherein the signal processor is located in a headset. 4. The headset system of claim 1, wherein the modification of the audio signal may be selected with controls located on an external control device connected via a wire or wirelessly to a headset. 5. The headset system of claim 1, wherein the signal processor is operably controlled by an external device wirelessly connected to a headset. 6. The headset system of claim 1, wherein the modification of the audio signal is determined by a set of buttons, one of which is used to switch between a plurality of presets and another of which is used as a master preset, thereby enabling a user to alternate between the two selected presets by pressing the master preset button. 7. (canceled) 8. The headset system of claim 1, wherein the modification of the audio signal is determined by speech recognition. 9. The headset system of claim 1, wherein the modification of the audio signal is programmed with a computer. 10. The headset system of claim 1, wherein the signal processor is reprogrammable. 11. The headset system of claim 1, wherein the signal processor comprises a plurality of signal paths, wherein each signal path in the plurality of signal paths includes a noise gate to remove undesired sounds whose amplitude is below a preset threshold, thereby improving the signal-to-noise ratio of the audio path. 12. The headset system of claim 1, wherein the headset system comprises a volume limiter incorporating a variable threshold level to limit the maximum volume level sent to the speakers such that it does not exceed a predetermined amplitude. 13. The headset system of claim 1, wherein the headset system comprises a sound storage device that announces prerecorded messages in accordance with the detected tone. 14. The headset system of claim 1, wherein the headset system comprises a frequency-limited volume expander having: a variable gain amplifier with adjustable threshold, the variable gain amplifier being operable to increase the sound level of the audio signal in a frequency range without affecting the sound level of the audio signal outside of the frequency range. 15. The headset system of claim 1, wherein the audio signal comprises one of a game audio signal, a chat audio signal and a microphone audio signal. 16. The headset system in claim 1, wherein the control signal is generated by a video game. 17. A headset operable to receive a plurality of audio signals from a device, wherein the headset comprises: a signal processor operable to modify one or more sound characteristics of an audio signal; and an input operable to receive a control signal that determines the modification, the control signal comprising one or more embedded tones. 18. The headset in claim 17, wherein the plurality of audio signals comprises a game audio signal, a chat audio signal and a microphone audio signal. 19. The headset in claim 17, wherein the control signal is a wireless signal. 20. The headset in claim 17, wherein the control signal is generated by a video game. 21. The headset in claim 17, wherein the control signal is decoded from the audio signal.
A headset having game, chat and microphone audio signals is provided with a programmable signal processor for individually modifying the audio signals and a memory configured to store a plurality of user-selectable signal-processing parameter settings that determine the manner in which the audio signals will be altered by the signal processor. The parameter settings collectively form a preset, and one or more user-operable controls can select and activate a preset from the plurality of presets stored in memory. The parameters stored in the selected preset can be loaded into the signal processor such that the sound characteristics of the audio paths are modified in accordance with the parameter settings in the selected preset. 1. A headset system comprising: a signal processor operable to detect a tone in an audio signal, a sound characteristic of the audio signal being modified in accordance with the detected tone. 2. The headset system of claim 1, wherein the headset system comprises: a data port connection operable to receive one or more settings that control the modification of the audio signal. 3. The headset system of claim 1, wherein the signal processor is located in a headset. 4. The headset system of claim 1, wherein the modification of the audio signal may be selected with controls located on an external control device connected via a wire or wirelessly to a headset. 5. The headset system of claim 1, wherein the signal processor is operably controlled by an external device wirelessly connected to a headset. 6. The headset system of claim 1, wherein the modification of the audio signal is determined by a set of buttons, one of which is used to switch between a plurality of presets and another of which is used as a master preset, thereby enabling a user to alternate between the two selected presets by pressing the master preset button. 7. (canceled) 8. The headset system of claim 1, wherein the modification of the audio signal is determined by speech recognition. 9. The headset system of claim 1, wherein the modification of the audio signal is programmed with a computer. 10. The headset system of claim 1, wherein the signal processor is reprogrammable. 11. The headset system of claim 1, wherein the signal processor comprises a plurality of signal paths, wherein each signal path in the plurality of signal paths includes a noise gate to remove undesired sounds whose amplitude is below a preset threshold, thereby improving the signal-to-noise ratio of the audio path. 12. The headset system of claim 1, wherein the headset system comprises a volume limiter incorporating a variable threshold level to limit the maximum volume level sent to the speakers such that it does not exceed a predetermined amplitude. 13. The headset system of claim 1, wherein the headset system comprises a sound storage device that announces prerecorded messages in accordance with the detected tone. 14. The headset system of claim 1, wherein the headset system comprises a frequency-limited volume expander having: a variable gain amplifier with adjustable threshold, the variable gain amplifier being operable to increase the sound level of the audio signal in a frequency range without affecting the sound level of the audio signal outside of the frequency range. 15. The headset system of claim 1, wherein the audio signal comprises one of a game audio signal, a chat audio signal and a microphone audio signal. 16. The headset system in claim 1, wherein the control signal is generated by a video game. 17. 
A headset operable to receive a plurality of audio signals from a device, wherein the headset comprises: a signal processor operable to modify one or more sound characteristics of an audio signal; and an input operable to receive a control signal that determines the modification, the control signal comprising one or more embedded tones. 18. The headset in claim 17, wherein the plurality of audio signals comprises a game audio signal, a chat audio signal and a microphone audio signal. 19. The headset in claim 17, wherein the control signal is a wireless signal. 20. The headset in claim 17, wherein the control signal is generated by a video game. 21. The headset in claim 17, wherein the control signal is decoded from the audio signal.
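The preset mechanism in the abstract, storing named sets of signal-processing parameters and loading a selected set into the processor, can be illustrated with a short sketch. The parameter fields, preset names, and the SignalProcessor class below are hypothetical; the headset's actual parameter set is not specified here.

```python
# A minimal sketch, with hypothetical names, of storing user-selectable
# signal-processing presets and loading one into the headset's processor.
from dataclasses import dataclass

@dataclass
class Preset:
    game_gain_db: float       # gain applied to the game audio path
    chat_gain_db: float       # gain applied to the chat audio path
    mic_gain_db: float        # gain applied to the microphone path
    gate_threshold_db: float  # noise-gate threshold (sounds below are muted)
    volume_limit_db: float    # maximum level sent to the speakers

# Presets stored in memory; names and values are illustrative only.
PRESETS = {
    "footsteps": Preset(6.0, -3.0, 0.0, -60.0, -6.0),
    "movie":     Preset(3.0, 0.0, 0.0, -70.0, -3.0),
}

class SignalProcessor:
    """Stand-in for the headset DSP: holds the active parameter set."""
    def load(self, preset: Preset) -> None:
        self.params = preset
        print(f"loaded preset: {preset}")

dsp = SignalProcessor()

def on_preset_control(name: str) -> None:
    # A user-operable control (or, per claim 1, a detected control tone)
    # selects a preset from memory and loads its parameters into the DSP.
    dsp.load(PRESETS[name])

on_preset_control("footsteps")
```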
2,600
10,490
10,490
15,917,937
2,687
A computer implemented method, apparatus and computer program product are provided that, under control of one or more processors configured with executable instructions, detect wireless activity of a mobile device in proximity to a local environment. The method, apparatus and computer program product track a detected path of the mobile device based on one or more characteristics of a signal broadcast from the mobile device, determine whether the detected path of the mobile device indicates an unauthorized physical presence relative to a restricted zone related to the local environment, and output an alert when the detected path indicates the unauthorized physical presence of the mobile device relative to the restricted zone.
1. A method, comprising: under control of one or more processors configured with executable instructions; detecting wireless activity of a mobile device in proximity to a local environment; tracking a detected path of the mobile device based on one or more characteristics of a broadcast signal from the mobile device; determining whether the detected path of the mobile device indicates an unauthorized physical presence of the mobile device relative to a restricted zone related to the local environment; and generating an alert in response to the detected path indicating the unauthorized physical presence of the mobile device relative to the restricted zone. 2. The method of claim 1, wherein the detecting comprises detecting wireless activity of multiple mobile devices and the tracking comprises tracking separate detected paths of the multiple mobile devices utilizing wireless signatures of the corresponding mobile devices. 3. The method of claim 1, wherein the detecting comprises detecting a wireless signature (WLS) of a mobile device, wherein the WLS is assigned a limited access authorization level and wherein the determining comprises determining whether the detected path follows a limited access route corresponding to the limited access authorization level. 4. The method of claim 1, further comprising collecting wireless activity over time in connection with multiple mobile devices passing through the local environment, defining one or more predetermined routes based on the wireless activity collected for the multiple mobile devices, wherein the determining comprises comparing the detected path to one or more predetermined routes associated with the restricted zone. 5. The method of claim 1, wherein the detecting, tracking and determining are performed while denying the mobile device access to a wireless network. 6. The method of claim 1, wherein the mobile device is one or more of a phone, wearable device or tablet device. 7. The method of claim 1, wherein the detecting comprises detecting a request to connect from the mobile device, denying the request to connect and thereafter continuing the detecting of the wireless activity, the request to connect including a wireless signature of the mobile device. 8. The method of claim 1, wherein the determining comprises characterizing the detected path as normal activity or abnormal activity. 9. An apparatus, comprising: a tracking circuit to detect wireless activity of a mobile device in proximity to a local environment, the tracking circuit to track a detected path of the mobile device based on one or more characteristics of a broadcast signal from the mobile device; one or more processors; a memory storing program instructions accessible by the one or more processors, wherein, responsive to execution of the program instructions, the one or more processors to perform the following: determining whether the detected path of the mobile device indicates an unauthorized physical presence of the mobile device relative to a restricted zone related to the local environment; and generating an alert in response to the detected path indicating the unauthorized physical presence of the mobile device relative to the restricted zone. 10. The apparatus of claim 9, wherein the one or more processors determine whether the detected path follows a limited access route corresponding to a limited access authorization level. 11. 
The apparatus of claim 9, further comprising a network transceiver to communicate with a wireless network, wherein the network transceiver includes the tracking circuit to track the wireless activity in connection with granting access to the wireless network, and the one or more processors to deny the mobile device access to the wireless network while the one or more processors determines whether the detected path indicates the unauthorized physical presence. 12. The apparatus of claim 9, wherein the tracking circuit is configured to detect a wireless signature (WLS) of a mobile device prior to or independent of i) establishing access to a wireless network or ii) establishing a communications session, and wherein the one or more processors detect, within the wireless activity, a request to connect from the mobile device, the request to connect including the WLS for the mobile device. 13. The apparatus of claim 9, wherein the one or more processors store and manage one or more of i) tracking records, ii) a learning log and iii) an activity log. 14. The apparatus of claim 13, wherein the tracking circuit is configured to detect a wireless signature (WLS) of the mobile device that is transmitted by the mobile device without or before establishing a communication session, and wherein the tracking records are associated with tracking events for the mobile device and the corresponding WLS. 15. The apparatus of claim 13, wherein the learning log defines a restricted zone in connection with the local environment, the restricted zone corresponding to a predetermined route. 16. The apparatus of claim 13, wherein the wireless activity includes a wireless signature (WLS) of a mobile device and wherein the activity log maintains access authorization levels assigned to individual WLS, and an activity history, the access authorization levels include one or more of complete access, full exterior access, entry area access, delivery route access, garbage route access, utility route access and no access. 17. A computer program product comprising a non-transitory non-signal computer readable storage medium comprising computer executable code to: detect wireless activity of a mobile device in proximity to a local environment; track a detected path of the mobile device based on one or more characteristics of a broadcast signal from the mobile device; determine whether the detected path of the mobile device indicates an unauthorized physical presence of the mobile device relative to a restricted zone related to the local environment; and generate an alert when the detected path indicates the unauthorized physical presence of the mobile device relative to the restricted zone. 18. The computer program product of claim 17, wherein the computer executable code further characterizes the detected path as normal activity or abnormal activity. 19. The computer program product of claim 17, wherein the computer executable code further compares the detected path to one or more predetermined routes associated with the restricted zone. 20. The computer program product of claim 17, wherein the tracking is performed in connection with a wireless network, and wherein the detecting, tracking and determining are performed while denying the mobile device access to the wireless network. 21. 
The apparatus of claim 9, wherein the one or more processors are to collect wireless activity over time in connection with multiple mobile devices passing through the local environment, and to track detected paths of the multiple mobile devices based on the wireless activity collected for the multiple mobile devices, the apparatus further comprising a display to present, through a graphical user interface, a graphical model of the local environment and to present the detected paths on the graphical model. 22. The apparatus of claim 9, wherein the tracking circuit detects the wireless activity before and without establishing a communications link with the mobile device.
A computer implemented method, apparatus and computer program product are provided that, under control of one or more processors configured with executable instructions, detect wireless activity of a mobile device in proximity to a local environment. The method, apparatus and computer program product track a detected path of the mobile device based on one or more characteristics of a signal broadcast from the mobile device, determine whether the detected path of the mobile device indicates an unauthorized physical presence relative to a restricted zone related to the local environment, and output an alert when the detected path indicates the unauthorized physical presence of the mobile device relative to the restricted zone. 1. A method, comprising: under control of one or more processors configured with executable instructions; detecting wireless activity of a mobile device in proximity to a local environment; tracking a detected path of the mobile device based on one or more characteristics of a broadcast signal from the mobile device; determining whether the detected path of the mobile device indicates an unauthorized physical presence of the mobile device relative to a restricted zone related to the local environment; and generating an alert in response to the detected path indicating the unauthorized physical presence of the mobile device relative to the restricted zone. 2. The method of claim 1, wherein the detecting comprises detecting wireless activity of multiple mobile devices and the tracking comprises tracking separate detected paths of the multiple mobile devices utilizing wireless signatures of the corresponding mobile devices. 3. The method of claim 1, wherein the detecting comprises detecting a wireless signature (WLS) of a mobile device, wherein the WLS is assigned a limited access authorization level and wherein the determining comprises determining whether the detected path follows a limited access route corresponding to the limited access authorization level. 4. The method of claim 1, further comprising collecting wireless activity over time in connection with multiple mobile devices passing through the local environment, defining one or more predetermined routes based on the wireless activity collected for the multiple mobile devices, wherein the determining comprises comparing the detected path to one or more predetermined routes associated with the restricted zone. 5. The method of claim 1, wherein the detecting, tracking and determining are performed while denying the mobile device access to a wireless network. 6. The method of claim 1, wherein the mobile device is one or more of a phone, wearable device or tablet device. 7. The method of claim 1, wherein the detecting comprises detecting a request to connect from the mobile device, denying the request to connect and thereafter continuing the detecting of the wireless activity, the request to connect including a wireless signature of the mobile device. 8. The method of claim 1, wherein the determining comprises characterizing the detected path as normal activity or abnormal activity. 9. 
An apparatus, comprising: a tracking circuit to detect wireless activity of a mobile device in proximity to a local environment, the tracking circuit to track a detected path of the mobile device based on one or more characteristics of a broadcast signal from the mobile device; one or more processors; a memory storing program instructions accessible by the one or more processors, wherein, responsive to execution of the program instructions, the one or more processors to perform the following: determining whether the detected path of the mobile device indicates an unauthorized physical presence of the mobile device relative to a restricted zone related to the local environment; and generating an alert in response to the detected path indicating the unauthorized physical presence of the mobile device relative to the restricted zone. 10. The apparatus of claim 9, wherein the one or more processors determine whether the detected path follows a limited access route corresponding to a limited access authorization level. 11. The apparatus of claim 9, further comprising a network transceiver to communicate with a wireless network, wherein the network transceiver includes the tracking circuit to track the wireless activity in connection with granting access to the wireless network, and the one or more processors to deny the mobile device access to the wireless network while the one or more processors determines whether the detected path indicates the unauthorized physical presence. 12. The apparatus of claim 9, wherein the tracking circuit is configured to detect a wireless signature (WLS) of a mobile device prior to or independent of i) establishing access to a wireless network or ii) establishing a communications session, and wherein the one or more processors detect, within the wireless activity, a request to connect from the mobile device, the request to connect including the WLS for the mobile device. 13. The apparatus of claim 9, wherein the one or more processors store and manage one or more of i) tracking records, ii) a learning log and iii) an activity log. 14. The apparatus of claim 13, wherein the tracking circuit is configured to detect a wireless signature (WLS) of the mobile device that is transmitted by the mobile device without or before establishing a communication session, and wherein the tracking records are associated with tracking events for the mobile device and the corresponding WLS. 15. The apparatus of claim 13, wherein the learning log defines a restricted zone in connection with the local environment, the restricted zone corresponding to a predetermined route. 16. The apparatus of claim 13, wherein the wireless activity includes a wireless signature (WLS) of a mobile device and wherein the activity log maintains access authorization levels assigned to individual WLS, and an activity history, the access authorization levels include one or more of complete access, full exterior access, entry area access, delivery route access, garbage route access, utility route access and no access. 17. 
A computer program product comprising a non-transitory non-signal computer readable storage medium comprising computer executable code to: detect wireless activity of a mobile device in proximity to a local environment; track a detected path of the mobile device based on one or more characteristics of a broadcast signal from the mobile device; determine whether the detected path of the mobile device indicates an unauthorized physical presence of the mobile device relative to a restricted zone related to the local environment; and generate an alert when the detected path indicates the unauthorized physical presence of the mobile device relative to the restricted zone. 18. The computer program product of claim 17, wherein the computer executable code further characterizes the detected path as normal activity or abnormal activity. 19. The computer program product of claim 17, wherein the computer executable code further compares the detected path to one or more predetermined routes associated with the restricted zone. 20. The computer program product of claim 17, wherein the tracking is performed in connection with a wireless network, and wherein the detecting, tracking and determining are performed while denying the mobile device access to the wireless network. 21. The apparatus of claim 9, wherein the one or more processors are to collect wireless activity over time in connection with multiple mobile devices passing through the local environment, and to track detected paths of the multiple mobile devices based on the wireless activity collected for the multiple mobile devices, the apparatus further comprising a display to present, through a graphical user interface, a graphical model of the local environment and to present the detected paths on the graphical model. 22. The apparatus of claim 9, wherein the tracking circuit detects the wireless activity before and without establishing a communications link with the mobile device.
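At the core of these claims is comparing a tracked path, derived from a device's broadcast signal, against permitted routes and alerting on unauthorized presence. The sketch below shows only that comparison step and assumes positions have already been estimated (in practice they would come from signal characteristics such as RSSI); the route, tolerance, and function names are illustrative, not from the application.

```python
# A minimal sketch of the path-versus-route check: positions attributed to a
# device's wireless signature are compared against a permitted route (e.g., a
# delivery route), and an alert is raised when the path strays from it.
import math

ALLOWED_ROUTE = [(0, 0), (10, 0), (20, 0)]   # waypoints of a permitted route (assumed)
ROUTE_TOLERANCE_M = 3.0                      # max allowed deviation in meters (assumed)

def distance_to_route(point) -> float:
    """Distance from a point to the nearest route waypoint (coarse check)."""
    return min(math.dist(point, wp) for wp in ALLOWED_ROUTE)

def check_path(device_signature: str, detected_path) -> None:
    # Characterize the path as normal or abnormal activity, alerting on the
    # first position that leaves the permitted route.
    for point in detected_path:
        if distance_to_route(point) > ROUTE_TOLERANCE_M:
            print(f"ALERT: {device_signature} off permitted route at {point}")
            return
    print(f"{device_signature}: normal activity")

check_path("wls:ab34", [(0, 1), (9, 0), (19, 1)])    # follows the route
check_path("wls:9f2c", [(0, 1), (8, 6), (12, 10)])   # strays toward a restricted zone
```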
2,600
10,491
10,491
15,942,877
2,684
A motor vehicle includes a rearview camera capturing images of a scene behind the motor vehicle. An electronic processor receives the images captured by the camera. A virtual image projection arrangement is communicatively coupled to the electronic processor and presents a virtual image dependent upon the images captured by the camera. The virtual image is visible by a driver of the vehicle after being reflected by a windshield.
1. A motor vehicle comprising: a windshield; a rearview camera configured to capture images of a scene behind the motor vehicle; an electronic processor configured to receive the images captured by the camera; and a virtual image projection arrangement communicatively coupled to the electronic processor and configured to present a virtual image dependent upon the images captured by the camera, the virtual image being visible by a driver of the vehicle after being reflected by the windshield. 2. The vehicle of claim 1 wherein the virtual image projection arrangement includes a head up display. 3. The vehicle of claim 1 wherein the virtual image appears to the driver to be disposed at a distance of at least one foot beyond the windshield. 4. The vehicle of claim 1 wherein the virtual image appears to the driver to be visible through the windshield. 5. The vehicle of claim 1 wherein the virtual image appears to be disposed in a same direction relative to the driver's eyes as a conventional rearview mirror would be. 6. The vehicle of claim 1 wherein the virtual image is based on a mirror image of at least one of the images captured by the rearview camera. 7. The vehicle of claim 1 wherein the virtual image includes alphanumeric text information and/or an icon. 8. A method of presenting information to a driver of a motor vehicle having a windshield, the method comprising the steps of: capturing images of a scene behind the motor vehicle; and presenting a virtual image dependent upon the captured images, the virtual image being visible by a driver of the vehicle after being reflected by the windshield, the virtual image appearing to the driver to be at least two meters away from the driver. 9. The method of claim 8 wherein the virtual image is presented by a head up display. 10. The method of claim 8 wherein the images are captured by a camera mounted on a rear bumper of the motor vehicle. 11. The method of claim 8 wherein the virtual image appears to the driver to be visible through the windshield. 12. The method of claim 8 wherein the virtual image is visible below and adjacent to a top edge of the windshield and at a midpoint of the top edge of the windshield. 13. The method of claim 8 wherein the virtual image is based on a mirror image of the captured image. 14. The method of claim 8 wherein the virtual image includes alphanumeric text information and/or an icon. 15. A motor vehicle comprising: a rearview camera configured to capture images of a scene behind the motor vehicle; an electronic processor configured to receive the images captured by the camera and produce a video signal based upon the captured images; a flat panel display communicatively coupled to the electronic processor and configured to receive the video signal and produce a light field dependent upon the video signal; and a concavely curved reflective surface positioned to reflect the light field toward eyes of a driver of the motor vehicle. 16. The vehicle of claim 15 further comprising a partially reflective surface positioned to pass a portion of the reflected light field to the driver. 17. The vehicle of claim 15 wherein the reflected light field appears to the driver to be a virtual image disposed at a distance of at least two meters from the driver. 18. The vehicle of claim 17 wherein the virtual image appears to the driver to be visible through the windshield. 19. 
The vehicle of claim 17 wherein the virtual image appears to be disposed in a same direction relative to the driver's eyes as a conventional rearview mirror would be. 20. The vehicle of claim 17 wherein the virtual image includes alphanumeric text information and/or an icon.
A motor vehicle includes a rearview camera capturing images of a scene behind the motor vehicle. An electronic processor receives the images captured by the camera. A virtual image projection arrangement is communicatively coupled to the electronic processor and presents a virtual image dependent upon the images captured by the camera. The virtual image is visible by a driver of the vehicle after being reflected by a windshield. 1. A motor vehicle comprising: a windshield; a rearview camera configured to capture images of a scene behind the motor vehicle; an electronic processor configured to receive the images captured by the camera; and a virtual image projection arrangement communicatively coupled to the electronic processor and configured to present a virtual image dependent upon the images captured by the camera, the virtual image being visible by a driver of the vehicle after being reflected by the windshield. 2. The vehicle of claim 1 wherein the virtual image projection arrangement includes a head up display. 3. The vehicle of claim 1 wherein the virtual image appears to the driver to be disposed at a distance of at least one foot beyond the windshield. 4. The vehicle of claim 1 wherein the virtual image appears to the driver to be visible through the windshield. 5. The vehicle of claim 1 wherein the virtual image appears to be disposed in a same direction relative to the driver's eyes as a conventional rearview mirror would be. 6. The vehicle of claim 1 wherein the virtual image is based on a mirror image of at least one of the images captured by the rearview camera. 7. The vehicle of claim 1 wherein the virtual image includes alphanumeric text information and/or an icon. 8. A method of presenting information to a driver of a motor vehicle having a windshield, the method comprising the steps of: capturing images of a scene behind the motor vehicle; and presenting a virtual image dependent upon the captured images, the virtual image being visible by a driver of the vehicle after being reflected by the windshield, the virtual image appearing to the driver to be at least two meters away from the driver. 9. The method of claim 8 wherein the virtual image is presented by a head up display. 10. The method of claim 8 wherein the images are captured by a camera mounted on a rear bumper of the motor vehicle. 11. The method of claim 8 wherein the virtual image appears to the driver to be visible through the windshield. 12. The method of claim 8 wherein the virtual image is visible below and adjacent to a top edge of the windshield and at a midpoint of the top edge of the windshield. 13. The method of claim 8 wherein the virtual image is based on a mirror image of the captured image. 14. The method of claim 8 wherein the virtual image includes alphanumeric text information and/or an icon. 15. A motor vehicle comprising: a rearview camera configured to capture images of a scene behind the motor vehicle; an electronic processor configured to receive the images captured by the camera and produce a video signal based upon the captured images; a flat panel display communicatively coupled to the electronic processor and configured to receive the video signal and produce a light field dependent upon the video signal; and a concavely curved reflective surface positioned to reflect the light field toward eyes of a driver of the motor vehicle. 16. The vehicle of claim 15 further comprising a partially reflective surface positioned to pass a portion of the reflected light field to the driver. 17. 
The vehicle of claim 15 wherein the reflected light field appears to the driver to be a virtual image disposed at a distance of at least two meters from the driver. 18. The vehicle of claim 17 wherein the virtual image appears to the driver to be visible through the windshield. 19. The vehicle of claim 17 wherein the virtual image appears to be disposed in a same direction relative to the driver's eyes as a conventional rearview mirror would be. 20. The vehicle of claim 17 wherein the virtual image includes alphanumeric text information and/or an icon.
2,600
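The record above describes a camera-based virtual rearview display: a flat panel produces a light field, and a concavely curved reflector places a virtual image of the mirrored rear-view scene at least two meters from the driver. As a rough illustration of the two operations the claims imply, the sketch below flips a frame into a mirror image and applies the textbook mirror equation; the focal length, object distance, and function names are illustrative assumptions, not values from the application.

```python
import numpy as np

def mirror_frame(frame: np.ndarray) -> np.ndarray:
    """Flip a (H, W, 3) camera frame left-right so it reads like a mirror."""
    return frame[:, ::-1, :]

def virtual_image_distance(focal_m: float, object_m: float) -> float:
    """Mirror equation 1/f = 1/d_o + 1/d_i; a negative result means a
    virtual image located |d_i| metres behind the reflector."""
    return 1.0 / (1.0 / focal_m - 1.0 / object_m)

frame = np.zeros((480, 640, 3), dtype=np.uint8)            # stand-in captured frame
hud_frame = mirror_frame(frame)                            # mirror image of the scene
d_i = virtual_image_distance(focal_m=0.5, object_m=0.4)    # assumed geometry
print(f"virtual image {abs(d_i):.1f} m behind the reflector")  # prints 2.0 m
```

With the display inside the reflector's focal length, as in the assumed numbers above, the image is virtual and magnified, which is consistent with the "at least two meters" recitation.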
10,492
10,492
15,368,684
2,657
A method for sonic acoustic beacon processing. The method may include constantly operating a microphone and a chip, even when an application processor is asleep; receiving by the microphone, while the application processor is asleep, an acoustic beacon; converting, by the microphone, the acoustic beacon to electrical signals representative of the acoustic beacon; receiving by the chip the electrical signals representative of the acoustic beacon; searching, by the chip and in the electrical signals representative of the acoustic beacon, for a predefined preamble; when detecting the predefined preamble then decoding, by the chip, electrical signals representative of a rest of the acoustic beacon to provide digital data, and determining by the chip whether to awake the application processor; when determining to awake the application processor then participating, by the chip, in an awakening of the application processor; and sending, by the chip, the digital data to the application processor, after the awakening of the application processor.
1. A method for sonic acoustic beacon processing, the method comprises: constantly operating a microphone and a chip, even when an application processor is asleep; receiving by the microphone, while the application processor is asleep, an acoustic beacon; converting, by the microphone, the acoustic beacon to electrical signals representative of the acoustic beacon; receiving by the chip the electrical signals representative of the acoustic beacon; searching, by the chip and in the electrical signals representative of the acoustic beacon, for a predefined preamble; when detecting the predefined preamble then decoding, by the chip, electrical signals representative of a rest of the acoustic beacon to provide digital data, and determining by the chip whether to awake the application processor; when determining to awake the application processor then participating, by the chip, in an awakening of the application processor; and sending, by the chip, the digital data to the application processor, after the awakening of the application processor. 2. The method according to claim 1 comprising constantly monitoring by the chip for the electrical signals that are representative of the acoustic beacon. 3. The method according to claim 1 comprising determining to awake the application processor based on the digital data. 4. The method according to claim 1 wherein the digital data comprises filtering information and additional content; wherein the method comprises determining to awake the application processor and to send the additional content to the application processor when the filtering information is of a first value and when detecting the predefined preamble. 5. The method according to claim 4 wherein the additional content is an application activation code or a universally unique identifier. 6. The method according to claim 1 wherein the digital data comprises filtering information and additional content; wherein the method comprises determining to awake the application processor and to send the additional content to the application processor when the filtering information is of a second value, when detecting the predefined preamble and when the additional content is of a predefined value. 7. The method according to claim 6 wherein the additional content is an application activation code or a universally unique identifier. 8. The method according to claim 1 wherein the digital data comprises filtering information and additional content; wherein the method comprises determining to awake the application processor and to send the additional content to the application processor when the filtering information is of a third value, when detecting the predefined preamble and when the additional content is within a predefined range of values. 9. The method according to claim 8 wherein the additional content is an application activation code or a universally unique identifier. 10. The method according to claim 1 wherein the digital data comprises filtering information and additional content; wherein the additional content comprises (a) an application identifier that identifies an application to be executed by the application processor, and (b) at least one application data unit to be processed by the application processor when the application processor executes the application; wherein the method comprises determining to awake the application processor and to send the additional content to the application processor in response to a value of the filtering information. 11. 
The method according to claim 10 wherein the at least one application data unit comprises coupon appearance information and a coupon identifier. 12. The method according to claim 1 comprising sending, to a computer and by a device that comprises the chip and the application processor, the digital data; and receiving from the computer a response to the digital data. 13. The method according to claim 1 comprising determining by at least one of the chip and the application processor whether the acoustic beacon is associated with a predefined application; transmitting at least a part of the digital data to a computer when the acoustic beacon is associated with the predefined application; receiving from the computer coupon information and presenting to a user of the device the coupon information. 14. The method according to claim 13 comprising ignoring the acoustic beacon when determining that the acoustic beacon is not associated with the predefined application. 15. The method according to claim 1 comprising converting the acoustic beacon when the acoustic beacon is within a human audible acoustic range. 16. The method according to claim 1 comprising converting the acoustic beacon when the acoustic beacon is outside a human audible acoustic range. 17. The method according to claim 1 comprising determining by at least one of the chip and the application processor whether the acoustic beacon is associated with a predefined application; and performing, by a device that comprises the chip and the application processor, at least one operation out of performing a payment, presenting information to a user of the device, presenting services to the user, performing an automatic execution of the predefined application and sending a message to a sender of the acoustic beacon. 18. The method according to claim 1 wherein the digital data comprises at least one out of a message, contact information, a store promotion, an advertisement, a coupon, a password, a Uniform Resource Locator, indoor navigation metadata and location metadata. 19. A device for sonic acoustic beacon processing, the device comprises a microphone, a chip and an application processor; wherein the microphone is configured to (a) receive, while the application processor is asleep, an acoustic beacon and (b) convert the acoustic beacon to electrical signals representative of the acoustic beacon; wherein the chip is configured to receive the electrical signals representative of the acoustic beacon and to search for a predefined preamble; wherein when detecting the predefined preamble the chip is configured to (i) decode electrical signals representative of a rest of the acoustic beacon to provide digital data; and (ii) determine whether to awake the application processor; and wherein when determining to awake the application processor the chip is configured to participate in an awakening of the application processor and to send the digital data to the application processor, after the awakening of the application processor. 20. The device according to claim 19 comprising a power supply that is configured to constantly power the microphone and the chip even when the application processor is asleep. 21. 
A computer program product that stores instructions that, once executed by a device that comprises a microphone, a chip and an application processor, cause the device to perform the steps of constantly operating a microphone and a chip, even when an application processor is asleep; receiving by the microphone, while the application processor is asleep, an acoustic beacon; converting, by the microphone, the acoustic beacon to electrical signals representative of the acoustic beacon; receiving by the chip the electrical signals representative of the acoustic beacon; searching, by the chip and in the electrical signals representative of the acoustic beacon, for a predefined preamble; when detecting the predefined preamble then decoding, by the chip, electrical signals representative of a rest of the acoustic beacon to provide digital data, and determining by the chip whether to awake the application processor; when determining to awake the application processor then participating, by the chip, in an awakening of the application processor; and sending, by the chip, the digital data to the application processor, after the awakening of the application processor.
A method for sonic acoustic beacon processing. The method may include constantly operating a microphone and a chip, even when an application processor is asleep; receiving by the microphone, while the application processor is asleep, an acoustic beacon; converting, by the microphone, the acoustic beacon to electrical signals representative of the acoustic beacon; receiving by the chip the electrical signals representative of the acoustic beacon; searching, by the chip and in the electrical signals representative of the acoustic beacon, for a predefined preamble; when detecting the predefined preamble then decoding, by the chip, electrical signals representative of a rest of the acoustic beacon to provide digital data, and determining by the chip whether to awake the application processor; when determining to awake the application processor then participating, by the chip, in an awakening of the application processor; and sending, by the chip, the digital data to the application processor, after the awakening of the application processor.1. A method for sonic acoustic beacon processing, the method comprises: constantly operating a microphone and a chip, even when an application processor is asleep; receiving by the microphone, while the application processor is asleep, an acoustic beacon; converting, by the microphone, the acoustic beacon to electrical signals representative of the acoustic beacon; receiving by the chip the electrical signals representative of the acoustic beacon; searching, by the chip and in the electrical signals representative of the acoustic beacon, for a predefined preamble; when detecting the predefined preamble then decoding, by the chip, electrical signals representative of a rest of the acoustic beacon to provide digital data, and determining by the chip whether to awake the application processor; when determining to awake the application processor then participating, by the chip, in an awakening of the application processor; and sending, by the chip, the digital data to the application processor, after the awakening of the application processor. 2. The method according to claim 1 comprising constantly monitoring by the chip for the electrical signals that are representative of the acoustic beacon. 3. The method according to claim 1 comprising determining to awake the application processor based on the digital data. 4. The method according to claim 1 wherein the digital data comprises filtering information and additional content; wherein the method comprises determining to awake the application processor and to send the additional content to the application processor when the filtering information is of a first value and when detecting the predefined preamble. 5. The method according to claim 4 wherein the additional content is an application activation code or a universally unique identifier. 6. The method according to claim 1 wherein the digital data comprises filtering information and additional content; wherein the method comprises determining to awake the application processor and to send the additional content to the application processor when the filtering information is of a second value, when detecting the predefined preamble and when the additional content is of a predefined value. 7. The method according to claim 6 wherein the additional content is an application activation code or a universally unique identifier. 8. 
The method according to claim 1 wherein the digital data comprises filtering information and additional content; wherein the method comprises determining to awake the application processor and to send the additional content to the application processor when the filtering information is of a third value, when detecting the predefined preamble and when the additional content is within a predefined range of values. 9. The method according to claim 8 wherein the additional content is an application activation code or a universally unique identifier. 10. The method according to claim 1 wherein the digital data comprises filtering information and additional content; wherein the additional content comprises (a) an application identifier that identifies an application to be executed by the application processor, and (b) at least one application data unit to be processed by the application processor when the application processor executes the application; wherein the method comprises determining to awake the application processor and to send the additional content to the application processor in response to a value of the filtering information. 11. The method according to claim 10 wherein the at least one application data unit comprises coupon appearance information and a coupon identifier. 12. The method according to claim 1 comprising sending, to a computer and by a device that comprises the chip and the application processor, the digital data; and receiving from the computer a response to the digital data. 13. The method according to claim 1 comprising determining by at least one of the chip and the application processor whether the acoustic beacon is associated with a predefined application; transmitting at least a part of the digital data to a computer when the acoustic beacon is associated with the predefined application; receiving from the computer coupon information and presenting to a user of the device the coupon information. 14. The method according to claim 13 comprising ignoring the acoustic beacon when determining that the acoustic beacon is not associated with the predefined application. 15. The method according to claim 1 comprising converting the acoustic beacon when the acoustic beacon is within a human audible acoustic range. 16. The method according to claim 1 comprising converting the acoustic beacon when the acoustic beacon is outside a human audible acoustic range. 17. The method according to claim 1 comprising determining by at least one of the chip and the application processor whether the acoustic beacon is associated with a predefined application; and performing, by a device that comprises the chip and the application processor, at least one operation out of performing a payment, presenting information to a user of the device, presenting services to the user, performing an automatic execution of the predefined application and sending a message to a sender of the acoustic beacon. 18. The method according to claim 1 wherein the digital data comprises at least one out of a message, contact information, a store promotion, an advertisement, a coupon, a password, a Uniform Resource Locator, indoor navigation metadata and location metadata. 19. 
A device for sonic acoustic beacon processing, the device comprises a microphone, a chip and an application processor; wherein the microphone is configured to (a) receive, while the application processor is asleep, an acoustic beacon and (b) convert the acoustic beacon to electrical signals representative of the acoustic beacon; wherein the chip is configured to receive the electrical signals representative of the acoustic beacon and to search for a predefined preamble; wherein when detecting the predefined preamble the chip is configured to (i) decode electrical signals representative of a rest of the acoustic beacon to provide digital data; and (ii) determine whether to awake the application processor; and wherein when determining to awake the application processor the chip is configured to participate in an awakening of the application processor and to send the digital data to the application processor, after the awakening of the application processor. 20. The device according to claim 19 comprising a power supply that is configured to constantly power the microphone and the chip even when the application processor is asleep. 21. A computer program product that stores instructions that, once executed by a device that comprises a microphone, a chip and an application processor, cause the device to perform the steps of constantly operating a microphone and a chip, even when an application processor is asleep; receiving by the microphone, while the application processor is asleep, an acoustic beacon; converting, by the microphone, the acoustic beacon to electrical signals representative of the acoustic beacon; receiving by the chip the electrical signals representative of the acoustic beacon; searching, by the chip and in the electrical signals representative of the acoustic beacon, for a predefined preamble; when detecting the predefined preamble then decoding, by the chip, electrical signals representative of a rest of the acoustic beacon to provide digital data, and determining by the chip whether to awake the application processor; when determining to awake the application processor then participating, by the chip, in an awakening of the application processor; and sending, by the chip, the digital data to the application processor, after the awakening of the application processor.
2,600
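The beacon-processing record above amounts to a wake-on-sound protocol: an always-on chip matches incoming audio against a predefined preamble, decodes filtering information plus additional content, and wakes the application processor only when the filtering rule is satisfied (wake unconditionally, wake on an exact content match, or wake when the content falls in a predefined range, per claims 4, 6 and 8). A minimal sketch of that decision logic follows; the frame layout, field widths, and constants are invented for illustration and are not taken from the claims.

```python
# Hypothetical beacon layout: PREAMBLE | 8-bit filtering field | 8-bit content.
PREAMBLE = [1, 0, 1, 1, 0, 0, 1, 0]      # assumed bit pattern
EXPECTED_CONTENT = 0x42                   # assumed predefined value (claim 6 case)
CONTENT_RANGE = range(0x40, 0x50)         # assumed predefined range (claim 8 case)

def bits_to_int(bits):
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def process_beacon(bits):
    """Return (wake_processor, content) for a decoded bit stream, or None
    when the predefined preamble is absent and the chip keeps listening."""
    if bits[: len(PREAMBLE)] != PREAMBLE:
        return None                                   # no preamble: do not wake
    rest = bits[len(PREAMBLE):]
    mode = bits_to_int(rest[0:8])                     # filtering information
    content = bits_to_int(rest[8:16])                 # additional content
    if mode == 0:                                     # "first value": always wake
        return True, content
    if mode == 1:                                     # "second value": exact match
        return content == EXPECTED_CONTENT, content
    if mode == 2:                                     # "third value": in-range test
        return content in CONTENT_RANGE, content
    return False, content
```

Only when the returned flag is true would the chip participate in awakening the application processor and forward the decoded content to it.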
10,493
10,493
16,101,121
2,646
A method for concurrent execution of multiple protocols using a single radio of a wireless communication device is provided that includes receiving, in a radio command scheduler, a first radio command from a first protocol stack of a plurality of protocol stacks executing on the wireless communication device, determining a scheduling policy for the first radio command based on a current state of each protocol stack of the plurality of protocol stacks, and scheduling the first radio command in a radio command queue for the radio based on the scheduling policy, wherein the radio command scheduler uses the radio command queue to schedule radio commands received from the plurality of protocol stacks.
1. A method for concurrent execution of multiple protocols using a single radio comprised in a wireless communication device, the method comprising: receiving, in a radio command scheduler, a first radio command from a first protocol stack of a plurality of protocol stacks executing on the wireless communication device; determining a scheduling policy for the first radio command based on a current state of each protocol stack of the plurality of protocol stacks; and scheduling the first radio command in a radio command queue for the radio based on the scheduling policy, wherein the radio command scheduler uses the radio command queue to schedule radio commands received from the plurality of protocol stacks. 2. The method of claim 1, wherein the scheduling policy comprises a priority and a time constraint for radio commands from each protocol stack, wherein the time constraint for a protocol stack indicates whether or not a radio command from the protocol stack is time critical. 3. The method of claim 2, wherein scheduling the first radio command further comprises: aborting execution of a second radio command currently being executed by the radio when the first radio command has a higher priority than the second radio command; and adding the first radio command to a head of the radio command queue. 4. The method of claim 3, further comprising rescheduling the second radio command in the radio command queue, wherein the second radio command is scheduled to start execution from a beginning of the second radio command. 5. The method of claim 2, wherein scheduling the first radio command further comprises: scanning the radio command queue to determine if a specified time slot for the first radio command is available; and inserting the first radio command in the radio command queue at the specified time slot if the specified time slot is available. 6. The method of claim 5, further comprising: pre-empting a second radio command occupying the specified time slot when the first radio command has a higher priority than the second radio command; and inserting the first radio command in the radio command queue at the specified time slot. 7. The method of claim 6, further comprising appending the second radio command to the radio command queue if the second radio command is not time critical. 8. The method of claim 5, further comprising appending the first radio command to the radio command queue when a second radio command occupying the specified time slot has a higher priority than the first radio command and the first radio command is not time critical. 9. The method of claim 5, further comprising rejecting the first radio command when a second radio command occupying the specified time slot has a higher priority than the first radio command and the first radio command is time critical. 10. The method of claim 6, wherein the second radio command is from a second protocol stack of the plurality of protocol stacks. 11. 
A wireless communication device comprising: a radio; a radio command scheduler; a memory storing software instructions, wherein execution of the software instructions causes the wireless communication device to concurrently execute multiple protocols using the radio, the software instructions comprising software instructions to cause the radio command scheduler to: receive a first radio command from a first protocol stack of a plurality of protocol stacks executing on the wireless communication device; determine a scheduling policy for the first radio command based on a current state of each protocol stack of the plurality of protocol stacks; and schedule the first radio command in a radio command queue for the radio based on the scheduling policy, wherein the radio command scheduler uses the radio command queue to schedule radio commands received from the plurality of protocol stacks; and a processor coupled to the memory to execute the software instructions. 12. The wireless communication device of claim 11, wherein the scheduling policy comprises a priority and a time constraint for radio commands from each protocol stack, wherein the time constraint for a protocol stack indicates whether or not a radio command from the protocol stack is time critical. 13. The wireless communication device of claim 12, wherein the software instructions to schedule the first radio command further comprise software instructions to: abort execution of a second radio command currently being executed by the radio when the first radio command has a higher priority than the second radio command; and add the first radio command to a head of the radio command queue. 14. The wireless communication device of claim 13, further comprising software instructions to reschedule the second radio command in the radio command queue, wherein the second radio command is scheduled to start execution from a beginning of the second radio command. 15. The wireless communication device of claim 12, wherein the software instructions to schedule the first radio command further comprise software instructions to: scan the radio command queue to determine if a specified time slot for the first radio command is available; and insert the first radio command in the radio command queue at the specified time slot if the specified time slot is available. 16. The wireless communication device of claim 15, wherein the software instructions further comprise software instructions to: pre-empt a second radio command occupying the specified time slot when the first radio command has a higher priority than the second radio command; and insert the first radio command in the radio command queue at the specified time slot. 17. The wireless communication device of claim 16, wherein the software instructions further comprise software instructions to append the second radio command to the radio command queue if the second radio command is not time critical. 18. The wireless communication device of claim 15, wherein the software instructions further comprise software instructions to append the first radio command to the radio command queue when a second radio command occupying the specified time slot has a higher priority than the first radio command and the first radio command is not time critical. 19. 
The wireless communication device of claim 15, wherein the software instructions further comprise software instructions to reject the first radio command when a second radio command occupying the specified time slot has a higher priority than the first radio command and the first radio command is time critical. 20. The wireless communication device of claim 16, wherein the second radio command is from a second protocol stack of the plurality of protocol stacks.
A method for concurrent execution of multiple protocols using a single radio of a wireless communication device is provided that includes receiving, in a radio command scheduler, a first radio command from a first protocol stack of a plurality of protocol stacks executing on the wireless communication device, determining a scheduling policy for the first radio command based on a current state of each protocol stack of the plurality of protocol stacks, and scheduling the first radio command in a radio command queue for the radio based on the scheduling policy, wherein the radio command scheduler uses the radio command queue to schedule radio commands received from the plurality of protocol stacks.1. A method for concurrent execution of multiple protocols using a single radio comprised in a wireless communication device, the method comprising: receiving, in a radio command scheduler, a first radio command from a first protocol stack of a plurality of protocol stacks executing on the wireless communication device; determining a scheduling policy for the first radio command based on a current state of each protocol stack of the plurality of protocol stacks; and scheduling the first radio command in a radio command queue for the radio based on the scheduling policy, wherein the radio command scheduler uses the radio command queue to schedule radio commands received from the plurality of protocol stacks. 2. The method of claim 1, wherein the scheduling policy comprises a priority and a time constraint for radio commands from each protocol stack, wherein the time constraint for a protocol stack indicates whether or not a radio command from the protocol stack is time critical. 3. The method of claim 2, wherein scheduling the first radio command further comprises: aborting execution of a second radio command currently being executed by the radio when the first radio command has a higher priority than the second radio command; and adding the first radio command to a head of the radio command queue. 4. The method of claim 3, further comprising rescheduling the second radio command in the radio command queue, wherein the second radio command is scheduled to start execution from a beginning of the second radio command. 5. The method of claim 2, wherein scheduling the first radio command further comprises: scanning the radio command queue to determine if a specified time slot for the first radio command is available; and inserting the first radio command in the radio command queue at the specified time slot if the specified time slot is available. 6. The method of claim 5, further comprising: pre-empting a second radio command occupying the specified time slot when the first radio command has a higher priority than the second radio command; and inserting the first radio command in the radio command queue at the specified time slot. 7. The method of claim 6, further comprising appending the second radio command to the radio command queue if the second radio command is not time critical. 8. The method of claim 5, further comprising appending the first radio command to the radio command queue when a second radio command occupying the specified time slot has a higher priority than the first radio command and the first radio command is not time critical. 9. The method of claim 5, further comprising rejecting the first radio command when a second radio command occupying the specified time slot has a higher priority than the first radio command and the first radio command is time critical. 10. 
The method of claim 6, wherein the second radio command is from a second protocol stack of the plurality of protocol stacks. 11. A wireless communication device comprising: a radio; a radio command scheduler; a memory storing software instructions, wherein execution of the software instructions causes the wireless communication device to concurrently execute multiple protocols using the radio, the software instructions comprising software instructions to cause the radio command scheduler to: receive a first radio command from a first protocol stack of a plurality of protocol stacks executing on the wireless communication device; determine a scheduling policy for the first radio command based on a current state of each protocol stack of the plurality of protocol stacks; and schedule the first radio command in a radio command queue for the radio based on the scheduling policy, wherein the radio command scheduler uses the radio command queue to schedule radio commands received from the plurality of protocol stacks; and a processor coupled to the memory to execute the software instructions. 12. The wireless communication device of claim 11, wherein the scheduling policy comprises a priority and a time constraint for radio commands from each protocol stack, wherein the time constraint for a protocol stack indicates whether or not a radio command from the protocol stack is time critical. 13. The wireless communication device of claim 12, wherein the software instructions to schedule the first radio command further comprise software instructions to: abort execution of a second radio command currently being executed by the radio when the first radio command has a higher priority than the second radio command; and add the first radio command to a head of the radio command queue. 14. The wireless communication device of claim 13, further comprising software instructions to reschedule the second radio command in the radio command queue, wherein the second radio command is scheduled to start execution from a beginning of the second radio command. 15. The wireless communication device of claim 12, wherein the software instructions to schedule the first radio command further comprise software instructions to: scan the radio command queue to determine if a specified time slot for the first radio command is available; and insert the first radio command in the radio command queue at the specified time slot if the specified time slot is available. 16. The wireless communication device of claim 15, wherein the software instructions further comprise software instructions to: pre-empt a second radio command occupying the specified time slot when the first radio command has a higher priority than the second radio command; and insert the first radio command in the radio command queue at the specified time slot. 17. The wireless communication device of claim 16, wherein the software instructions further comprise software instructions to append the second radio command to the radio command queue if the second radio command is not time critical. 18. The wireless communication device of claim 15, wherein the software instructions further comprise software instructions to append the first radio command to the radio command queue when a second radio command occupying the specified time slot has a higher priority than the first radio command and the first radio command is not time critical. 19. 
The wireless communication device of claim 15, wherein the software instructions further comprise software instructions to reject the first radio command when a second radio command occupying the specified time slot has a higher priority than the first radio command and the first radio command is time critical. 20. The wireless communication device of claim 16, wherein the second radio command is from a second protocol stack of the plurality of protocol stacks.
2,600
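The scheduling policy recited in claims 5 through 9 above is essentially slot-based insertion with priority pre-emption: insert if the requested slot is free, pre-empt a lower-priority occupant (re-appending it when it is not time critical), defer a lower-priority non-critical newcomer, and reject a lower-priority time-critical one. The sketch below encodes that decision table; the data structures and names are assumptions, since the claims do not fix an implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RadioCommand:
    stack: str            # originating protocol stack (e.g. "BLE", "Zigbee")
    priority: int         # higher value wins
    time_critical: bool   # from the per-stack scheduling policy
    slot: int             # requested time slot in the radio command queue

class RadioCommandQueue:
    def __init__(self):
        self.slots: dict[int, RadioCommand] = {}

    def schedule(self, cmd: RadioCommand) -> bool:
        """Apply the claim 5-9 decision table; return False on rejection."""
        occupant: Optional[RadioCommand] = self.slots.get(cmd.slot)
        if occupant is None:                      # claim 5: slot free, insert
            self.slots[cmd.slot] = cmd
            return True
        if cmd.priority > occupant.priority:      # claim 6: pre-empt occupant
            self.slots[cmd.slot] = cmd
            if not occupant.time_critical:        # claim 7: re-append loser
                self.append(occupant)
            return True
        if not cmd.time_critical:                 # claim 8: defer newcomer
            self.append(cmd)
            return True
        return False                              # claim 9: reject newcomer

    def append(self, cmd: RadioCommand) -> None:
        """Place a command in the first slot past the current queue tail."""
        self.slots[max(self.slots, default=-1) + 1] = cmd
```

A real scheduler would also handle the claim 3 case, aborting a command the radio is already executing, but that requires radio-state plumbing the record leaves unspecified.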
10,494
10,494
14,636,562
2,672
When the noise in an audio signal made up of both speech and noise is suppressed, the quality of the speech in the audio signal is usually degraded. The speech obtained from a noise-suppressed audio signal is improved by determining the linear predictive coding (LPC) characteristics of the audio signal before or without noise suppression and by determining the LPC characteristics of the noise-suppressed audio signal. The convolution of those differing characteristics provides an improved-quality speech signal, with the original noise level reduced or suppressed.
1. A method of improving speech quality in an audio signal comprising speech and noise, after noise in the audio signal is suppressed, the method comprising: receiving a first audio signal comprising speech and noise; determining characteristics of the first audio signal and generating a linear predictive coding (LPC) representation of said first audio signal; providing the first audio signal to a noise suppressor, which is configured to suppress at least some of the noise in the first audio signal and to thereby produce a noise-reduced audio signal; generating an error signal, using a linear predictive coding (LPC) estimation method, from the noise-reduced audio signal, the error signal comprising speech in the first audio signal after at least some of the noise in the first audio signal is removed; applying the error signal to the LPC representation of the first audio signal to synthesize a second audio signal having reduced speech distortion and speech quality better than that of the speech in the first audio signal; and providing the second audio signal to an audio signal transducer configured to produce audible sound waves from the second audio signal. 2. The method of claim 1, wherein the step of applying the error signal to the LPC representation causes the audibility of identifying characteristics of vowels, consonants and other sounds to be increased. 3. The method of claim 1, wherein the step of receiving a time-domain representation of a first audio signal comprising speech and noise occurs in a motor vehicle and wherein the step of providing the second audio signal to an audio signal transducer occurs in said vehicle. 4. The method of claim 1, wherein the step of determining characteristics of the first audio signal comprises determining speech formants in the first audio signal and generating an LPC representation of the speech formants. 5. The method of claim 1, wherein the step of generating an error signal comprises receiving a time-domain representation of a noise-reduced distorted version of the first audio signal. 6. The method of claim 1, wherein the step of applying the error signal to the LPC representation of the first audio signal comprises a convolution of the error signal by the first audio signal. 7. 
A method of improving speech quality in an audio signal after suppressing noise in the audio signal, the method comprising: receiving a first audio signal comprising speech and noise, the first audio signal being represented by frames of digital data; performing a first partial noise suppression on the first audio signal to provide a first partial noise-reduced audio signal; performing a second partial noise suppression on the first partial noise-reduced audio signal to provide a second noise-reduced audio signal; determining a linear predictive coding (LPC) representation of the first partial noise-reduced audio signal; generating an error signal from the second noise-reduced audio signal; applying the error signal to the LPC representation of the first partial noise-reduced audio signal to synthesize an improved-quality speech signal from the first audio signal; and providing the improved-quality speech signal to an audio signal transducer, which is configured to produce audible sound waves from the improved-quality speech signal. 8. The method of claim 7, wherein the step of applying the error signal to the LPC representation comprises convolution of the error signal by the LPC representation. 9. The method of claim 7, wherein the step of receiving a first audio signal and the step of providing the improved-quality speech signal to an audio signal transducer take place in a motor vehicle. 10. An apparatus for improving the quality of speech obtained from a first audio signal having speech and noise, after noise in the first audio signal is suppressed, the apparatus comprising: a first noise suppressor configured to suppress at least some of the noise in the first audio signal to thereby produce a noise-reduced first audio signal, the noise-reduced first audio signal comprising distorted speech obtained from the first audio signal; a linear predictive code (LPC) analyzer comprising: an LPC estimator configured to receive the first audio signal and provide a linear predictive code (LPC) representation of the first audio signal; an error signal generator comprising: a linear predictive code (LPC) estimator configured to generate a linear predictive code (LPC) representation of the noise-reduced first audio signal, after noise in the first audio signal is at least partially suppressed; an LPC synthesizer configured to synthesize speech signals from the LPC representation of the first audio signal and the LPC representation of the audio signal after noise in the first audio signal is at least partially suppressed; and an audio signal transducer configured to generate audible sound waves from synthesized speech signals. 11. 
The apparatus of claim 10, wherein at least one of the first noise suppressor, the linear predictive code analyzer, and the error signal generator comprises a processor. 12. The apparatus of claim 10, further comprising a microphone and a cellular telephone in a motor vehicle, the microphone being operatively coupled to the first noise suppressor and the cellular telephone being operatively coupled to the audio signal transducer, wherein noise in audio signals obtained from the microphone is reduced and speech in the audio signals obtained from the microphone is provided to the cellular telephone. 13. The apparatus of claim 10, wherein the noise suppressor is configured to suppress wind noise, road noise and engine noise.
When the noise in an audio signal made up of both speech and noise is suppressed, the quality of the speech in the audio signal is usually degraded. The speech obtained from a noise-suppressed audio signal is improved by determining the linear predictive coding (LPC) characteristics of the audio signal before or without noise suppression and by determining the LPC characteristics of the noise-suppressed audio signal. The convolution of those differing characteristics provides an improved-quality speech signal, with the original noise level reduced or suppressed.1. A method of improving speech quality in an audio signal comprising speech and noise, after noise in the audio signal is suppressed, the method comprising: receiving a first audio signal comprising speech and noise; determining characteristics of the first audio signal and generating a linear predictive coding (LPC) representation of said first audio signal; providing the first audio signal to a noise suppressor, which is configured to suppress at least some of the noise in the first audio signal and to thereby produce a noise-reduced audio signal; generating an error signal, using a linear predictive coding (LPC) estimation method, from the noise-reduced audio signal, the error signal comprising speech in the first audio signal after at least some of the noise in the first audio signal is removed; applying the error signal to the LPC representation of the first audio signal to synthesize a second audio signal having reduced speech distortion and speech quality better than that of the speech in the first audio signal; and providing the second audio signal to an audio signal transducer configured to produce audible sound waves from the second audio signal. 2. The method of claim 1, wherein the step of applying the error signal to the LPC representation causes the audibility of identifying characteristics of vowels, consonants and other sounds to be increased. 3. The method of claim 1, wherein the step of receiving a time-domain representation of a first audio signal comprising speech and noise occurs in a motor vehicle and wherein the step of providing the second audio signal to an audio signal transducer occurs in said vehicle. 4. The method of claim 1, wherein the step of determining characteristics of the first audio signal comprises determining speech formants in the first audio signal and generating an LPC representation of the speech formants. 5. The method of claim 1, wherein the step of generating an error signal comprises receiving a time-domain representation of a noise-reduced distorted version of the first audio signal. 6. The method of claim 1, wherein the step of applying the error signal to the LPC representation of the first audio signal comprises a convolution of the error signal by the first audio signal. 7. 
A method of improving speech quality in an audio signal after suppressing noise in the audio signal, the method comprising: receiving a first audio signal comprising speech and noise, the first audio signal being represented by frames of digital data; performing a first partial noise suppression on the first audio signal to provide a first partial noise-reduced audio signal; performing a second partial noise suppression on the first partial noise-reduced audio signal to provide a second noise-reduced audio signal; determining a linear predictive coding (LPC) representation of the first partial noise-reduced audio signal; generating an error signal from the second noise-reduced audio signal; applying the error signal to the LPC representation of the first partial noise-reduced audio signal to synthesize an improved-quality speech signal from the first audio signal; and providing the improved-quality speech signal to an audio signal transducer, which is configured to produce audible sound waves from the improved-quality speech signal. 8. The method of claim 7, wherein the step of applying the error signal to the LPC representation comprises convolution of the error signal by the LPC representation. 9. The method of claim 7, wherein the step of receiving a first audio signal and the step of providing the improved-quality speech signal to an audio signal transducer take place in a motor vehicle. 10. An apparatus for improving the quality of speech obtained from a first audio signal having speech and noise, after noise in the first audio signal is suppressed, the apparatus comprising: a first noise suppressor configured to suppress at least some of the noise in the first audio signal to thereby produce a noise-reduced first audio signal, the noise-reduced first audio signal comprising distorted speech obtained from the first audio signal; a linear predictive code (LPC) analyzer comprising: an LPC estimator configured to receive the first audio signal and provide a linear predictive code (LPC) representation of the first audio signal; an error signal generator comprising: a linear predictive code (LPC) estimator configured to generate a linear predictive code (LPC) representation of the noise-reduced first audio signal, after noise in the first audio signal is at least partially suppressed; an LPC synthesizer configured to synthesize speech signals from the LPC representation of the first audio signal and the LPC representation of the audio signal after noise in the first audio signal is at least partially suppressed; and an audio signal transducer configured to generate audible sound waves from synthesized speech signals. 11. 
The apparatus of claim 10, wherein at least one of the first noise suppressor, the linear predictive code analyzer, and the error signal generator comprises a processor. 12. The apparatus of claim 10, further comprising a microphone and a cellular telephone in a motor vehicle, the microphone being operatively coupled to the first noise suppressor and the cellular telephone being operatively coupled to the audio signal transducer, wherein noise in audio signals obtained from the microphone is reduced and speech in the audio signals obtained from the microphone is provided to the cellular telephone. 13. The apparatus of claim 10, wherein the noise suppressor is configured to suppress wind noise, road noise and engine noise.
2,600
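The LPC record above combines two analyses: the spectral envelope of the signal before suppression and the prediction-error (excitation) signal of the suppressed version, resynthesized through the original envelope. A rough single-pass sketch using the autocorrelation method is given below; a real system would work frame by frame with windowing and gain control, and the model order of 16 is an arbitrary choice, not one specified in the record.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coeffs(x: np.ndarray, order: int) -> np.ndarray:
    """Autocorrelation-method LPC; returns filter coefficients [1, -a1, ..., -aN]."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]  # lags 0..order
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])                       # solve R a = r
    return np.concatenate(([1.0], -a))

def enhance(noisy: np.ndarray, denoised: np.ndarray, order: int = 16) -> np.ndarray:
    """Resynthesize the denoised signal's excitation through the
    pre-suppression envelope, as the abstract and claim 1 describe."""
    a_orig = lpc_coeffs(noisy, order)        # envelope before noise suppression
    a_supp = lpc_coeffs(denoised, order)     # envelope after noise suppression
    residual = lfilter(a_supp, [1.0], denoised)   # prediction-error signal
    return lfilter([1.0], a_orig, residual)       # synthesis filter 1/A_orig(z)
```

The two `lfilter` calls are the "convolution of those differing characteristics" from the abstract: an FIR analysis filter produces the error signal and an all-pole synthesis filter reimposes the original formant structure.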
10,495
10,495
15,957,887
2,651
A device may control a video communication via transcoding and/or traffic shaping. The device may include a multipoint control unit (MCU) and/or a server. The device may receive one or more video streams from one or more devices. The device may analyze a received video stream to determine a viewing parameter. The viewing parameter may include a user viewing parameter, a device viewing parameter, and/or a content viewing parameter. The device may modify a video stream based on the viewing parameter. Modifying the video stream may include re-encoding the video stream, adjusting an orientation, removing a video detail, and/or adjusting a bit rate. The device may send the modified video stream to another device. The device may determine a bit rate for the video stream based on the viewing parameter. The device may indicate the bit rate by sending a feedback message and/or by signaling a bandwidth limit.
1-44. (canceled) 45. A first wireless transmit/receive unit (WTRU) comprising: a memory; and a processor configured to: receive an incoming video stream from a second WTRU; decode incoming video content of the incoming video stream; display the decoded incoming video content on a first screen of the first WTRU; analyze the decoded incoming video content to determine a property of a viewing environment in which the second WTRU is operating, the property of the viewing environment comprising at least one of a presence of a user in a vicinity of the second WTRU, a user's distance from a second screen of the second WTRU, or an ambient lighting condition associated with the second WTRU; capture video content using a camera of the first WTRU; encode the captured video content to produce an outgoing video stream, wherein the captured video content is encoded based on the determined property of the viewing environment in which the second WTRU is operating; and send the outgoing video stream to the second WTRU. 46. The first WTRU of claim 45, wherein the processor is further configured to determine an amount of detail that a user of the second WTRU could perceive based on the determined property of the viewing environment in which the second WTRU is operating, and wherein one or more details that are greater than the determined amount of detail that the user of the second WTRU could perceive are removed from the captured video content in the outgoing video stream. 47. The first WTRU of claim 45, wherein the processor is further configured to perform a two-way video communication session between the first WTRU and the second WTRU, the two-way video communication session comprising exchange of the incoming video stream and the outgoing video stream. 48. The first WTRU of claim 45, wherein the processor is further configured to: adjust the capture of the video content; or modify a re-sampling of the outgoing video stream. 49. The first WTRU of claim 48, wherein being configured to adjust the capture of the video content comprises being configured to crop the captured video content prior to being encoded. 50. The first WTRU of claim 45, wherein being configured to analyze the decoded incoming video content comprises the processor being configured to analyze one or more images of the decoded incoming video content. 51. The first WTRU of claim 45, wherein the processor is further configured to detect a change in the determined property of the viewing environment in which the second WTRU is operating. 52. The first WTRU of claim 51, wherein the processor is further configured to adjust the encoding of the captured video content based on the detected change. 53. The first WTRU of claim 51, wherein the processor is further configured to adjust the capture of the video content. 54. 
A method performed by a first wireless transmit/receive unit (WTRU) in a two-way video communication, the method comprising: receiving an incoming video stream from a second WTRU; decoding incoming video content of the incoming video stream; displaying the decoded incoming video content on a first screen of the first WTRU; analyzing the decoded incoming video content to determine a property of a viewing environment in which the second WTRU is operating, the property of the viewing environment comprising at least one of a presence of a user in a vicinity of the second WTRU, a user's distance from a second screen of the second WTRU, or an ambient lighting condition associated with the second WTRU; capturing video content via a camera associated with the first WTRU; encoding the captured video content to produce an outgoing video stream, wherein the captured video content is encoded based on the determined property of the viewing environment in which the second WTRU is operating; and sending the outgoing video stream to the second WTRU. 55. The method of claim 54, further comprising determining an amount of detail that a user of the second WTRU could perceive based on the determined property of the viewing environment in which the second WTRU is operating, and wherein one or more details that are greater than the determined amount of detail that the user of the second WTRU could perceive are removed from the captured video content in the outgoing video stream. 56. The method of claim 54, further comprising performing a two-way video communication session with the second WTRU, the two-way video communication session comprising exchange of the incoming video stream and the outgoing video stream. 57. The method of claim 54, further comprising: adjusting the capture of the video content; or modifying a re-sampling of the outgoing video stream sent to the second WTRU. 58. The method of claim 57, wherein adjusting the capture of the video content comprises cropping the captured video content prior to being encoded. 59. The method of claim 54, wherein analyzing the decoded incoming video content comprises analyzing one or more images of the decoded incoming video content. 60. The method of claim 54, further comprising detecting a change in the determined property of the viewing environment in which the second WTRU is operating. 61. The method of claim 60, further comprising adjusting the encoding of the captured video content based on the detected change. 62. The method of claim 60, further comprising adjusting the capture of the video content.
A device may control a video communication via transcoding and/or traffic shaping. The device may include a multipoint control unit (MCU) and/or a server. The device may receive one or more video streams from one or more devices. The device may analyze a received video stream to determine a viewing parameter. The viewing parameter may include a user viewing parameter, a device viewing parameter, and/or a content viewing parameter. The device may modify a video stream based on the viewing parameter. Modifying the video stream may include re-encoding the video stream, adjusting an orientation, removing a video detail, and/or adjusting a bit rate. The device may send the modified video stream to another device. The device may determine a bit rate for the video stream based on the viewing parameter. The device may indicate the bit rate by sending a feedback message and/or by signaling a bandwidth limit.1-44. (canceled) 45. A first wireless transmit/receive unit (WTRU) comprising: a memory; and a processor configured to: receive an incoming video stream from a second WTRU; decode incoming video content of the incoming video stream; display the decoded incoming video content on a first screen of the first WTRU; analyze the decoded incoming video content to determine a property of a viewing environment in which the second WTRU is operating, the property of the viewing environment comprising at least one of a presence of a user in a vicinity of the second WTRU, a user's distance from a second screen of the second WTRU, or an ambient lighting condition associated with the second WTRU; capture video content using a camera of the first WTRU; encode the captured video content to produce an outgoing video stream, wherein the captured video content is encoded based on the determined property of the viewing environment in which the second WTRU is operating; and send the outgoing video stream to the second WTRU. 46. The first WTRU of claim 45, wherein the processor is further configured to determine an amount of detail that a user of the second WTRU could perceive based on the determined property of the viewing environment in which the second WTRU is operating, and wherein one or more details that are greater than the determined amount of detail that the user of the second WTRU could perceive are removed from the captured video content in the outgoing video stream. 47. The first WTRU of claim 45, wherein the processor is further configured to perform a two-way video communication session between the first WTRU and the second WTRU, the two-way video communication session comprising exchange of the incoming video stream and the outgoing video stream. 48. The first WTRU of claim 45, wherein the processor is further configured to: adjust the capture of the video content; or modify a re-sampling of the outgoing video stream. 49. The first WTRU of claim 48, wherein being configured to adjust the capture of the video content comprises being configured to crop the captured video content prior to being encoded. 50. The first WTRU of claim 45, wherein being configured to analyze the decoded incoming video content comprises the processor being configured to analyze one or more images of the decoded incoming video content. 51. The first WTRU of claim 45, wherein the processor is further configured to detect a change in the determined property of the viewing environment in which the second WTRU is operating. 52. 
The first WTRU of claim 51, wherein the processor is further configured to adjust the encoding of the captured video content based on the detected change. 53. The first WTRU of claim 51, wherein the processor is further configured to adjust the capture of the video content. 54. A method performed by a first wireless transmit/receive unit (WTRU) in a two-way video communication, the method comprising: receiving an incoming video stream from a second WTRU; decoding incoming video content of the incoming video stream; displaying the decoded incoming video content on a first screen of the first WTRU; analyzing the decoded incoming video content to determine a property of a viewing environment in which the second WTRU is operating, the property of the viewing environment comprising at least one of a presence of a user in a vicinity of the second WTRU, a user's distance from a second screen of the second WTRU, or an ambient lighting condition associated with the second WTRU; capturing video content via a camera associated with the first WTRU; encoding the captured video content to produce an outgoing video stream, wherein the captured video content is encoded based on the determined property of the viewing environment in which the second WTRU is operating; and sending the outgoing video stream to the second WTRU. 55. The method of claim 54, further comprising determining an amount of detail that a user of the second WTRU could perceive based on the determined property of the viewing environment in which the second WTRU is operating, and wherein one or more details that are greater than the determined amount of detail that the user of the second WTRU could perceive are removed from the captured video content in the outgoing video stream. 56. The method of claim 54, further comprising performing a two-way video communication session with the second WTRU, the two-way video communication session comprising exchange of the incoming video stream and the outgoing video stream. 57. The method of claim 54, further comprising: adjusting the capture of the video content; or modifying a re-sampling of the outgoing video stream sent to the second WTRU. 58. The method of claim 57, wherein adjusting the capture of the video content comprises cropping the captured video content prior to being encoded. 59. The method of claim 54, wherein analyzing the decoded incoming video content comprises analyzing one or more images of the decoded incoming video content. 60. The method of claim 54, further comprising detecting a change in the determined property of the viewing environment in which the second WTRU is operating. 61. The method of claim 60, further comprising adjusting the encoding of the captured video content based on the detected change. 62. The method of claim 60, further comprising adjusting the capture of the video content.
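The encoder-side counterpart, claims 45 and 54, adapts the outgoing stream to those estimated properties. Here is a sketch under the assumption of a simple one-arc-minute acuity model: detail the remote viewer cannot resolve at their estimated distance is dropped by lowering resolution and bit rate. Every threshold and constant is invented for illustration.

```python
# Illustrative encoder-side adaptation: pick an outgoing resolution and bit
# rate from the remote viewing environment estimated above. The perceptual
# model (~60 pixels per degree of visual angle) and all numbers are assumed.
import math

def choose_encoding(distance_cm, ambient_is_dark, screen_height_cm=12.0,
                    full_res=(1280, 720), full_bitrate_kbps=2000):
    if distance_cm is None:              # nobody watching: floor the quality
        return (320, 180), 150
    # Visible angular size of the remote screen, in degrees.
    angle_deg = 2 * math.degrees(
        math.atan(screen_height_cm / (2 * distance_cm)))
    # At roughly 60 pixels per degree a viewer cannot resolve finer detail.
    max_useful_rows = int(60 * angle_deg)
    h = min(full_res[1], max_useful_rows)
    w = h * 16 // 9
    bitrate = full_bitrate_kbps * h / full_res[1]
    if ambient_is_dark:
        bitrate *= 0.7  # assumed: dark ambient masks compression artifacts
    return (w, h), int(bitrate)
```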
2,600
10,496
10,496
13,963,729
2,694
An embodiment provides a method, including: displaying content on a display screen of a first device; determining a position of a second device relative to the display screen of the first device; selecting a portion of the display screen of the first device based on a determined position of the second device; and transferring the portion of the display screen selected to one or more devices. Other aspects are described and claimed.
1. A method, comprising: displaying content on a display screen of a first device; determining a position of a second device relative to the display screen of the first device; selecting a portion of the display screen of the first device based on a determined position of the second device; and transferring the portion of the display screen selected to one or more devices. 2. The method of claim 1, wherein the selecting comprises selecting a portion of the display screen of the first device based on an outline of the second device. 3. The method of claim 2, wherein the outline of the second device is defined by the outer peripheral edge of the second device. 4. The method of claim 1, wherein the selecting comprises providing a preview of the portion of the display screen of the first device on a display screen of the second device. 5. The method of claim 1, further comprising applying scaling to the portion of the display screen of the first device. 6. The method of claim 1, further comprising adjusting the content of the portion of the display screen of the first device for display on the second device. 7. The method of claim 5, wherein applying scaling comprises scaling the portion of the display screen of the first device selected based on sizing information received at the first device. 8. The method of claim 5, wherein applying scaling comprises scaling the portion of the display screen of the first device selected based on gesture information received at the first device, wherein the gesture information is derived from the second device and is selected from the group consisting of a pinch gesture and a zoom gesture. 9. The method of claim 1, wherein the one or more devices comprises the second device. 10. The method of claim 1, wherein the one or more devices comprises a device associated with the first device by an association selected from the group consisting of a personal area network association and a cloud account association. 11. An information handling device, comprising: a display screen; one or more processors; a memory storing instructions accessible to the one or more processors, the instructions being executable by the one or more processors to: display content on the display screen of the information handling device; determine a position of a second device relative to the display screen of the information handling device; select a portion of the display screen of the information handling device based on a determined position of the second device; and transfer the portion of the display screen selected to one or more devices. 12. The information handling device of claim 11, wherein to select comprises selecting a portion of the display screen of the information handling device based on an outline of the second device. 13. The information handling device of claim 12, wherein the outline of the second device is defined by the outer peripheral edge of the second device. 14. The information handling device of claim 11, wherein to select comprises providing a preview of the portion of the display screen of the information handling device on a display screen of the second device. 15. The information handling device of claim 11, wherein the instructions are further executable by the one or more processors to apply scaling to the portion of the display screen of the information handling device. 16. 
The information handling device of claim 11, wherein the instructions are further executable by the one or more processors to adjust the content of the portion of the display screen of the information handling device for display on the second device. 17. The information handling device of claim 15, wherein to apply scaling comprises scaling the portion of the display screen of the information handling device selected based on sizing information received at the information handling device. 18. The information handling device of claim 15, wherein to apply scaling comprises scaling the portion of the display screen of the information handling device selected based on gesture information received at the information handling device, wherein the gesture information is derived from the second device and is selected from the group consisting of a pinch gesture and a zoom gesture. 19. The information handling device of claim 11, wherein the one or more devices comprises a device associated with the information handling device by an association selected from the group consisting of a personal area network association and a cloud account association. 20. A program product, comprising: a storage medium having computer readable program code stored therewith, the computer readable program code comprising: computer readable program code configured to display content on a display screen of a first device; computer readable program code configured to determine a position of a second device relative to the display screen of the first device; computer readable program code configured to select a portion of the display screen of the first device based on a determined position of the second device; and computer readable program code configured to transfer the portion of the display screen selected to one or more devices.
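To make the claimed select-and-transfer step concrete, here is a small sketch using Pillow: crop the region of the first device's framebuffer under the second device's outline (claim 2) and scale it for the receiving display (claim 5). The outline coordinates are assumed inputs from whatever positioning mechanism the first device uses to locate the second device.

```python
# Minimal sketch of the claimed selection-and-transfer step: crop the part
# of the first device's framebuffer covered by the second device's outline,
# then scale it for the receiving display. All names here are illustrative.
from PIL import Image

def select_portion(framebuffer: Image.Image, outline, target_size):
    """outline = (left, top, right, bottom) in first-screen pixels."""
    portion = framebuffer.crop(outline)  # region under the second device
    return portion.resize(target_size, Image.LANCZOS)

# Usage: a phone lying on a 2560x1440 tabletop display.
screen = Image.new("RGB", (2560, 1440), "navy")
preview = select_portion(screen, outline=(800, 400, 1180, 1070),
                         target_size=(380, 670))
```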
An embodiment provides a method, including: displaying content on a display screen of a first device; determining a position of a second device relative to the display screen of the first device; selecting a portion of the display screen of the first device based on a determined position of the second device; and transferring the portion of the display screen selected to one or more devices. Other aspects are described and claimed.1. A method, comprising: displaying content on a display screen of a first device; determining a position of a second device relative to the display screen of the first device; selecting a portion of the display screen of the first device based on a determined position of the second device; and transferring the portion of the display screen selected to one or more devices. 2. The method of claim 1, wherein the selecting comprises selecting a portion of the display screen of the first device based on an outline of the second device. 3. The method of claim 2, wherein the outline of the second device is defined by the outer peripheral edge of the second device. 4. The method of claim 1, wherein the selecting comprises providing a preview of the portion of the display screen of the first device on a display screen of the second device. 5. The method of claim 1, further comprising applying scaling to the portion of the display screen of the first device. 6. The method of claim 1, further comprising adjusting the content of the portion of the display screen of the first device for display on the second device. 7. The method of claim 5, wherein applying scaling comprises scaling the portion of the display screen of the first device selected based on sizing information received at the first device. 8. The method of claim 5, wherein applying scaling comprises scaling the portion of the display screen of the first device selected based on gesture information received at the first device, wherein the gesture information is derived from the second device and is selected from the group consisting of a pinch gesture and a zoom gesture. 9. The method of claim 1, wherein the one or more devices comprises the second device. 10. The method of claim 1, wherein the one or more devices comprises a device associated with the first device by an association selected from the group consisting of a personal area network association and a cloud account association. 11. An information handling device, comprising: a display screen; one or more processors; a memory storing instructions accessible to the one or more processors, the instructions being executable by the one or more processors to: display content on the display screen of the information handling device; determine a position of a second device relative to the display screen of the information handling device; select a portion of the display screen of the information handling device based on a determined position of the second device; and transfer the portion of the display screen selected to one or more devices. 12. The information handling device of claim 11, wherein to select comprises selecting a portion of the display screen of the information handling device based on an outline of the second device. 13. The information handling device of claim 12, wherein the outline of the second device is defined by the outer peripheral edge of the second device. 14. 
The information handling device of claim 11, wherein to select comprises providing a preview of the portion of the display screen of the information handling device on a display screen of the second device. 15. The information handling device of claim 11, wherein the instructions are further executable by the one or more processors to apply scaling to the portion of the display screen of the information handling device. 16. The information handling device of claim 11, wherein the instructions are further executable by the one or more processors to adjust the content of the portion of the display screen of the information handling device for display on the second device. 17. The information handling device of claim 15, wherein to apply scaling comprises scaling the portion of the display screen of the information handling device selected based on sizing information received at the information handling device. 18. The information handling device of claim 15, wherein to apply scaling comprises scaling the portion of the display screen of the information handling device selected based on gesture information received at the information handling device, wherein the gesture information is derived from the second device and is selected from the group consisting of a pinch gesture and a zoom gesture. 19. The information handling device of claim 11, wherein the one or more devices comprises a device associated with the information handling device by an association selected from the group consisting of a personal area network association and a cloud account association. 20. A program product, comprising: a storage medium having computer readable program code stored therewith, the computer readable program code comprising: computer readable program code configured to display content on a display screen of a first device; computer readable program code configured to determine a position of a second device relative to the display screen of the first device; computer readable program code configured to select a portion of the display screen of the first device based on a determined position of the second device; and computer readable program code configured to transfer the portion of the display screen selected to one or more devices.
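Claims 8 and 18 scale the selection from a pinch or zoom gesture reported by the second device. One possible reading, with an assumed gesture payload format:

```python
# Sketch of the gesture-driven scaling in claims 8 and 18: a pinch or zoom
# gesture reported by the second device rescales the selection rectangle.
def apply_gesture_scale(outline, gesture):
    """Grow or shrink the selection rectangle around its center."""
    left, top, right, bottom = outline
    cx, cy = (left + right) / 2, (top + bottom) / 2
    # ratio > 1 for a zoom (spread) gesture, < 1 for a pinch.
    r = gesture["end_distance"] / gesture["start_distance"]
    hw, hh = (right - left) / 2 * r, (bottom - top) / 2 * r
    return (int(cx - hw), int(cy - hh), int(cx + hw), int(cy + hh))

print(apply_gesture_scale((800, 400, 1180, 1070),
                          {"start_distance": 120.0, "end_distance": 180.0}))
```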
2,600
10,497
10,497
14,266,523
2,613
Systems and methods according to various embodiments enable a user to view three-dimensional representations of data objects (“nodes”) within a 3D environment from a first person perspective. The system may be configured to allow the user to interact with the nodes by moving a virtual camera through the 3D environment. The nodes may have one or more attributes that may correspond, respectively, to particular static or dynamic values within the data object's data fields. The attributes may include physical aspects of the nodes, such as color, size, or shape. The system may group related data objects within the 3D environment into clusters that are demarked using one or more cluster designators, which may be in the form of a dome or similar feature that encompasses the related data objects. The system may enable multiple users to access the 3D environment simultaneously, or to record their interactions with the 3D environment.
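Before the claims, a small sketch of the attribute mapping the abstract describes may help: a time-dependent metric drives a node's color and size at each point in time. The green-to-red ramp and the value range are illustrative assumptions, not the application's scheme.

```python
# Sketch of the described attribute mapping: a time-dependent metric value
# is mapped to a node's color and size for rendering in the 3D environment.
def node_attributes(value, vmin=0.0, vmax=100.0):
    t = max(0.0, min(1.0, (value - vmin) / (vmax - vmin)))
    color = (t, 1.0 - t, 0.0)   # green (healthy) to red (loaded), assumed
    size = 0.5 + 1.5 * t        # node radius grows with the metric
    return {"color": color, "size": size}

# A CPU-utilization series rendered over three frames of the 3D view.
for ts, cpu in [(0, 12.0), (1, 55.0), (2, 97.0)]:
    print(ts, node_attributes(cpu))
```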
1. A method, comprising: receiving, using a computer processor, a sequence of time-dependent data values; mapping the sequence of time-dependent data values to an attribute of a three-dimensional object that is available to appear in views of a three-dimensional environment, wherein a presentation of the attribute at a particular point in time corresponds to the time-dependent data value mapped to it at that particular point in time; generating a first view of the three-dimensional environment at a first point in time, wherein the first view of the three-dimensional environment includes a first perspective of the three-dimensional environment from a first location, and wherein the first view includes computer generated graphics; receiving user input indicating a second location in the three-dimensional environment and a second perspective for viewing the three-dimensional environment from the second location, wherein the user input corresponds to a second point in time later than the first point in time; based on the user input, generating a second view of the three-dimensional environment at the second point in time, wherein the second view of the three-dimensional environment includes a second perspective of the three-dimensional environment from the second location, and wherein the second view includes computer generated graphics. 2. The method of claim 1, wherein the three-dimensional environment is rendered on a two-dimensional display. 3. The method of claim 1, wherein the three-dimensional environment is rendered in a virtual reality display. 4. The method of claim 1, wherein the three-dimensional environment is rendered using holograms. 5. The method of claim 1, wherein the computer generated graphics of the first view of the three-dimensional environment and the computer generated graphics of the second view of the three-dimensional environment are presented to a user as superimposed on a portion of a real-world environment in which the user moves, and wherein the user input is generated through one or more sensors configured to track the user's motion. 6. The method of claim 1, wherein receiving the user input comprises receiving navigation information through a keyboard. 7. The method of claim 1, wherein the three-dimensional object is included in the first view and the second view, wherein the attribute of the three-dimensional object in the first view is based on a value in the sequence of time-dependent data values corresponding to the first point in time, and wherein the attribute of the three-dimensional object in the second view is based on a value in the sequence of time-dependent data values corresponding to the second point in time. 8. The method of claim 1, wherein the three-dimensional object is included in the first view and the second view, wherein the attribute of the three-dimensional object in the first view is based on a value in the sequence of time-dependent data values corresponding to the first point in time; wherein the attribute of the three-dimensional object in the second view is based on a value in the sequence of time-dependent data values corresponding to the second point in time; and wherein an appearance of the attribute in the first view is visibly distinct from an appearance of the attribute in the second view. 9. The method of claim 1, wherein the sequence of time-dependent data values corresponds to values for a field in a set of time-stamped, searchable events. 10. 
The method of claim 1, wherein the sequence of time-dependent data values corresponds to values for a field in a set of time-stamped, searchable events, and wherein the field is included in a late-binding schema. 11. The method of claim 1, wherein the sequence of time-dependent data values corresponds to a metric for evaluating performance of a real-world object represented in the three-dimensional environment by the three-dimensional object. 12. The method of claim 1, wherein the sequence of time-dependent data values corresponds to a metric for evaluating performance of a host or virtual machine in an information technology environment, and wherein the three-dimensional object represents the host or virtual machine. 13. The method of claim 1, wherein the sequence of time-dependent data values comprises sensor readings. 14. The method of claim 1, wherein the three-dimensional object includes a cube, a hexahedron, a rhombohedron, a prism, a pyramid, a sphere, a cone, or a cylinder. 15. The method of claim 1, wherein the attribute of the three-dimensional object includes a visual property of the three-dimensional object. 16. The method of claim 1, wherein the attribute of the three-dimensional object includes a shape, color, size, height, width, or depth. 17. The method of claim 1, wherein the attribute of the three-dimensional object includes a material, lighting, texture, transparency, pulsing, or beaconing. 18. The method of claim 1, wherein the attribute of the three-dimensional object includes a sound. 19. The method of claim 1, wherein a second three-dimensional object is included in the first view of the three-dimensional environment and the second view of the three-dimensional environment, wherein a second attribute of the second three-dimensional object in the first view is based on a value in a second sequence of time-dependent data values corresponding to the first point in time; and wherein the second attribute of the second three-dimensional object in the second view is based on a value in the second sequence of time-dependent data values corresponding to the second point in time. 20. The method of claim 1, further comprising causing a display in the first view of the three-dimensional environment and in the second view of the three-dimensional environment of a clustering object, wherein the clustering object includes at least a portion of the three-dimensional object and at least a portion of another three-dimensional object. 21. The method of claim 1, further comprising causing a display in the first view of the three-dimensional environment and in the second view of the three-dimensional environment of a clustering object, wherein the clustering object contains the three-dimensional object and a second three-dimensional object, and wherein the first location corresponding to the first view of the three-dimensional environment is outside the clustering object, and wherein the second location corresponding to the second view of the three-dimensional environment is inside the clustering object. 22. The method of claim 1, further comprising causing a display in the first view of the three-dimensional environment and in the second view of the three-dimensional environment of a clustering object, wherein the clustering object includes a dome or a sphere, and wherein the clustering object contains the three-dimensional object and at least a second three-dimensional object. 23. 
The method of claim 1, further comprising causing a display in the first view of the three-dimensional environment and in the second view of the three-dimensional environment of a clustering object, and wherein the clustering object has an attribute that changes based on states of three-dimensional objects in the clustering object. 24. The method of claim 1, further comprising causing a display in the first view of the three-dimensional environment and in the second view of the three-dimensional environment of a clustering object, and wherein the clustering object pulses or produces a beacon signal based on states of three-dimensional objects in the clustering object. 25. The method of claim 1, further comprising: generating a first clustering object and a second clustering object, each of the first clustering object and the second clustering object containing other objects, and each of the first clustering object and the second clustering object available for viewing in the three-dimensional environment; and generating a third clustering object, the third clustering object containing the first clustering object and the second clustering object, the third clustering object available for viewing in the three-dimensional environment. 26. The method of claim 1, further comprising: generating a first clustering object and a second clustering object, each of the first clustering object and the second clustering object containing other objects, the first clustering object associated with a state based on states of the objects that it contains, the second clustering object associated with a state based on the states of the objects that it contains, and each of the first clustering object and the second clustering object available for viewing along with a visual indication of their associated states during navigation of the three-dimensional environment; generating a third clustering object, the third clustering object containing the first clustering object and the second clustering object, the third clustering object associated with a state based on the states of the first clustering object and the second clustering object, and the third clustering object available for viewing along with a visual indication of its associated state during navigation of the three-dimensional environment. 27. The method of claim 1, further comprising: displaying a sequence of views of the three-dimensional environment, wherein the sequence of views is generated based on navigation input received from a user for navigating the three-dimensional environment, wherein the sequence of views includes the first view and the second view, and wherein at least one of the views includes the three-dimensional object and its attribute. 28. The method of claim 1, further comprising: recording a sequence of views of the three-dimensional environment, wherein the sequence of views is generated based on navigation input received from a user for navigating the three-dimensional environment, wherein the sequence of views includes the first view and the second view, and wherein at least one of the views includes the three-dimensional object and its attribute; and after recording the sequence of views, replaying the sequence of views. 29. 
The method of claim 1, further comprising: causing display of a sequence of views of the three-dimensional environment, wherein the sequence of views is generated based on navigation input received from a user for navigating the three-dimensional environment, wherein the sequence of views includes the first view at the first point in time and the second view at the second point in time, and wherein at least one of the views includes the three-dimensional object and its attribute; and after causing display of the sequence of views of the three-dimensional environment, replicating the mapping of the sequence of time-dependent data values to the attribute of the three-dimensional object over a new time period while enabling the user to navigate the three-dimensional environment during the new time period to generate an alternative sequence of views of the three-dimensional environment that differs from the sequence of views of the three-dimensional environment. 30. The method of claim 1, further comprising: causing display, to a first user, of a sequence of views of the three-dimensional environment, wherein the sequence of views is generated based on navigation input received from the first user for navigating the three-dimensional environment, wherein the sequence of views includes the first view at the first point in time and the second view at the second point in time, and wherein at least one of the views includes the three-dimensional object and its attribute; and concurrently with the display to the first user of the sequence of views of the three-dimensional environment, enabling a second user to navigate the three-dimensional environment and generate an alternative sequence of views of the three-dimensional environment that differs from the sequence of views displayed to the first user. 31. The method of claim 1, further comprising: receiving user input corresponding to selection of the attribute; and causing display of data used to derive the sequence of time-dependent data values mapped to the attribute. 32. The method of claim 1, further comprising: receiving user input corresponding to selection of the three-dimensional object; and causing display of information about the three-dimensional object or causing display of underlying data used to generate the sequence of time-dependent data values mapped to the attribute of the three-dimensional object. 33. The method of claim 1, further comprising: receiving user input specifying a marker that calls attention to the three-dimensional object or an aspect of the three-dimensional object; and causing display of the marker. 34. The method of claim 1, further comprising: receiving user input, from a first user navigating the three-dimensional environment, that specifies a marker that calls attention to the three-dimensional object or an aspect of the three-dimensional object; and causing display of the marker to a second user navigating the three-dimensional environment. 35. 
The method of claim 1, wherein the three-dimensional object is included in the first view and the second view, wherein the attribute of the three-dimensional object in the first view is based on a value in the sequence of time-dependent data values corresponding to the first point in time, wherein the attribute of the three-dimensional object in the second view is based on a value in the sequence of time-dependent data values corresponding to the second point in time, and wherein the method further comprises: displaying in the second view a tracer corresponding to a presentation of the attribute from the first view. 36. A non-transitory computer readable storage medium, storing software instructions, which when executed by one or more processors cause performance of operations of: receiving, using a computer processor, a sequence of time-dependent data values; mapping the sequence of time-dependent data values to an attribute of a three-dimensional object that is available to appear in views of a three-dimensional environment, wherein a presentation of the attribute at a particular point in time corresponds to the time-dependent data value mapped to it at that particular point in time; generating a first view of the three-dimensional environment at a first point in time, wherein the first view of the three-dimensional environment includes a first perspective of the three-dimensional environment from a first location, and wherein the first view includes computer generated graphics; receiving user input indicating a second location in the three-dimensional environment and a second perspective for viewing the three-dimensional environment from the second location, wherein the user input corresponds to a second point in time later than the first point in time; based on the user input, generating a second view of the three-dimensional environment at the second point in time, wherein the second view of the three-dimensional environment includes a second perspective of the three-dimensional environment from the second location, and wherein the second view includes computer generated graphics. 37. 
An apparatus comprising: a subsystem, implemented at least partially in hardware, that receives, using a computer processor, a sequence of time-dependent data values; a subsystem, implemented at least partially in hardware, that maps the sequence of time-dependent data values to an attribute of a three-dimensional object that is available to appear in views of a three-dimensional environment, wherein a presentation of the attribute at a particular point in time corresponds to the time-dependent data value mapped to it at that particular point in time; a subsystem, implemented at least partially in hardware, that generates a first view of the three-dimensional environment at a first point in time, wherein the first view of the three-dimensional environment includes a first perspective of the three-dimensional environment from a first location, and wherein the first view includes computer generated graphics; a subsystem, implemented at least partially in hardware, that receives user input indicating a second location in the three-dimensional environment and a second perspective for viewing the three-dimensional environment from the second location, wherein the user input corresponds to a second point in time later than the first point in time; a subsystem, implemented at least partially in hardware, that, based on the user input, generates a second view of the three-dimensional environment at the second point in time, wherein the second view of the three-dimensional environment includes a second perspective of the three-dimensional environment from the second location, and wherein the second view includes computer generated graphics.
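The nested clustering of claims 23 through 26 amounts to deriving a cluster's state from the states of its members, recursively. A minimal sketch, assuming a worst-of aggregation policy (the application does not specify one):

```python
# Sketch of nested clustering objects (claims 23-26): a cluster's state is
# derived from the states of the objects it contains, and clusters can
# contain other clusters. The worst-of rule below is an assumed policy.
SEVERITY = {"ok": 0, "warning": 1, "critical": 2}

class Cluster:
    def __init__(self, members):
        self.members = members  # node states (strings) or nested Clusters

    @property
    def state(self):
        states = [m.state if isinstance(m, Cluster) else m
                  for m in self.members]
        return max(states, key=SEVERITY.__getitem__)

inner_a = Cluster(["ok", "warning"])
inner_b = Cluster(["ok", "ok"])
outer = Cluster([inner_a, inner_b])  # the "third clustering object"
print(outer.state)                   # -> "warning"
```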
Systems and methods according to various embodiments enable a user to view three-dimensional representations of data objects (“nodes”) within a 3D environment from a first person perspective. The system may be configured to allow the user to interact with the nodes by moving a virtual camera through the 3D environment. The nodes may have one or more attributes that may correspond, respectively, to particular static or dynamic values within the data object's data fields. The attributes may include physical aspects of the nodes, such as color, size, or shape. The system may group related data objects within the 3D environment into clusters that are demarked using one or more cluster designators, which may be in the form of a dome or similar feature that encompasses the related data objects. The system may enable multiple users to access the 3D environment simultaneously, or to record their interactions with the 3D environment.1. A method, comprising: receiving, using a computer processor, a sequence of time-dependent data values; mapping the sequence of time-dependent data values to an attribute of a three-dimensional object that is available to appear in views of a three-dimensional environment, wherein a presentation of the attribute at a particular point in time corresponds to the time-dependent data value mapped to it at that particular point in time; generating a first view of the three-dimensional environment at a first point in time, wherein the first view of the three-dimensional environment includes a first perspective of the three-dimensional environment from a first location, and wherein the first view includes computer generated graphics; receiving user input indicating a second location in the three-dimensional environment and a second perspective for viewing the three-dimensional environment from the second location, wherein the user input corresponds to a second point in time later than the first point in time; based on the user input, generating a second view of the three-dimensional environment at the second point in time, wherein the second view of the three-dimensional environment includes a second perspective of the three-dimensional environment from the second location, and wherein the second view includes computer generated graphics. 2. The method of claim 1, wherein the three-dimensional environment is rendered on a two-dimensional display. 3. The method of claim 1, wherein the three-dimensional environment is rendered in a virtual reality display. 4. The method of claim 1, wherein the three-dimensional environment is rendered using holograms. 5. The method of claim 1, wherein the computer generated graphics of the first view of the three-dimensional environment and the computer generated graphics of the second view of the three-dimensional environment are presented to a user as superimposed on a portion of a real-world environment in which the user moves, and wherein the user input is generated through one or more sensors configured to track the user's motion. 6. The method of claim 1, wherein receiving the user input comprises receiving navigation information through a keyboard. 7. 
The method of claim 1, wherein the three-dimensional object is included in the first view and the second view, wherein the attribute of the three-dimensional object in the first view is based on a value in the sequence of time-dependent data values corresponding to the first point in time, and wherein the attribute of the three-dimensional object in the second view is based on a value in the sequence of time-dependent data values corresponding to the second point in time. 8. The method of claim 1, wherein the three-dimensional object is included in the first view and the second view, wherein the attribute of the three-dimensional object in the first view is based on a value in the sequence of time-dependent data values corresponding to the first point in time; wherein the attribute of the three-dimensional object in the second view is based on a value in the sequence of time-dependent data values corresponding to the second point in time; and wherein an appearance of the attribute in the first view is visibly distinct from an appearance of the attribute in the second view. 9. The method of claim 1, wherein the sequence of time-dependent data values corresponds to values for a field in a set of time-stamped, searchable events. 10. The method of claim 1, wherein the sequence of time-dependent data values corresponds to values for a field in a set of time-stamped, searchable events, and wherein the field is included in a late-binding schema. 11. The method of claim 1, wherein the sequence of time-dependent data values corresponds to a metric for evaluating performance of a real-world object represented in the three-dimensional environment by the three-dimensional object. 12. The method of claim 1, wherein the sequence of time-dependent data values corresponds to a metric for evaluating performance of a host or virtual machine in an information technology environment, and wherein the three-dimensional object represents the host or virtual machine. 13. The method of claim 1, wherein the sequence of time-dependent data values comprises sensor readings. 14. The method of claim 1, wherein the three-dimensional object includes a cube, a hexahedron, a rhombohedron, a prism, a pyramid, a sphere, a cone, or a cylinder. 15. The method of claim 1, wherein the attribute of the three-dimensional object includes a visual property of the three-dimensional object. 16. The method of claim 1, wherein the attribute of the three-dimensional object includes a shape, color, size, height, width, or depth. 17. The method of claim 1, wherein the attribute of the three-dimensional object includes a material, lighting, texture, transparency, pulsing, or beaconing. 18. The method of claim 1, wherein the attribute of the three-dimensional object includes a sound. 19. The method of claim 1, wherein a second three-dimensional object is included in the first view of the three-dimensional environment and the second view of the three-dimensional environment, wherein a second attribute of the second three-dimensional object in the first view is based on a value in a second sequence of time-dependent data values corresponding to the first point in time; and wherein the second attribute of the second three-dimensional object in the second view is based on a value in the second sequence of time-dependent data values corresponding to the second point in time. 20. 
The method of claim 1, further comprising causing a display in the first view of the three-dimensional environment and in the second view of the three-dimensional environment of a clustering object, wherein the clustering object includes at least a portion of the three-dimensional object and at least a portion of another three-dimensional object. 21. The method of claim 1, further comprising causing a display in the first view of the three-dimensional environment and in the second view of the three-dimensional environment of a clustering object, wherein the clustering object contains the three-dimensional object and a second three-dimensional object, and wherein the first location corresponding to the first view of the three-dimensional environment is outside the clustering object, and wherein the second location corresponding to the second view of the three-dimensional environment is inside the clustering object. 22. The method of claim 1, further comprising causing a display in the first view of the three-dimensional environment and in the second view of the three-dimensional environment of a clustering object, wherein the clustering object includes a dome or a sphere, and wherein the clustering object contains the three-dimensional object and at least a second three-dimensional object. 23. The method of claim 1, further comprising causing a display in the first view of the three-dimensional environment and in the second view of the three-dimensional environment of a clustering object, and wherein the clustering object has an attribute that changes based on states of three-dimensional objects in the clustering object. 24. The method of claim 1, further comprising causing a display in the first view of the three-dimensional environment and in the second view of the three-dimensional environment of a clustering object, and wherein the clustering object pulses or produces a beacon signal based on states of three-dimensional objects in the clustering object. 25. The method of claim 1, further comprising: generating a first clustering object and a second clustering object, each of the first clustering object and the second clustering object containing other objects, and each of the first clustering object and the second clustering object available for viewing in the three-dimensional environment; and generating a third clustering object, the third clustering object containing the first clustering object and the second clustering object, the third clustering object available for viewing in the three-dimensional environment. 26. 
The method of claim 1, further comprising: generating a first clustering object and a second clustering object, each of the first clustering object and the second clustering object containing other objects, the first clustering object associated with a state based on states of the objects that it contains, the second clustering object associated with a state based on the states of the objects that it contains, and each of the first clustering object and the second clustering object available for viewing along with a visual indication of their associated states during navigation of the three-dimensional environment; generating a third clustering object, the third clustering object containing the first clustering object and the second clustering object, the third clustering object associated with a state based on the states of the first clustering object and the second clustering object, and the third clustering object available for viewing along with a visual indication of its associated state during navigation of the three-dimensional environment. 27. The method of claim 1, further comprising: displaying a sequence of views of the three-dimensional environment, wherein the sequence of views is generated based on navigation input received from a user for navigating the three-dimensional environment, wherein the sequence of views includes the first view and the second view, and wherein at least one of the views includes the three-dimensional object and its attribute. 28. The method of claim 1, further comprising: recording a sequence of views of the three-dimensional environment, wherein the sequence of views is generated based on navigation input received from a user for navigating the three-dimensional environment, wherein the sequence of views includes the first view and the second view, and wherein at least one of the views includes the three-dimensional object and its attribute; and after recording the sequence of views, replaying the sequence of views. 29. The method of claim 1, further comprising: causing display of a sequence of views of the three-dimensional environment, wherein the sequence of views is generated based on navigation input received from a user for navigating the three-dimensional environment, wherein the sequence of views includes the first view at the first point in time and the second view at the second point in time, and wherein at least one of the views includes the three-dimensional object and its attribute; and after causing display of the sequence of views of the three-dimensional environment, replicating the mapping of the sequence of time-dependent data values to the attribute of the three-dimensional object over a new time period while enabling the user to navigate the three-dimensional environment during the new time period to generate an alternative sequence of views of the three-dimensional environment that differs from the sequence of views of the three-dimensional environment. 30. 
The method of claim 1, further comprising: causing display, to a first user, of a sequence of views of the three-dimensional environment, wherein the sequence of views is generated based on navigation input received from the first user for navigating the three-dimensional environment, wherein the sequence of views includes the first view at the first point in time and the second view at the second point in time, and wherein at least one of the views includes the three-dimensional object and its attribute; and concurrently with the display to the first user of the sequence of views of the three-dimensional environment, enabling a second user to navigate the three-dimensional environment and generate an alternative sequence of views of the three-dimensional environment that differs from the sequence of views displayed to the first user. 31. The method of claim 1, further comprising: receiving user input corresponding to selection of the attribute; and causing display of data used to derive the sequence of time-dependent data values mapped to the attribute. 32. The method of claim 1, further comprising: receiving user input corresponding to selection of the three-dimensional object; and causing display of information about the three-dimensional object or causing display of underlying data used to generate the sequence of time-dependent data values mapped to the attribute of the three-dimensional object. 33. The method of claim 1, further comprising: receiving user input specifying a marker that calls attention to the three-dimensional object or an aspect of the three-dimensional object; and causing display of the marker. 34. The method of claim 1, further comprising: receiving user input, from a first user navigating the three-dimensional environment, that specifies a marker that calls attention to the three-dimensional object or an aspect of the three-dimensional object; and causing display of the marker to a second user navigating the three-dimensional environment. 35. The method of claim 1, wherein the three-dimensional object is included in the first view and the second view, wherein the attribute of the three-dimensional object in the first view is based on a value in the sequence of time-dependent data values corresponding to the first point in time, wherein the attribute of the three-dimensional object in the second view is based on a value in the sequence of time-dependent data values corresponding to the second point in time, and wherein the method further comprises: displaying in the second view a tracer corresponding to a presentation of the attribute from the first view. 36. 
A non-transitory computer readable storage medium, storing software instructions, which when executed by one or more processors cause performance of operations of: receiving, using a computer processor, a sequence of time-dependent data values; mapping the sequence of time-dependent data values to an attribute of a three-dimensional object that is available to appear in views of a three-dimensional environment, wherein a presentation of the attribute at a particular point in time corresponds to the time-dependent data value mapped to it at that particular point in time; generating a first view of the three-dimensional environment at a first point in time, wherein the first view of the three-dimensional environment includes a first perspective of the three-dimensional environment from a first location, and wherein the first view includes computer generated graphics; receiving user input indicating a second location in the three-dimensional environment and a second perspective for viewing the three-dimensional environment from the second location, wherein the user input corresponds to a second point in time later than the first point in time; based on the user input, generating a second view of the three-dimensional environment at the second point in time, wherein the second view of the three-dimensional environment includes a second perspective of the three-dimensional environment from the second location, and wherein the second view includes computer generated graphics. 37. An apparatus comprising: a subsystem, implemented at least partially in hardware, that receives, using a computer processor, a sequence of time-dependent data values; a subsystem, implemented at least partially in hardware, that maps the sequence of time-dependent data values to an attribute of a three-dimensional object that is available to appear in views of a three-dimensional environment, wherein a presentation of the attribute at a particular point in time corresponds to the time-dependent data value mapped to it at that particular point in time; a subsystem, implemented at least partially in hardware, that generates a first view of the three-dimensional environment at a first point in time, wherein the first view of the three-dimensional environment includes a first perspective of the three-dimensional environment from a first location, and wherein the first view includes computer generated graphics; a subsystem, implemented at least partially in hardware, that receives user input indicating a second location in the three-dimensional environment and a second perspective for viewing the three-dimensional environment from the second location, wherein the user input corresponds to a second point in time later than the first point in time; a subsystem, implemented at least partially in hardware, that, based on the user input, generates a second view of the three-dimensional environment at the second point in time, wherein the second view of the three-dimensional environment includes a second perspective of the three-dimensional environment from the second location, and wherein the second view includes computer generated graphics.
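The record-and-replay behavior of claims 28 and 29 can be pictured as storing timestamped (location, perspective) samples of the navigation input and playing them back in order. A toy sketch, with assumed data shapes:

```python
# Sketch of view recording and replay (claims 28-29): navigation input
# yields a sequence of (time, location, perspective) views that can be
# stored and later replayed. The data shapes here are illustrative.
views = []

def record_view(t, location, perspective):
    views.append({"t": t, "loc": location, "persp": perspective})

def replay():
    for v in sorted(views, key=lambda v: v["t"]):
        print(f"t={v['t']}: camera at {v['loc']} facing {v['persp']}")

record_view(0.0, (0, 0, 5), (0, 0, -1))
record_view(1.5, (2, 1, 4), (-0.4, 0, -0.9))
replay()
```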
2,600
10,498
10,498
14,549,505
2,657
A credit risk decision management system and method using voice analytics are disclosed. The voice analysis may be applied to speaker authentication and emotion detection. The system introduces use of voice analysis as a tool for credit assessment, fraud detection and a measure of customer satisfaction and return rate probability when lending to an individual or a group. Emotions in voice interactions during a credit granting process are shown to have high correlation with specific loan outcomes. This system may predict lending outcomes that determine if a customer might face financial difficulty in the near future and ascertain an affordable credit limit for such a customer. Information carrying features are extracted from the customer's voice files, and mathematical and logical transformations are performed on these features to obtain derived features. The data is then fed to a predictive model which captures the probability of default, intent to pay and fraudulent activity involved in a credit transaction. The voice prints can also be transcribed into text, and text analytics can be performed on the data obtained to infer similar lending outcomes using Natural Language Processing and predictive modeling techniques.
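A sketch of the feature pipeline the abstract describes, assuming librosa for the primary features: MFCCs serve as primary features, their first- and second-order deltas as derived features (as in claim 19), and the pooled vector feeds a predictive model. The file name, pooling scheme, and model choice are illustrative assumptions.

```python
# Sketch of the described voice-feature pipeline: primary features (MFCCs)
# plus derived features (first- and second-order deltas), pooled over time
# into one vector per call recording for a downstream predictive model.
import librosa
import numpy as np

def voice_features(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # primary features
    d1 = librosa.feature.delta(mfcc, order=1)           # derived: 1st deriv
    d2 = librosa.feature.delta(mfcc, order=2)           # derived: 2nd deriv
    stacked = np.vstack([mfcc, d1, d2])
    # Aggregate over time into a fixed-length vector per recording.
    return np.concatenate([stacked.mean(axis=1), stacked.std(axis=1)])

# With X as one feature vector per historical call and y as the observed
# loan outcome, any classifier exposing predict_proba could estimate the
# probability of default, e.g. sklearn.linear_model.LogisticRegression:
#   model = LogisticRegression(max_iter=1000).fit(X, y)
#   p = model.predict_proba(voice_features("call.wav").reshape(1, -1))
```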
1. A voice analytic based predictive modeling system, comprising: a processor and a memory; the processor configured to receive information from an entity and third party information about the entity; the processor configured to receive voice recordings from a telephone call with the entity; a voice analyzer component, executed by the processor, that processes the voice recordings of the entity to identify a plurality of features of the entity voice from the voice recordings and generate a plurality of voice feature pieces of data; and a predictor component, executed by the processor, that generates an outcome of an event for the entity based on the voice feature pieces of data, the information from the entity and third party information about the entity. 2. The system of claim 1, wherein the predictor component generates a provisional approval for a loan to the entity based on the loan application from the entity and third party information about the entity. 3. The system of claim 1, wherein the voice analyzer component separates the voice recordings of the entity into one or more voice recording segments. 4. The system of claim 3, wherein the voice analyzer component separates the voice recordings of the entity using a plurality of segmentation processes. 5. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment of a question from an agent and an answer from the entity. 6. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment of a specific dialog in the voice recordings. 7. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment of a phrase in the voice recording. 8. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment based on a frequently used word in the voice recording. 9. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment based on a tag created by an agent during a conversation with the entity. 10. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment based on a tag created by an agent during a conversation with the entity. 11. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment based on a keyword trigger. 12. The system of claim 1, wherein the feature is a reference in the voice recording. 13. The system of claim 1, wherein the voice analyzer component is configured to determine a human emotion based on voice recordings. 14. The system of claim 1, wherein the voice analyzer component is configured to create one of a VIP list and a fraud blacklist. 15. The system of claim 1, wherein the voice analyzer component is configured to transcribe the voice recording into text and analyzes the text. 16. The system of claim 1, wherein the plurality of features further comprises a primary feature and a derived feature. 17. The system of claim 16, wherein the voice analyzer component is configured to generate the derived feature by applying a transformation to the primary feature. 18. 
The system of claim 16, wherein the primary feature is one of a time domain primary feature that captures variations of amplitude of the voice recording in a time domain and a frequency domain primary feature that captures variations of amplitude and phase of the voice recording in a frequency domain. 19. The system of claim 16, wherein the derived feature is one of a derivative of formant frequencies, a first and second order derivative of a Mel Frequency Cepstral Coefficient, a maximum and minimum deviation from mean value, a mean deviation between adjacent samples, a frequency distribution on aggregated deviations and a digital filter. 20. The system of claim 1, wherein the entity is one of an individual and a group of individuals. 21. The system of claim 1, wherein the event is a return of the entity to a business and the voice analyzer component categorizes the voice recordings in real time and generates recommendations for use in a customer care centre. 22. The system of claim 1, wherein the event is a loan to the entity and the information from the entity is a loan application. 23. The system of claim 1, wherein the event is a return of the entity to a business and the information from the entity is a call with customer service. 24. A method for predictive modeling using voice analytics, the method comprising: receiving information from an entity and third party information about the entity; receiving voice recordings from a telephone call with the entity; processing, by a voice analyzer component, the voice recordings of the entity to identify a plurality of features of the entity voice from the voice recordings and generate a plurality of voice feature pieces of data; and generating, by a predictor component, an outcome of an event for the entity based on the voice feature pieces of data, the information from the entity and third party information about the entity. 25. The method of claim 24 further comprising generating a provisional approval for a loan to the entity based on the loan application from the entity and third party information about the entity. 26. The method of claim 24, wherein processing the voice recordings further comprises separating the voice recordings of the entity into one or more voice recording segments. 27. The method of claim 26, wherein separating the voice recordings further comprises separating the voice recordings of the entity using a plurality of segmentation processes. 28. The method of claim 26 further comprising generating a segment of a question from an agent and an answer from the entity. 29. The method of claim 26 further comprising generating a segment of a specific dialog in the voice recordings. 30. The method of claim 26 further comprising generating a segment of a phrase in the voice recording. 31. The method of claim 26 further comprising generating a segment based on a frequently used word in the voice recording. 32. The method of claim 26 further comprising generating a segment based on a tag created by an agent during a conversation with the entity. 33. The method of claim 26 further comprising generating a segment based on a tag created by an agent during a conversation with the entity. 34. The method of claim 26 further comprising generating a segment based on a keyword trigger. 35. The method of claim 24, wherein the feature is a reference in the voice recording. 36. The method of claim 24 further comprising determining a human emotion based on voice recordings. 37. 
The method of claim 24 further comprising creating one of a VIP list and a fraud blacklist based on the features. 38. The method of claim 24, wherein processing the voice recordings further comprises transcribing the voice recording into text and analyzing the text. 39. The method of claim 24, wherein the plurality of features further comprises a primary feature and a derived feature. 40. The method of claim 39 further comprising generating the derived feature by applying a transformation to the primary feature. 41. The method of claim 39, wherein the primary feature is one of a time domain primary feature that captures variations of amplitude of the voice recording in a time domain and a frequency domain primary feature that captures variations of amplitude and phase of the voice recording in a frequency domain. 42. The method of claim 39, wherein the derived feature is one of a derivative of formant frequencies, a first and second order derivative of a Mel Frequency Cepstral Coefficient, a maximum and minimum deviation from a mean value, a mean deviation between adjacent samples, a frequency distribution on aggregated deviations and a digital filter. 43. The method of claim 24, wherein the entity is one of an individual and a group of individuals. 44. The method of claim 24, wherein the event is a return of the entity to a business and further comprising categorizing the voice recordings in real time and generating a recommendation for use in a customer care centre. 45. The method of claim 24, wherein the event is a loan to the entity and the information from the entity is a loan application. 46. The method of claim 24, wherein the event is a return of the entity to a business and the information from the entity is a call with customer service.
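For concreteness, the primary/derived feature split of claims 16-19 and 39-42 can be read as computing spectral features and then transforming them. The sketch below is an illustrative reading only, assuming the librosa library; the function name `extract_features` and all parameter values are assumptions, not taken from the application.

```python
# Minimal sketch of claims 16-19: primary features plus derived features
# obtained by transforming them. Assumes librosa; names/values are illustrative.
import numpy as np
import librosa

def extract_features(path: str) -> np.ndarray:
    """Return one vector of primary and derived features per recording."""
    y, sr = librosa.load(path, sr=8000)  # telephone-band audio

    # Primary feature (claim 18): frequency-domain Mel Frequency Cepstral Coefficients.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Derived features (claim 19): first- and second-order MFCC derivatives.
    d1 = librosa.feature.delta(mfcc, order=1)
    d2 = librosa.feature.delta(mfcc, order=2)

    # Further derived features: maximum/minimum deviation from the mean value
    # and mean deviation between adjacent samples.
    dev = mfcc - mfcc.mean(axis=1, keepdims=True)
    stats = np.concatenate([
        dev.max(axis=1),
        dev.min(axis=1),
        np.abs(np.diff(mfcc, axis=1)).mean(axis=1),
    ])
    return np.concatenate([mfcc.mean(axis=1), d1.mean(axis=1), d2.mean(axis=1), stats])
```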
A credit risk decision management system and method using voice analytics are disclosed. The voice analysis may be applied to speaker authentication and emotion detection. The system introduces the use of voice analysis as a tool for credit assessment, fraud detection and a measure of customer satisfaction and return-rate probability when lending to an individual or a group. Emotions in voice interactions during a credit granting process are shown to have a high correlation with specific loan outcomes. This system may predict lending outcomes that determine whether a customer might face financial difficulty in the near future and may ascertain an affordable credit limit for such a customer. Information-carrying features are extracted from the customer's voice files, and mathematical and logical transformations are performed on these features to obtain derived features. The data is then fed to a predictive model which captures the probability of default, intent to pay and fraudulent activity involved in a credit transaction. The voice prints can also be transcribed into text, and text analytics can be performed on the data obtained to infer similar lending outcomes using Natural Language Processing and predictive modeling techniques. 1. A voice-analytics-based predictive modeling system, comprising: a processor and a memory; the processor configured to receive information from an entity and third party information about the entity; the processor configured to receive voice recordings from a telephone call with the entity; a voice analyzer component, executed by the processor, that processes the voice recordings of the entity to identify a plurality of features of the entity's voice from the voice recordings and generate a plurality of voice feature pieces of data; and a predictor component, executed by the processor, that generates an outcome of an event for the entity based on the voice feature pieces of data, the information from the entity and the third party information about the entity. 2. The system of claim 1, wherein the predictor component generates a provisional approval for a loan to the entity based on a loan application from the entity and the third party information about the entity. 3. The system of claim 1, wherein the voice analyzer component separates the voice recordings of the entity into one or more voice recording segments. 4. The system of claim 3, wherein the voice analyzer component separates the voice recordings of the entity using a plurality of segmentation processes. 5. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment of a question from an agent and an answer from the entity. 6. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment of a specific dialog in the voice recordings. 7. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment of a phrase in the voice recording. 8. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment based on a frequently used word in the voice recording. 9. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment based on a tag created by an agent during a conversation with the entity. 10. 
The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment based on a tag created by an agent during a conversation with the entity. 11. The system of claim 4, wherein the plurality of segmentation processes further comprise the voice analyzer component generating a segment based on a keyword trigger. 12. The system of claim 1, wherein the feature is a reference in the voice recording. 13. The system of claim 1, wherein the voice analyzer component is configured to determine a human emotion based on the voice recordings. 14. The system of claim 1, wherein the voice analyzer component is configured to create one of a VIP list and a fraud blacklist. 15. The system of claim 1, wherein the voice analyzer component is configured to transcribe the voice recording into text and analyze the text. 16. The system of claim 1, wherein the plurality of features further comprises a primary feature and a derived feature. 17. The system of claim 16, wherein the voice analyzer component is configured to generate the derived feature by applying a transformation to the primary feature. 18. The system of claim 16, wherein the primary feature is one of a time domain primary feature that captures variations of amplitude of the voice recording in a time domain and a frequency domain primary feature that captures variations of amplitude and phase of the voice recording in a frequency domain. 19. The system of claim 16, wherein the derived feature is one of a derivative of formant frequencies, a first and second order derivative of a Mel Frequency Cepstral Coefficient, a maximum and minimum deviation from a mean value, a mean deviation between adjacent samples, a frequency distribution on aggregated deviations and a digital filter. 20. The system of claim 1, wherein the entity is one of an individual and a group of individuals. 21. The system of claim 1, wherein the event is a return of the entity to a business and the voice analyzer component categorizes the voice recordings in real time and generates a recommendation for use in a customer care centre. 22. The system of claim 1, wherein the event is a loan to the entity and the information from the entity is a loan application. 23. The system of claim 1, wherein the event is a return of the entity to a business and the information from the entity is a call with customer service. 24. A method for predictive modeling using voice analytics, the method comprising: receiving information from an entity and third party information about the entity; receiving voice recordings from a telephone call with the entity; processing, by a voice analyzer component, the voice recordings of the entity to identify a plurality of features of the entity's voice from the voice recordings and generate a plurality of voice feature pieces of data; and generating, by a predictor component, an outcome of an event for the entity based on the voice feature pieces of data, the information from the entity and the third party information about the entity. 25. The method of claim 24 further comprising generating a provisional approval for a loan to the entity based on a loan application from the entity and the third party information about the entity. 26. The method of claim 24, wherein processing the voice recordings further comprises separating the voice recordings of the entity into one or more voice recording segments. 27. 
The method of claim 26, wherein separating the voice recordings further comprises separating the voice recordings of the entity using a plurality of segmentation processes. 28. The method of claim 26 further comprising generating a segment of a question from an agent and an answer from the entity. 29. The method of claim 26 further comprising generating a segment of a specific dialog in the voice recordings. 30. The method of claim 26 further comprising generating a segment of a phrase in the voice recording. 31. The method of claim 26 further comprising generating a segment based on a frequently used word in the voice recording. 32. The method of claim 26 further comprising generating a segment based on a tag created by an agent during a conversation with the entity. 33. The method of claim 26 further comprising generating a segment based on a tag created by an agent during a conversation with the entity. 34. The method of claim 26 further comprising generating a segment based on a keyword trigger. 35. The method of claim 24, wherein the feature is a reference in the voice recording. 36. The method of claim 24 further comprising determining a human emotion based on the voice recordings. 37. The method of claim 24 further comprising creating one of a VIP list and a fraud blacklist based on the features. 38. The method of claim 24, wherein processing the voice recordings further comprises transcribing the voice recording into text and analyzing the text. 39. The method of claim 24, wherein the plurality of features further comprises a primary feature and a derived feature. 40. The method of claim 39 further comprising generating the derived feature by applying a transformation to the primary feature. 41. The method of claim 39, wherein the primary feature is one of a time domain primary feature that captures variations of amplitude of the voice recording in a time domain and a frequency domain primary feature that captures variations of amplitude and phase of the voice recording in a frequency domain. 42. The method of claim 39, wherein the derived feature is one of a derivative of formant frequencies, a first and second order derivative of a Mel Frequency Cepstral Coefficient, a maximum and minimum deviation from a mean value, a mean deviation between adjacent samples, a frequency distribution on aggregated deviations and a digital filter. 43. The method of claim 24, wherein the entity is one of an individual and a group of individuals. 44. The method of claim 24, wherein the event is a return of the entity to a business and further comprising categorizing the voice recordings in real time and generating a recommendation for use in a customer care centre. 45. The method of claim 24, wherein the event is a loan to the entity and the information from the entity is a loan application. 46. The method of claim 24, wherein the event is a return of the entity to a business and the information from the entity is a call with customer service.
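Downstream of feature extraction, claims 1 and 24 have the predictor component combine the voice features with the applicant's own information and third party data. The following is a hedged sketch of that fusion step, assuming scikit-learn and synthetic stand-in arrays; all shapes, names and the choice of logistic regression are assumptions rather than anything stated in the application.

```python
# Toy sketch of the predictor component: voice features are concatenated with
# application fields and third-party (bureau) data, then fed to a
# probability-of-default classifier. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_voice = rng.normal(size=(500, 52))   # stand-in for extracted voice features
X_app = rng.normal(size=(500, 8))      # stand-in for loan-application fields
X_bureau = rng.normal(size=(500, 4))   # stand-in for third-party data
y = rng.integers(0, 2, size=500)       # historical outcome: 1 = default

X = np.hstack([X_voice, X_app, X_bureau])
model = LogisticRegression(max_iter=1000).fit(X, y)

# Predicted probability of default for one applicant (first row reused here).
p_default = model.predict_proba(X[:1])[0, 1]
print(f"probability of default: {p_default:.2f}")
```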
2,600
10,499
10,499
15,885,385
2,637
A passive optical network having an optical-signal monitor configured to monitor carrier-wavelength drifts during optical bursts transmitted between the optical line terminal and optical network units thereof. In an example embodiment, the optical-signal monitor uses heterodyne beating between two differently delayed portions of an optical burst to generate an estimate of the carrier-wavelength drift during that optical burst. The passive optical network may also include an electronic controller configured to use the estimates generated by the optical-signal monitor to make configuration changes at the optical network units and/or implement other control measures directed at reducing to an acceptable level the amounts of carrier-wavelength drift during the optical bursts and/or mitigating some adverse effects thereof.
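As a gloss on the self-heterodyne scheme (a derivation for intuition, not text from the filing): mixing a burst with a copy of itself delayed by $\tau$ yields a photocurrent beating at the instantaneous frequency difference, so the beat frequency encodes the drift rate of the carrier frequency $\nu(t)$, and a frequency drift maps to a wavelength drift in the usual way:

$$
f_{\text{beat}}(t) = \bigl|\nu(t) - \nu(t-\tau)\bigr| \approx \tau \left|\frac{d\nu}{dt}\right|,
\qquad
|\Delta\lambda| \approx \frac{\lambda^{2}}{c}\,|\Delta\nu|.
$$

Integrating $f_{\text{beat}}/\tau$ over the burst duration gives the total carrier-frequency drift $\Delta\nu$, which the second relation converts to the carrier-wavelength drift reported by the monitor.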
1. An apparatus comprising an optical receiver, the optical receiver having an optical burst monitor and being configured to receive data-modulated optical signals from a plurality of optical transmitters operable in optical burst mode, wherein the optical burst monitor comprises: an optical mixer configured to generate an optical output signal by optically mixing a first portion of an optical burst and a second portion of the optical burst, the second portion being delayed with respect to the first portion; a photodetector configured to convert the optical output signal into a corresponding electrical signal; and a signal processor configured to generate an estimate of carrier-wavelength drift during the optical burst based on a beat frequency of said corresponding electrical signal. 2. The apparatus of claim 1, wherein the optical mixer comprises: a first optical coupler configured to power-split light of the optical burst to generate the first and second portions thereof; a delay element configured to delay the second portion with respect to the first portion; and a second optical coupler configured to power-combine the first portion and the second portion delayed by the delay element to generate the optical output signal. 3. The apparatus of claim 2, wherein the delay element is controllably tunable to change a relative delay time between the first and second portions. 4. The apparatus of claim 2, wherein the optical burst monitor further comprises an optical band-pass filter connected to filter the light to apply the optical burst to the first optical coupler. 5. The apparatus of claim 4, wherein the optical band-pass filter is controllably tunable to change a pass band thereof to allow for sequential testing of carrier-wavelength drifts of different ones of the respective optical transmitters. 6. The apparatus of claim 1, wherein the signal processor is configured to generate the estimate of the carrier-wavelength drift using one or more of the following: a set of values representing the beat frequency; an estimate of a minimum carrier wavelength during the optical burst; an estimate of a maximum carrier wavelength during the optical burst; and an estimate of a magnitude of the carrier-wavelength drift during the optical burst. 7. The apparatus of claim 1, further comprising an electronic controller configured to generate one or more control signals in response to receiving from the optical burst monitor an input signal indicative of the estimate of the carrier-wavelength drift, the one or more control signals including a control signal directed to a corresponding one of the optical transmitters. 8. The apparatus of claim 7, wherein the control signal directed to the corresponding one of the optical transmitters is configured to change an operating configuration of a laser source thereof. 9. The apparatus of claim 8, wherein a change of the operating configuration includes a change for the corresponding one of the optical transmitters of one or more of the following: duration of an optical burst; duration of an inter-burst interval; a carrier wavelength to which the corresponding one of the optical transmitters is tuned at a beginning of an optical burst; a data modulation format used during an optical burst; a duty cycle used during an optical burst; a baud rate used during an optical burst; one or more laser bias voltages; one or more laser injection currents; and an optical power of an optical burst. 10. 
The apparatus of claim 7, wherein the one or more control signals include a control signal configured to change a TDMA transmit schedule for at least some of the optical transmitters. 11. The apparatus of claim 7, wherein the electronic controller is configured to generate the one or more control signals in a manner that causes a carrier-wavelength drift during an optical burst to be smaller than a fixed threshold. 12. The apparatus of claim 1, further comprising an optical line terminal that comprises the optical receiver. 13. The apparatus of claim 12, wherein the optical line terminal comprises a WDM receiver configured to recover data encoded in the optical bursts. 14. The apparatus of claim 12, further comprising a passive optical router having a first optical port and a plurality of second optical ports, the first optical port being externally connected to the optical line terminal, and each of the second optical ports being externally connected to a respective one of the plurality of optical transmitters. 15. The apparatus of claim 1, wherein: the first portion is a first attenuated copy of the optical burst; and the second portion is a second attenuated copy of the optical burst. 16. The apparatus of claim 3, wherein the delay element is controllably tunable to change the relative delay time such that the relative delay time is between ten and one hundred signaling intervals. 17. The apparatus of claim 1, wherein the optical receiver is configured to receive the data-modulated optical signals from the plurality of optical transmitters in accordance with a TDMA transmit schedule.
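In code, the signal-processor step of claim 1 reduces to locating the dominant beat tone in the photodetector output and dividing by the known delay. The following toy simulation is a sketch under stated assumptions; every parameter value is illustrative, not taken from the application.

```python
# Estimate carrier-frequency drift rate from the beat tone of a self-heterodyne
# burst monitor. Idealized, noise-free simulation with illustrative values.
import numpy as np

fs = 1e9                          # photodetector sampling rate, Hz (assumed)
tau = 50e-9                       # relative delay between the two burst portions, s
t = np.arange(0, 2e-6, 1 / fs)    # one 2-microsecond burst

drift_rate = 2e14                 # simulated carrier-frequency drift, Hz/s
f_beat_true = tau * drift_rate    # expected beat frequency: 10 MHz
signal = np.cos(2 * np.pi * f_beat_true * t)  # idealized photocurrent beat tone

# Beat frequency = strongest non-DC peak of the windowed spectrum.
spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(t))))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
f_beat = freqs[1:][np.argmax(spectrum[1:])]

est_drift_rate = f_beat / tau     # Hz of carrier drift per second of burst
print(f"estimated drift rate: {est_drift_rate:.3e} Hz/s")
```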
A passive optical network having an optical-signal monitor configured to monitor carrier-wavelength drifts during optical bursts transmitted between the optical line terminal and optical network units thereof. In an example embodiment, the optical-signal monitor uses heterodyne beating between two differently delayed portions of an optical burst to generate an estimate of the carrier-wavelength drift during that optical burst. The passive optical network may also include an electronic controller configured to use the estimates generated by the optical-signal monitor to make configuration changes at the optical network units and/or implement other control measures directed at reducing to an acceptable level the amounts of carrier-wavelength drift during the optical bursts and/or mitigating some adverse effects thereof. 1. An apparatus comprising an optical receiver, the optical receiver having an optical burst monitor and being configured to receive data-modulated optical signals from a plurality of optical transmitters operable in optical burst mode, wherein the optical burst monitor comprises: an optical mixer configured to generate an optical output signal by optically mixing a first portion of an optical burst and a second portion of the optical burst, the second portion being delayed with respect to the first portion; a photodetector configured to convert the optical output signal into a corresponding electrical signal; and a signal processor configured to generate an estimate of carrier-wavelength drift during the optical burst based on a beat frequency of said corresponding electrical signal. 2. The apparatus of claim 1, wherein the optical mixer comprises: a first optical coupler configured to power-split light of the optical burst to generate the first and second portions thereof; a delay element configured to delay the second portion with respect to the first portion; and a second optical coupler configured to power-combine the first portion and the second portion delayed by the delay element to generate the optical output signal. 3. The apparatus of claim 2, wherein the delay element is controllably tunable to change a relative delay time between the first and second portions. 4. The apparatus of claim 2, wherein the optical burst monitor further comprises an optical band-pass filter connected to filter the light to apply the optical burst to the first optical coupler. 5. The apparatus of claim 4, wherein the optical band-pass filter is controllably tunable to change a pass band thereof to allow for sequential testing of carrier-wavelength drifts of different ones of the respective optical transmitters. 6. The apparatus of claim 1, wherein the signal processor is configured to generate the estimate of the carrier-wavelength drift using one or more of the following: a set of values representing the beat frequency; an estimate of a minimum carrier wavelength during the optical burst; an estimate of a maximum carrier wavelength during the optical burst; and an estimate of a magnitude of the carrier-wavelength drift during the optical burst. 7. The apparatus of claim 1, further comprising an electronic controller configured to generate one or more control signals in response to receiving from the optical burst monitor an input signal indicative of the estimate of the carrier-wavelength drift, the one or more control signals including a control signal directed to a corresponding one of the optical transmitters. 8. 
The apparatus of claim 7, wherein the control signal directed to the corresponding one of the optical transmitters is configured to change an operating configuration of a laser source thereof. 9. The apparatus of claim 8, wherein a change of the operating configuration includes a change for the corresponding one of the optical transmitters of one or more of the following: duration of an optical burst; duration of an inter-burst interval; a carrier wavelength to which the corresponding one of the optical transmitters is tuned at a beginning of an optical burst; a data modulation format used during an optical burst; a duty cycle used during an optical burst; a baud rate used during an optical burst; one or more laser bias voltages; one or more laser injection currents; and an optical power of an optical burst. 10. The apparatus of claim 7, wherein the one or more control signals include a control signal configured to change a TDMA transmit schedule for at least some of the optical transmitters. 11. The apparatus of claim 7, wherein the electronic controller is configured to generate the one or more control signals in a manner that causes a carrier-wavelength drift during an optical burst to be smaller than a fixed threshold. 12. The apparatus of claim 1, further comprising an optical line terminal that comprises the optical receiver. 13. The apparatus of claim 12, wherein the optical line terminal comprises a WDM receiver configured to recover data encoded in the optical bursts. 14. The apparatus of claim 12, further comprising a passive optical router having a first optical port and a plurality of second optical ports, the first optical port being externally connected to the optical line terminal, and each of the second optical ports being externally connected to a respective one of the plurality of optical transmitters. 15. The apparatus of claim 1, wherein: the first portion is a first attenuated copy of the optical burst; and the second portion is a second attenuated copy of the optical burst. 16. The apparatus of claim 3, wherein the delay element is controllably tunable to change the relative delay time such that the relative delay time is between ten and one hundred signaling intervals. 17. The apparatus of claim 1, wherein the optical receiver is configured to receive the data-modulated optical signals from the plurality of optical transmitters in accordance with a TDMA transmit schedule.
2,600